I'm running GoogleNet (Caffe model) on the DSP using SNPE 1.12 on a
Samsung Galaxy S8+ (Android 7.0, CPU: octa-core (4 x 2.3 GHz Kryo & 4 x 1.9 GHz Kryo), GPU: Adreno 540, DSP: Hexagon 682).
I observed that GoogleNet runs at 38.4 images/sec. However, when I set the performance profile to "NeuralNetwork.PerformanceProfile.HIGH_PERFORMANCE", I do not see any improvement. Is there something I'm missing?
Thanks
High performance mode for DSP results in the DSP attempting to run at higher clock rates and increased memory bandwidth. For some networks there can be a significant difference, and for others not as much.
Did you use the benchmarking tool? Did you run multiple inferences in your test or just one?
Thanks for the response Jesliger.
1. Did you use the benchmarking tool?
I'm using the Android example provided in the snpe-sdk and timing this function.
2. Did you run multiple inferences in your test or just one?
I'm running 101 iterations and averaging the performance, omitting the first (warm-up) run.
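For reference, the timing pattern described above (101 iterations, discard the first warm-up run, average the rest) can be sketched in Python. `run_inference` here is a hypothetical stand-in for whatever inference call you are timing, not an SNPE API:

```python
import time

def benchmark(run_inference, iterations=101):
    """Time run_inference over `iterations` runs, discard the first
    (warm-up) run, and return the average throughput in images/sec
    assuming a single-image batch per call."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - start)
    steady = timings[1:]  # omit the first (warm-up) run
    avg_sec = sum(steady) / len(steady)
    return 1.0 / avg_sec

if __name__ == "__main__":
    # Dummy workload standing in for a real inference call:
    rate = benchmark(lambda: time.sleep(0.001), iterations=11)
    print(f"{rate:.1f} images/sec")
```

Omitting the first run matters because it typically includes one-time costs (runtime warm-up, cache population) that would skew the average.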
3. For some networks there can be a significant difference, and for others not as much.
Could you point me to one such network where I can see the difference?
Hi manasa,
Do you need to root the phone to use the DSP runtime on Samsung phones?
Sorry for asking an unrelated question.
Hi raingomm,
I was able to use the DSP before I updated the Samsung device. After the update, it looks like the DSP is not accessible anymore.
I did not try rooting the device, though.
Thanks
Manasa
Hi,
Yes, the benchmarking tool provided by the SNPE SDK reports the inference time of each individual layer in the network architecture and how the layers perform on the different runtimes.
Performance profiles like "HIGH_PERFORMANCE" improve performance at a cost, and that cost can be time, power, or resources. In the "HIGH_PERFORMANCE" case the gain comes at the cost of power, since the DSP is run at higher clock rates.
You mentioned that you did not find a significant difference between runtimes; we suggest comparing the per-layer inference time of the network on the different runtimes. What makes SNPE effective is its optimized way of computing, so one possible explanation for your observation is that these layers have no alternative implementation that is more efficient than the existing one.
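The per-layer comparison suggested above can be sketched as follows. The layer names and timings are made-up illustrative numbers, not measurements from GoogleNet or the SNPE benchmark output:

```python
def compare_layer_timings(cpu_us, dsp_us):
    """Given per-layer timings in microseconds for two runtimes,
    return (layer, speedup) pairs, where speedup = cpu time / dsp time,
    sorted so the layers that benefit least from the DSP come first."""
    speedups = {layer: cpu_us[layer] / dsp_us[layer] for layer in cpu_us}
    return sorted(speedups.items(), key=lambda kv: kv[1])

# Hypothetical per-layer timings, for illustration only:
cpu = {"conv1": 900.0, "pool1": 120.0, "fc8": 300.0}
dsp = {"conv1": 150.0, "pool1": 100.0, "fc8": 280.0}
for layer, speedup in compare_layer_timings(cpu, dsp):
    print(f"{layer}: {speedup:.2f}x")
```

Layers at the top of this list (low speedup) are the ones with no substantially better DSP implementation, which is where a higher clock rate may not translate into higher overall throughput.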