Hello,
I am currently using the Snapdragon NPE Android example with a converted TensorFlow model on a Lenovo PB2-690M. I switched the runtime from CPU to GPU for the network inference, but I get more or less the same execution time as on the CPU. The execution time is also very slow: the network is shorter than Inception, yet it only reaches about 1.2 inferences per second.
Best Regards,
Simon
Which version of SNPE are you using? Can you please try another one?
Btw, what chipset does the PB2-690M use?
Thank you for the reply! I am using SNPE version 1.8.0, which should be the current one. The hardware of the smartphone is:
It seems I fixed the problem. In the tutorial source I had to set GPU or CPU explicitly in LoadNetworkTask.java, along the lines of the sketch below:
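For reference, a minimal sketch of the kind of change meant here, assuming the SNPE NeuralNetworkBuilder Java API used by the image classifiers example; the GpuNetworkLoader wrapper and dlcFile parameter are illustrative only, and exact builder method signatures may differ between SDK versions:

```java
import java.io.File;
import java.io.IOException;

import android.app.Application;

import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

// Illustrative helper; in the example this logic sits inside LoadNetworkTask.java.
final class GpuNetworkLoader {

    static NeuralNetwork load(final Application application, final File dlcFile)
            throws IOException {
        return new SNPE.NeuralNetworkBuilder(application)
                // Force the GPU runtime here instead of relying on the runtime
                // selected in the app's UI, which was being ignored.
                .setRuntimeOrder(NeuralNetwork.Runtime.GPU)
                // Load the converted DLC model file.
                .setModel(dlcFile)
                .build();
    }
}
```

In other words, hard-coding the desired runtime in the builder chain sidesteps the UI selection that was not being applied.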
Setting CPU/GPU inside the app does not work; it always chooses the CPU no matter what has been selected. Compared to the CPU, the GPU is on average about twice as fast with a relatively small network (5 MB, non-quantized).
Best Regards,
Simon