Hi,
I have recently tested my models on the most recent phones (OnePlus 9 Pro and Motorola Moto G200). I have noticed that models that performed quite well before with the NNAPI are really slow on these new phones. I guess this happens because neither the Hexagon delegate nor the NNAPI can be used with the Snapdragon 888. Is there any plan to support the NNAPI, or is there a good tutorial on using SNPE with TensorFlow Lite on Android?
Dear customer,
The NNAPI is maintained by Google, while SNPE is developed by Qualcomm. The NNAPI will run the model on the Android CPU runtime, rather than on the multiple cores SNPE supports, such as the CPU, DSP or GPU.
I'm not sure whether we have any plan to support NNHAL on the SNPE engine.
BR.
Wei
Hi Wei,
Thank you for your reply. I want to ask a naive question, since I am not very familiar with this: under what circumstances should we use the NNAPI versus SNPE? I noticed that during app development in Java you can include the TensorFlow/PyTorch package to use the NNAPI. So I assume the NNAPI will automatically distribute the computation across the CPU, GPU and DSP (https://developer.qualcomm.com/qualcomm-qcs610-development-kit/learning-...)? Why did you mention that the "NNAPI will perform the model on Android CPU runtime instead of SNPE supported multi-core like as CPU, DSP or GPU"?
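For context, this is roughly how I am enabling the NNAPI through TensorFlow Lite in my Java app (a minimal sketch; the model buffer and class name are placeholders, not from a real project):

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

import java.nio.MappedByteBuffer;

public class NnApiExample {
    // modelBuffer is assumed to be a memory-mapped .tflite model file
    static Interpreter buildInterpreter(MappedByteBuffer modelBuffer) {
        // Ask TFLite to dispatch supported ops through the NNAPI;
        // the device's NNAPI driver then decides which accelerator
        // (CPU, GPU, DSP/NPU) actually executes them.
        NnApiDelegate nnApiDelegate = new NnApiDelegate();
        Interpreter.Options options = new Interpreter.Options()
                .addDelegate(nnApiDelegate);
        return new Interpreter(modelBuffer, options);
    }
}
```

My understanding is that when no vendor NNAPI driver covers an op, it falls back to a CPU implementation, which might explain the slowdown I am seeing.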
With SNPE, I can convert different models to a DLC file, quantize them, and then run them on the device. It seems to me that the functionality of SNPE and the NNAPI is similar. I am a bit confused about which one I should use for testing DL models on devices.
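For reference, my current SNPE workflow looks roughly like this (a sketch based on the SNPE SDK command-line tools; all file names, node names and input dimensions are placeholders for my actual model):

```shell
# Convert a frozen TensorFlow graph to a DLC file
snpe-tensorflow-to-dlc --input_network model.pb \
                       --input_dim input "1,224,224,3" \
                       --out_node output \
                       --output_path model.dlc

# Quantize the DLC to 8-bit using a list of representative raw inputs,
# which is required for the DSP runtime
snpe-dlc-quantize --input_dlc model.dlc \
                  --input_list input_list.txt \
                  --output_dlc model_quantized.dlc

# Run the quantized model on-device on the DSP runtime
snpe-net-run --container model_quantized.dlc \
             --input_list input_list.txt \
             --use_dsp
```

So with SNPE I pick the runtime (CPU, GPU or DSP) explicitly, whereas with the NNAPI the driver chooses for me.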
Best,
Yaoshen
Hi Alessandro,
Thank you for your question. Do you know what the actual differences between the NNAPI and SNPE are? Why should we use SNPE when the NNAPI can be used in Android development? I am a bit confused about these two runtimes. They seem to have similar functions.
Thanks,
Yaoshen