Thank you for providing the awesome SDK. I'm developing an Android app with SDK version 1.45.3.2435. Following the Android tutorial document, I was able to run inference on my Android device's GPU. However, its performance is not sufficient for my use case, and I would like to accelerate inference. So I tried to use UserBuffer as described in the performance tips document.
The Android tutorial document is here: https://developer.qualcomm.com/docs/snpe/android_tutorial.html
The performance tips document is here: https://developer.qualcomm.com/docs/snpe/prog_performance.html
I can find the TF8UserBufferTensor class, which extends the UserBufferTensor class, in the Java API documentation. But there is no FloatUserBufferTensor in the documentation, and no method to instantiate a UserBufferTensor with the FLOAT encoding type. How can I use a UserBufferTensor with the FLOAT type from the Java API?
I would appreciate a reply.
Please use FloatTensor for this:
https://developer.qualcomm.com/docs/snpe/group__java__apis.html#classcom...
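For reference, a minimal sketch of running inference with FloatTensor via the SNPE Java API. The input tensor name "input:0" and the use of a DLC model file are assumptions; substitute the names from your own network:

```java
import android.app.Application;
import com.qualcomm.qti.snpe.FloatTensor;
import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FloatTensorExample {

    // Sketch: load a DLC model, feed a FLOAT-encoded input tensor,
    // and run inference on the GPU. "input:0" is a placeholder name;
    // use the actual input layer name of your model.
    static Map<String, FloatTensor> runInference(Application app,
                                                 File dlcFile,
                                                 float[] inputData) throws IOException {
        NeuralNetwork network = new SNPE.NeuralNetworkBuilder(app)
                .setModel(dlcFile)
                .setRuntimeOrder(NeuralNetwork.Runtime.GPU)
                .build();

        // createFloatTensor allocates a tensor with FLOAT encoding,
        // so no separate FloatUserBufferTensor class is needed.
        int[] inputShape = network.getInputTensorsShapes().get("input:0");
        FloatTensor input = network.createFloatTensor(inputShape);
        input.write(inputData, 0, inputData.length);

        Map<String, FloatTensor> inputs = new HashMap<>();
        inputs.put("input:0", input);

        Map<String, FloatTensor> outputs = network.execute(inputs);

        input.release();
        return outputs;
    }
}
```

This requires the SNPE Android AAR on the app's classpath and a device with a supported GPU runtime, so it cannot run standalone on a desktop JVM.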
thanks,
Steven