Hi,
I have converted a Caffe-based MobileNet-SSD model to a DLC file, and I have modified the Android app available in the SDK to run my model.
For the same image, the output on the Snapdragon 820 dev kit differs from the output on the PC. Running the Caffe model on the PC, I get a maximum confidence score of 90%, but running the converted DLC file on the Snapdragon 820 dev kit, I get a maximum confidence score of only 15%.
I was executing the DLC file on the GPU runtime with the CPU fallback option enabled.
Does anybody know how to improve the accuracy of the DLC file by providing more information during model conversion?
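(Editor's note: in threads like this, a PC-vs-device confidence collapse is very often a preprocessing mismatch rather than a conversion problem. Below is a minimal, hypothetical sketch of replicating Caffe-style normalization before feeding the network on device; the mean/scale values shown are the ones commonly seen in MobileNet-SSD Caffe deploy prototxts, not values taken from this poster's model, so check your own `transform_param`.)

```python
import numpy as np

def preprocess_bgr_image(img_bgr, mean=127.5, scale=0.007843):
    """Apply Caffe-style normalization to a BGR image.

    mean=127.5 and scale=0.007843 (~1/127.5) are the values commonly
    used with MobileNet-SSD Caffe models (assumption -- verify against
    your own prototxt). If the on-device app skips or changes this
    step, confidence scores can drop exactly as described above.
    """
    img = img_bgr.astype(np.float32)
    return (img - mean) * scale

# Example with a dummy 300x300 BGR frame (300x300 is the usual
# MobileNet-SSD input size; adjust to your model):
frame = np.full((300, 300, 3), 127.5, dtype=np.float32)
out = preprocess_bgr_image(frame)
print(out.shape)  # (300, 300, 3), zero-centered values
```

Comparing the exact float buffer fed to Caffe on the PC against the buffer fed to SNPE on device is a quick way to rule this out.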
It really surprises me that nobody has replied to the issue I am facing. I guess the Snapdragon Neural Processing Engine is not a good solution if the user is trying to improve the performance of deep learning networks. To cross-verify, I tried the TensorFlow SSD model mentioned in your forum, and the results are completely wrong after converting to DLC format and running on the Snapdragon 820.
If you don't care or don't want to support the end user, why are you promoting this in the first place? Worst customer support ever.
Hi gopinath,
I also hit the wrong-result issue: the TensorFlow model conversion never succeeds, and the converted Caffe model gives incorrect results at runtime. Could you please tell me if you have solved the problem?
Hi gopinath.r,
Caffe and SNPE use different input tensor layouts: Caffe is BCHW (batch, channel, height, width) while SNPE is BHWC. Did you account for this difference when testing your SNPE model?
For details, refer to the following:
https://developer.qualcomm.com/docs/snpe/image_input.html
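(Editor's note: the layout difference above can be handled with a single transpose. A minimal sketch, assuming a NumPy array holding the Caffe-style input blob; the 300x300 size is an assumption for MobileNet-SSD:)

```python
import numpy as np

# Caffe blobs are laid out as (batch, channel, height, width);
# SNPE expects (batch, height, width, channel). Transposing the
# axes converts between the two layouts without changing values.
bchw = np.random.rand(1, 3, 300, 300).astype(np.float32)  # Caffe layout
bhwc = np.transpose(bchw, (0, 2, 3, 1))                   # SNPE layout
print(bhwc.shape)  # (1, 300, 300, 3)
```

Feeding a BCHW buffer to a runtime that reads it as BHWC silently scrambles every pixel, which produces exactly the kind of low-confidence, "completely wrong" outputs reported earlier in this thread.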
Thanks,
jihoonk