I tried example provided by Qualcomm here:
https://github.com/globaledgesoft/deeplabv3-application-using-neural-pro...
It takes 200 ms to process one frame on the DSP. I am using the TensorFlow DeepLab MobileNetV2 model for image segmentation.
I have already optimised my model for inference, and I also tried a quantized model, but the results are the same. How can I make it faster?