Neural processing performance on GPU with MobileNetSSd model
shabbir.limdiwala
Join Date: 10 Sep 18
Posts: 1
Posted: Thu, 2018-10-25 11:36

Hi,

I am running a MobileNetSSD object detection model on the GPU on the Snapdragon 820 platform. My application is based on the "examples/NativeCpp/SampleCode/" sample application provided with the SDK. I have converted the COCO-trained model into a DLC file and am running the network on the GPU, with the CPU fallback option enabled as suggested in the docs provided with the SDK.
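For reference, this is roughly how the network is built in my app (a minimal sketch following the NativeCpp sample; the header paths and SNPEBuilder method names are taken from the SDK docs and may differ slightly between SDK versions):

#include <iostream>
#include <memory>
#include <string>

#include "DlContainer/IDlContainer.hpp"
#include "DlSystem/DlEnums.hpp"
#include "DlSystem/String.hpp"
#include "SNPE/SNPE.hpp"
#include "SNPE/SNPEBuilder.hpp"
#include "SNPE/SNPEFactory.hpp"

// Build the network from a DLC file, preferring the GPU runtime and
// letting unsupported layers fall back to the CPU.
std::unique_ptr<zdl::SNPE::SNPE> buildSnpe(const std::string &dlcPath)
{
    zdl::DlSystem::Runtime_t runtime = zdl::DlSystem::Runtime_t::GPU;
    if (!zdl::SNPE::SNPEFactory::isRuntimeAvailable(runtime)) {
        runtime = zdl::DlSystem::Runtime_t::CPU;
    }

    // Load the DLC container produced by the model converter
    auto container = zdl::DlContainer::IDlContainer::open(
        zdl::DlSystem::String(dlcPath.c_str()));
    if (!container) {
        std::cerr << "Failed to open " << dlcPath << std::endl;
        return nullptr;
    }

    // GPU runtime with CPU fallback enabled, as suggested in the docs
    zdl::SNPE::SNPEBuilder builder(container.get());
    return builder.setRuntimeProcessor(runtime)
                  .setCPUFallbackMode(true)
                  .build();
}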

Network execution takes around 90 ms per frame on the GPU, which is a little high for the use case I want to achieve. Is there any way to improve GPU performance for the MobileNetSSD model? Can I create multiple SNPE instances and have multiple threads running SNPE execution simultaneously? Would that help improve performance?

Any help or pointers would be appreciated.

Thanks,

Shabbir

