Does anyone know how to use the DSP and GPU runtimes together for faster inference? If we could use the DSP and GPU at the same time, inference would be faster. Does SNPE support this, or is there another way to do it?
thanks!
I have the same question, but as of the latest SNPE release, this feature is not supported.
Hi,
A single model cannot be run across two different runtimes with the partial inferences combined at the end. However, you can run multiple models (the same or different ones) in one application by using threads, assigning each model its own runtime.