Hi,
We are using SNPE SDK in-house to run object detection inference using our coco-based models.
We noticed that while DSP inference time is definitely lower than GPU, the DSP for some reason gives inconsistent results. For example, there were several instances where, with the exact same test image, the GPU returned 3 inference results but the DSP returned only 1: exact same model, exact same scene, zero changes, just switching between the runtimes at runtime.
We're not sure what the cause of this is. Is this expected behavior, or are we doing something incorrectly?
Your help with this matter would be greatly appreciated.
Thanks,
Ray
I guess the answer is "yes, this is expected".
The GPU normally uses 32-bit float, which is 4 bytes (except in FLOAT_16 or HYBRID mode),
but the DSP quantizes data to uint8, so its accuracy is lower than the GPU's.
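To see why uint8 quantization can shift scores enough to change which detections pass a threshold, here is a small sketch. The scale/offset scheme below is a hypothetical min-max quantizer written for illustration, not SNPE's exact algorithm, and the score values are made up:

```python
import numpy as np

# Simulate the precision loss of an 8-bit (uint8) quantization round trip.
# This is a generic min-max affine quantizer for illustration only;
# SNPE's actual quantization details may differ.
def quantize_dequantize(x, num_levels=256):
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (num_levels - 1)
    q = np.round((x - lo) / scale).astype(np.uint8)   # quantize to uint8
    return q.astype(np.float32) * scale + lo           # dequantize back

# Hypothetical confidence scores from a detection head
scores = np.array([0.91, 0.523, 0.4987, 0.1234], dtype=np.float32)
restored = quantize_dequantize(scores)
max_err = float(np.abs(scores - restored).max())
print(max_err)  # small but nonzero; such errors accumulate across layers
```

A per-tensor error of up to half a quantization step may look negligible, but accumulated over many layers it can push a borderline detection just under the confidence threshold on the DSP while it stays above it on the GPU.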
Maybe, if you want 3 objects in the output, you can lower the confidence threshold (I guess, but I'm not sure).
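The threshold idea can be sketched like this. The detections and score values here are hypothetical; in practice you would filter the output of your own post-processing step:

```python
# Hypothetical detections: (label, confidence score) pairs as they might
# come out of post-processing. On the DSP, quantization can nudge scores
# slightly lower, so a tight threshold may drop borderline detections.
detections = [("person", 0.62), ("dog", 0.51), ("bicycle", 0.48)]

def filter_by_score(dets, threshold):
    """Keep only detections whose score meets the threshold."""
    return [d for d in dets if d[1] >= threshold]

print(len(filter_by_score(detections, 0.5)))  # 2 detections survive
print(len(filter_by_score(detections, 0.4)))  # all 3 survive
```

Lowering the threshold recovers the borderline detections, at the cost of possibly letting in more false positives, so it's worth validating the new threshold against your test set.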