I'm currently working with the HDK 8350. After quantizing identical images, I observed that the model's runtime was 2-3 ms slower when using SNPE 2.5. I also tried uint8 with other SNPE versions, but the timing issue persisted. In addition, when using snpe-net-run I noticed a big difference in later SNPE versions between avg_total_inference_time and avg_forward_propagate_time. Do you know if this is a recognized problem?
Dear developer,
Could you clarify how you measured the ~2-3 ms gap you mentioned on SNPE 2.5?
You can specify --profiling_level moderate to check the accelerator and NetRun times.
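For reference, a typical invocation with moderate profiling might look like the sketch below. The model and input-list filenames (yolox_s_quant.dlc, input_list.txt) are placeholders, not files from this thread:

```shell
# Hypothetical example: run a quantized model on the DSP with moderate profiling.
# yolox_s_quant.dlc and input_list.txt are placeholder names for your own files.
snpe-net-run --container yolox_s_quant.dlc \
             --input_list input_list.txt \
             --use_dsp \
             --profiling_level moderate

# Then inspect the generated diagnostic log on the host to compare
# per-stage timings (forward propagate vs. total inference):
snpe-diagview --input_log output/SNPE_Diag.log
```

This should let you see where the extra 2-3 ms is being spent (setup, data transfer, or execution on the accelerator).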
BR.
Wei
Dear Weihuan,
I used snpe-net-run and got those measurements. With SNPE 2.5, the same network shows a 2 ms difference between avg_total_inference_time and the NetRun time, unlike with SNPE 1.5.
Dear customer,
Could you please share the model you used via GitHub so that I can run it on the device and analyze the cause of this problem?
BR.
Yunxiang
Dear Yunxiang
I used YOLOX-S (just the ONNX version converted to DLC):
https://github.com/Megvii-BaseDetection/YOLOX
Br,
Dan
Dear customer,
I have run this model on the 8350 using SNPE 2.5 and got an avg_total_inference_time of 8815 us. Is this similar to your data?
Did you say the data with the big gap was measured with SNPE 1.50? We don't have SNPE 1.5. Looking forward to your reply.
BR.
Yunxiang
Dear Yunxiang,
I got different results: around 33 ms using snpe-net-run.
These are the commands we used to convert, quantize, and run the YOLOX-S model, along with the results from snpe-net-run:
Commands:
BR,Dan
Dear customer,
You can use 'snpe-net-run --perf_profile burst' to speed up the running time.
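A minimal sketch of such a run is below; the model and input-list filenames are placeholders, and --use_dsp assumes you are targeting the DSP as in typical 8350 benchmarking:

```shell
# Hypothetical example: run with the burst performance profile, which keeps
# the clocks pinned high for the duration of the run instead of letting the
# governor scale them down between inferences.
snpe-net-run --container yolox_s_quant.dlc \
             --input_list input_list.txt \
             --use_dsp \
             --perf_profile burst
```

With the default (balanced) profile, clock scaling between inferences can inflate the average inference time, so burst is usually the fairer setting when comparing SNPE versions.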
Br.
Yunxiang