Hi
I am measuring the model's inference time with snpe_bench.py.
I compared the inference time of Inception v3 on snpe-2.7.0.4264 against snpe-1.63.0.3523 and found that the DSP and AIP runtimes have become much slower in snpe-2.7.0.4264.
Why is snpe-2.7.0.4264 slower?
Device: RB5
python snpe_bench.py -c inception_v3.json -a -t ubuntu64_gcc75
The config file is below
{
    "Name": "inceptionV3",
    "HostRootPath": "inception_v3",
    "HostResultsDir": "inception_v3/results",
    "DevicePath": "/tmp/data/snpe_sample",
    "Devices": ["<Device id>"],
    "HostName": "localhost",
    "Runs": 5,
    "Model": {
        "Name": "Inception_v3",
        "Dlc": "../models/inception_v3/dlc/inception_v3_quantized.dlc",
        "InputList": "../models/inception_v3/data/target_raw_list.txt",
        "Data": ["../models/inception_v3/data/cropped"]
    },
    "Runtimes": ["GPU", "CPU", "DSP", "AIP"],
    "Measurements": ["timing"],
    "ProfilingLevel": "detailed"
Dear developer,
Could you please test your model with the standalone SNPE SDK tools instead of through the benchmark script?
We also don't know which conversion and execution commands you used.
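For reference, a standalone on-target run could look roughly like the sketch below. The file names are placeholders taken from the benchmark config above; the exact device paths and any extra flags depend on your setup and SDK version.

```shell
# Assumes the quantized DLC and the raw inputs listed in the input list
# have already been pushed to the device, and the SNPE runtime libraries
# are on LD_LIBRARY_PATH / ADSP_LIBRARY_PATH.
snpe-net-run \
    --container inception_v3_quantized.dlc \
    --input_list target_raw_list.txt \
    --use_dsp
```

Comparing the per-layer timings from this standalone run (e.g. by inspecting the generated SNPEDiag log with snpe-diagview) between the two SDK versions would help isolate whether the slowdown comes from the runtime itself or from the benchmark harness.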
BR.
Wei