Hi, community.
I'm trying to port an NN model following the repo https://github.com/quic/qidk/tree/master/Solutions/VisionSolution3-Image...
My model takes two inputs; say their names are y0 and y1.
Running the quantized .dlc model with snpe_bench successfully returns the timing of the model run.
When I port the .dlc file into the app from the repo above, the GPU-targeted run succeeds. Below are the lines printed out for debugging; you can see that nNetwork.getInputTensorsNames() returns y0 and y1.
# code
inputLayerNames = nNetwork.getInputTensorsNames();
outputLayerNames = nNetwork.getOutputTensorsNames();
# logs (GPU run)
Network built successfully.
inputLayerNames[y0, y1]
outputLayerNames[output]
input shape{y1=[I@def9e1d, y0=[I@8a15892}
The DSP-targeted run, with the same code, prints:

# logs (DSP run)
Network built successfully.
inputLayerNames[input]
outputLayerNames[output]
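For reference, the only code difference between the two runs is the runtime handed to the builder. Roughly like this, assuming the SNPE Java API behaves as documented (SNPE.NeuralNetworkBuilder, setRuntimeOrder); buildFor, app, and modelFile are placeholder names of mine, and exception handling is trimmed:

# code
import android.app.Application;
import android.util.Log;
import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;
import java.io.File;
import java.io.IOException;

// Build the network from the same quantized DLC; only the runtime order differs.
NeuralNetwork buildFor(Application app, File modelFile, NeuralNetwork.Runtime runtime) throws IOException {
    final NeuralNetwork network = new SNPE.NeuralNetworkBuilder(app)
            .setModel(modelFile)          // model_quantized.dlc
            .setRuntimeOrder(runtime)     // Runtime.GPU succeeds; Runtime.DSP renames the inputs
            .build();
    // Same query as in the snippet above: GPU prints [y0, y1], DSP prints [input].
    Log.d("SNPE", "inputLayerNames " + network.getInputTensorsNames());
    return network;
}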
Of course the model can't run in that case, since the 'y0' and 'y1' inputs are unreachable, and the app shuts down immediately.
Is there any possible reason the DSP-targeted run ends up with this problem?
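In the meantime, a guard like the following (my own sketch, not code from the repo) at least surfaces the mismatch instead of letting the app die on a missing tensor:

# code
// (imports as in the sketch above, plus java.util.Set)
// Verify the runtime kept the expected input names before trying to feed them.
final Set<String> inputNames = nNetwork.getInputTensorsNames();
if (!inputNames.contains("y0") || !inputNames.contains("y1")) {
    Log.e("SNPE", "Unexpected input names: " + inputNames);
    nNetwork.release();   // free the network instead of crashing later in execute()
    return;
}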
I'm attaching the output of snpe-dlc-info -i model_quantized.dlc below, in case you need it.
Note: The supported runtimes column assumes a processor target of Snapdragon 855
Key : A:AIP  D:DSP  G:GPU  C:CPU
---------------------------------------------------------------------------------------------------------------------------------------
| Input Name | Dimensions  | Type   | Encoding Info                                                                                    |
---------------------------------------------------------------------------------------------------------------------------------------
| y1         | 1,224,224,3 | uFxp_8 | bitwidth 8, min 0.000000000000, max 0.997449457645, scale 0.003911566455, offset 0.000000000000 |
| y0         | 1,224,224,3 | uFxp_8 | bitwidth 8, min 0.000000000000, max 0.974563419819, scale 0.003821817227, offset 0.000000000000 |
---------------------------------------------------------------------------------------------------------------------------------------
Total parameters: 4840945 (18 MB assuming single precision float)
Total MACs per inference: 19200M (100%)
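And for completeness, this is roughly how the two 1,224,224,3 inputs get fed on the GPU path; a sketch assuming getInputTensorsShapes(), createFloatTensor(), FloatTensor.write(), and execute() from the SNPE Java API, where y0Data and y1Data are my placeholder preprocessed float buffers:

# code
// (imports as in the earlier sketch, plus java.util.HashMap, java.util.Map,
// and com.qualcomm.qti.snpe.FloatTensor)
final Map<String, int[]> shapes = nNetwork.getInputTensorsShapes();

// One tensor per input, shaped 1,224,224,3 as reported by snpe-dlc-info.
final FloatTensor y0Tensor = nNetwork.createFloatTensor(shapes.get("y0"));
final FloatTensor y1Tensor = nNetwork.createFloatTensor(shapes.get("y1"));
y0Tensor.write(y0Data, 0, y0Data.length);
y1Tensor.write(y1Data, 0, y1Data.length);

final Map<String, FloatTensor> inputs = new HashMap<>();
inputs.put("y0", y0Tensor);
inputs.put("y1", y1Tensor);

// On GPU this runs; on DSP it fails because "y0"/"y1" no longer exist.
final Map<String, FloatTensor> outputs = nNetwork.execute(inputs);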