Hi,
I'm trying to run batched inference with MobileNet SSD on Android,
but I get an error when I build the network.
Model source:
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
Model conversion command:
snpe-tensorflow-to-dlc --graph ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb -i Preprocessor/sub 2,300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --dlc mobilenet_ssd.dlc --allow_unconsumed_nodes
Builder code on Android:
final SNPE.NeuralNetworkBuilder builder = new SNPE.NeuralNetworkBuilder(mActivity.getApplication())
        .setDebugEnabled(false)
        .setOutputLayers("Postprocessor/BatchMultiClassNonMaxSuppression")
        .setRuntimeOrder(NeuralNetwork.Runtime.GPU)
        .setCpuFallbackEnabled(true);
network = builder.build();
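As a quick sanity check, the element count implied by the converter's input shape must match the buffer size you later write into the input tensor. A minimal plain-Java sketch (no SNPE calls; the shape values simply mirror the -i arguments above):

```java
// Compute how many float elements a tensor of the given NHWC shape holds.
public class TensorSize {
    static long elementCount(int... dims) {
        long n = 1;
        for (int d : dims) n *= d;
        return n;
    }

    public static void main(String[] args) {
        // Batch 2, as passed to snpe-tensorflow-to-dlc: -i Preprocessor/sub 2,300,300,3
        System.out.println(elementCount(2, 300, 300, 3)); // 540000
        // Batch 1 variant:
        System.out.println(elementCount(1, 300, 300, 3)); // 270000
    }
}
```

A batch-2 model therefore expects exactly twice the data of the batch-1 model; a mismatch between the DLC's expected shape and the runtime's tensor is consistent with the "Tensor dimension mismatch" error above.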
snpe sdk version: 1.17.0
Error message:
E/LoadNetworkTask: Unable to create network! Cause: error_code=811; error_message=GPU tensor dimensions are invalid. Tensor dimension mismatch; error_component=GPU Runtime; line_no=103; thread_id=-399001312
java.lang.IllegalStateException: Unable to create network! Cause: error_code=811; error_message=GPU tensor dimensions are invalid. Tensor dimension mismatch; error_component=GPU Runtime; line_no=103; thread_id=-399001312
at com.qualcomm.qti.snpe.internal.NativeNetwork.nativeInitFromFile(Native Method)
Things I have tried:
1. Runtime.CPU works, but GPU fails.
2. Two different phones (ZenFone 5Z with Snapdragon 845, ZenFone 3 with Snapdragon 625) show the same behavior.
3. The command "snpe-tensorflow-to-dlc --graph ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb -i Preprocessor/sub 1,300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --dlc mobilenet_ssd.dlc --allow_unconsumed_nodes" works fine!
How can I fix this?
Thanks
Would converting with -i Preprocessor/sub 1,300,300,3 (changing the 2 to 1) make it work, with both single and multiple images?
Hi Enrico Ros,
I tried converting the model with 1,300,300,3: the conversion is OK and the builder now builds the network fine.
But I cannot feed multiple images: mNeuralNetwork.getInputTensorsShapes().get("Preprocessor/sub:0") is {1,300,300,3}.
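Since the DLC's input shape is now fixed at {1,300,300,3}, one workaround is to slice a multi-image buffer into single-image chunks and run the network once per image. The slicing itself is plain Java (a sketch only; the per-image size is an assumption matching the converter arguments, and the actual SNPE tensor-write and execute calls are omitted):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {
    // H * W * C per image, matching -i Preprocessor/sub 1,300,300,3
    static final int IMAGE_ELEMENTS = 300 * 300 * 3;

    // Split a concatenated NHWC batch buffer into one float[] per image.
    static List<float[]> split(float[] batch) {
        List<float[]> images = new ArrayList<>();
        for (int off = 0; off + IMAGE_ELEMENTS <= batch.length; off += IMAGE_ELEMENTS) {
            float[] img = new float[IMAGE_ELEMENTS];
            System.arraycopy(batch, off, img, 0, IMAGE_ELEMENTS);
            images.add(img);
            // Each img would then be written into the {1,300,300,3} input
            // tensor and the network executed once per image.
        }
        return images;
    }
}
```

This trades throughput for compatibility: each image is a separate inference call, so there is no GPU-side batching benefit.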
Hi Enrico,
Does user buffer mode support batch input now?
I saw that batch input is supported with ITensor input, but user buffer mode is not mentioned in the documentation.
Any comments are appreciated, thanks.