Converted a sample MobileNet SSD model to a dlc file following the tutorial at https://developer.qualcomm.com/docs/snpe/convert_mobilenetssd.html. Three output nodes (detection_classes, detection_boxes, and detection_scores) are specified in the conversion command line as follows:
snpe-tensorflow-to-dlc --graph ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb -i Preprocessor/sub 300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --dlc mobilenet_ssd.dlc --allow_unconsumed_nodes
After executing snpe-net-run with the converted dlc file, the output contains only one raw file (detection_classes:0.raw).
snpe-net-run --container mobilenet_ssd.dlc --input_list data/mobilenet_file_list.txt
Should we expect two more output files for the other output nodes (detection_boxes and detection_scores)?
Thanks a lot.
https://developer.qualcomm.com/docs/snpe/tools.html#tools_snpe-net-run
Thanks,
Jihoon
Hi Jihoon,
Thanks a lot for your reply. Specifying the layer names in the input list works when using the MobileNet SSD model in ssd_mobilenet_v1_coco_2017_11_17.tar.gz as described in the tutorial.
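For anyone else hitting this, here is a minimal sketch of the input list that worked for me. It assumes the SNPE convention (per the snpe-net-run tools page linked above) that a first line beginning with `#` names the output layers to save; the layer names below are taken from the conversion command earlier in this thread, and the exact names in your converted graph may differ (e.g. they can carry a `:0` suffix):

```
#detection_classes detection_boxes detection_scores
data/cropped/image_0.raw
data/cropped/image_1.raw
```

With an input list like this, running snpe-net-run --container mobilenet_ssd.dlc --input_list data/mobilenet_file_list.txt should write one raw file per requested output for each input.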
However, when using a newer MobileNet v2 based model, the conversion tool snpe-tensorflow-to-dlc hangs partway through the conversion and produces no output. Can you advise what the reason could be? Is the latest MobileNet v2 supported by the tool?
Thanks,