Hello guys, I am trying to convert my trained MobileNet SSD model using snpe-tensorflow-to-dlc, but I received this error:
phong@storm:~/snpe-1.6.0$ snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_SSD.pb --input_dim image_tensor 300,300,3 --out_node detection_boxes,detection_scores,detection_classes,num_detections --dlc test2.dlc --allow_unconsumed_nodes --verbose
2017-11-09 11:38:31.302045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-11-09 11:38:31.302430: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1031] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:02:00.0
totalMemory: 7.92GiB freeMemory: 7.29GiB
2017-11-09 11:38:31.302456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0, compute capability: 6.1)
2017-11-09 11:38:31,947 - 126 - ERROR - Encountered Error: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.
Traceback (most recent call last):
  File "/home/phong/snpe-1.6.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 116, in main
    model = loader.load(args.graph, in_nodes, in_dims, args.in_type, args.out_node, session)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/loader.py", line 67, in load
    graph_def = self.__import_graph(graph_pb_or_meta_path, session, out_node_names)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/loader.py", line 128, in __import_graph
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 428, in import_graph_def
    node, 'Input tensor %r %s' % (input_name, te)))
ValueError: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.
I followed the instructions from the TensorFlow object detection API and successfully trained with my own dataset, then ran this command to optimize the model for inference:
bazel-bin/tensorflow/python/tools/optimize_for_inference \
  --input=frozen_inference_graph.pb \
  --output=optimized_SSD.pb \
  --frozen_graph=True \
  --input_names=image_tensor \
  --output_names=detection_boxes,detection_scores,detection_classes,num_detections
I don't know what's wrong with my model. Please help, guys!
You seem to have an invalid graph, as far as the error shows. TensorFlow itself cannot load your graph:
2017-11-09 11:38:31,947 - 126 - ERROR - Encountered Error: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.
I'd advise you to post the error above to the TensorFlow project's forum and get support there.
Hello, I made a mistake by optimizing the trained .pb model for a float input, which is why TensorFlow could not open my model. I fixed that and am now trying to convert my original model to DLC format. As you know, the output nodes of a FasterRCNN model trained with the TensorFlow API are detection_boxes, detection_scores, detection_classes, and num_detections, but when I passed those as the outputs of the conversion command, I received this error:
This is my convert command:
The difference I see between my FasterRCNN model and the Inception v3 model is that the output of Inception is a Softmax op, while the outputs of my Faster RCNN model are four Identity ops. So I don't know what I should do to make snpe-tensorflow-to-dlc understand my outputs. Please help me!
You are using the --out_node option incorrectly. Instead of passing a comma-separated list of output node names, use multiple --out_node <node-name> arguments.
Hope that helps,
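Applied to the command from the first post, the corrected invocation would look something like this (graph path, dimensions, and node names are taken from that post; adjust them to your own model):

```shell
snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_SSD.pb \
    --input_dim image_tensor 300,300,3 \
    --out_node detection_boxes \
    --out_node detection_scores \
    --out_node detection_classes \
    --out_node num_detections \
    --dlc test2.dlc \
    --allow_unconsumed_nodes
```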
Hello, I changed the command as you suggested:
However, I still get an error like this:
Could you help me fix this?
This error has to do with the graph being very large. We have addressed this issue, and the fix should be available in a future release.
Unfortunately, until then your only option seems to be compressing the graph as much as possible to work around the issue with graphs this large.
As I remember, the Inception v3 model used in the tutorial is larger than the model I am using, so I don't think model size is the problem here.
I am using the latest version, SNPE 1.8.0, and I no longer get the error I had with version 1.6. I am running the same command with the same model:
However, I have this new error:
We are aware of this issue with the Mobilenet SSD model. It seems to be graph specific and will be fixed in a future release.
Unfortunately, at the moment SNPE has a few limitations which need to be addressed before we can support Mobilenet SSD.
Same error encountered with SNPE 1.10.1
Hi Rex. We are aware of this. SNPE 1.10.1 doesn't support Mobilenet SSD.
Hi, I'm facing the same issue as well.
ERROR - Encountered Error: You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
[[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
[[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Is there a way to pass a placeholder for image_tensor or change the input datatype( dtype) from uint8 to float32 ?
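One way around the dtype mismatch, without editing the graph, is to convert the image back to uint8 on the host before feeding it. A minimal sketch using NumPy (the function name and the assumed value ranges are illustrative, not part of SNPE or the TF Object Detection API):

```python
import numpy as np

def to_uint8_batch(img):
    """Convert a float image to the uint8 NHWC batch that image_tensor expects."""
    img = np.asarray(img, dtype=np.float32)
    if img.max() <= 1.0:              # assume a [0, 1]-ranged image
        img = img * 255.0             # rescale to [0, 255]
    img = np.clip(img, 0, 255).astype(np.uint8)
    return img[np.newaxis, ...]       # add batch dimension -> [1, H, W, 3]

batch = to_uint8_batch(np.random.rand(300, 300, 3))
print(batch.shape, batch.dtype)       # (1, 300, 300, 3) uint8
```

This is the same conversion you would need before feeding `image_tensor` in a plain TensorFlow session, since the placeholder is declared with dtype uint8 and shape [?,?,?,3].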
The input node `image_tensor` accepts a uint8 4-D tensor of shape [batch, height, width, 3]. So the command below works for me.
snpe-tensorflow-to-dlc --graph /tensorflow/ssd_mobilenet_v1.pb --input_dim image_tensor 512,512,512,3 --dlc ssd_mobilenet_v1.dlc --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --allow_unconsumed_nodes
However, I am getting another error. Please find the error log below.
2018-02-15 14:46:59,537 - 126 - ERROR - Encountered Error: Retval[0] has already been set.
[[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]
Traceback (most recent call last):
  File "/home/pragnesh/Android/snpe-1.10.1/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 120, in main
    converter.convert(args.dlc, args.model_version, converter_command)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 262, in convert
    self._convert_layers()
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 299, in _convert_layers
    descriptors.extend(self._resolve_descriptors_from_scope(scope.name, scope.child_ops()))
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 368, in _resolve_descriptors_from_scope
    candidate_descriptor = resolver.resolve_layer(scope_name, remaining_ops, self._graph_helper)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/layers/resize.py", line 47, in resolve_layer
    input_tensor_shape = graph_helper.get_op_output_shape(input_tensor)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 161, in get_op_output_shape
    shapes = self._evaluate_tensor_output_shape([tensor])
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 174, in _evaluate_tensor_output_shape
    outputs = self._session.run(fetches=tensors, feed_dict=input_tensors)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
    options, run_metadata)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
InternalError: Retval[0] has already been set.
  [[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]
Any input on what might be causing these errors?
Thanks in advance!
I converted MobilenetSSD with snpe-1.12.0 using the following command, and I hit the same issue.
I get this error too!
I converted successfully with this command (snpe-1.12.0):
hey guys,
I've converted the DLC model successfully with the following script.
Please try it yourself.
But you'll still encounter the Adreno GPU memory overflow issue.
For now, there doesn't seem to be any solid solution...
If anyone has the solution, please kindly let me know.
Hope this thread helped. Thanks
Hi, lcycoding.
Is your MobilenetSSD model's input name Preprocessor/sub? My computer crashed with your script!
Sorry, I didn't mention that my .pb file was downloaded from TensorFlow's model zoo.
The network structure from your repo might be slightly different...
@lcycoding
It is not successful if you have unconsumed nodes since it generates a tiny DLC file that is incomplete. Also, you would need to handle the unconsumed nodes.
Rex
@Rex
Hi Rex,
Thanks for your prompt reply.
I noticed the issue that allowing unconsumed nodes causes.
How can I get the boxes layer back?
Is there any suggestion?
lcycoding,
I am not sure how to handle the unconsumed nodes. There is no documentation on the process or how to extend it.
Every day, I log in to see if someone from Qualcomm will address the obvious issue that the latest SNPE (1.12.0) does not actually support SSD.
We used the textbook SSD network from Google and it doesn't work so I am not sure what SSD is actually supported.
Rex
The "MobilenetSSD" chapter under "Model Conversion" in the SDK user's guide provides instructions. It indicates exactly which model SNPE supports (and how to get it), and also the converter command used to convert it.
When running this network on the GPU, you need to enable CPU fallback so that the unsupported layers can fall back from the GPU to the CPU.
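In the C++ API, CPU fallback is a builder option set when the network is created. A rough sketch, assuming the `zdl::SNPE::SNPEBuilder` interface from the 1.x SDK headers (method names and signatures may differ between releases, so check the user's guide for your version):

```cpp
#include <memory>
#include "SNPE/SNPE.hpp"
#include "SNPE/SNPEBuilder.hpp"
#include "DlSystem/DlEnums.hpp"

// container is a loaded DLC (zdl::DlContainer::IDlContainer); loading omitted here.
std::unique_ptr<zdl::SNPE::SNPE> buildWithFallback(
        zdl::DlContainer::IDlContainer* container) {
    zdl::SNPE::SNPEBuilder builder(container);
    return builder
        .setRuntimeProcessor(zdl::DlSystem::Runtime_t::GPU)  // prefer the GPU
        .setCPUFallbackMode(true)   // unsupported layers run on the CPU instead
        .build();
}
```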
Please try that and report back.
Sorry for the delay in responding.
Hi jesliger,
I converted successfully with your advice, and it runs successfully on the host with the command "snpe-net-run".
But it failed on a Snapdragon 625, with this error message:
"[libprotobuf ERROR /home/host/build/arm-android-gcc4.9/ThirdParty/protobuf-2.6.1/src/protobuf_tp/src/google/protobuf/message_lite.cc:123] Can't parse message of type "dnn_serial2.Model" because it is missing required fields: (cannot determine missing fields for lite message)
error_code=300; error_message=Model parsing has failed.; error_component=Dl Container; line_no=308; thread_id=-210156236"
I made a mistake: my SNPE lib was SNPE 1.10.0. After upgrading to SNPE 1.13.0, I can run my own MobilenetSSD on the Snapdragon 625 in CPU mode.
However, if I run in GPU mode, it tells me "depth offset must be multiply of 4".
I don't know how to avoid this error. My input size is 300x300.