Cannot convert TensorFlow pb model into DLC format
phongnhhn92
Join Date: 27 Oct 17
Posts: 8
Posted: Thu, 2017-11-09 18:45

Hello guys, I am trying to convert my trained MobileNet SSD model using snpe-tensorflow-to-dlc, but I received this error:

phong@storm:~/snpe-1.6.0$ snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_SSD.pb --input_dim image_tensor 300,300,3 --out_node detection_boxes,detection_scores,detection_classes,num_detections --dlc test2.dlc --allow_unconsumed_nodes --verbose
2017-11-09 11:38:31.302045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-11-09 11:38:31.302430: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1031] Found device 0 with properties: 
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7085
pciBusID: 0000:02:00.0
totalMemory: 7.92GiB freeMemory: 7.29GiB
2017-11-09 11:38:31.302456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0, compute capability: 6.1)
2017-11-09 11:38:31,947 - 126 - ERROR - Encountered Error: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.
Traceback (most recent call last):
  File "/home/phong/snpe-1.6.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 116, in main
    model = loader.load(args.graph, in_nodes, in_dims, args.in_type, args.out_node, session)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/loader.py", line 67, in load
    graph_def = self.__import_graph(graph_pb_or_meta_path, session, out_node_names)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/loader.py", line 128, in __import_graph
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 428, in import_graph_def
    node, 'Input tensor %r %s' % (input_name, te)))
ValueError: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.
I followed the instructions from the TensorFlow Object Detection API and successfully trained with my own dataset, then ran this command to optimize the model for inference:
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=frozen_inference_graph.pb \
--output=optimized_SSD.pb \
--frozen_graph=True \
--input_names=image_tensor \
--output_names=detection_boxes,detection_scores,detection_classes,num_detections
I don't know what's wrong with my model. Please help, guys!
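As a diagnostic, TensorFlow's summarize_graph tool (built with bazel the same way as optimize_for_inference above) prints a graph's declared inputs with their dtypes and shapes, plus its outputs, which helps confirm whether the optimized pb still expects a uint8 image_tensor. A sketch, assuming a bazel checkout of the TensorFlow sources:

```shell
# Build the inspection tool once (same bazel workflow as optimize_for_inference):
bazel build tensorflow/tools/graph_transforms:summarize_graph

# List the graph's inputs (with dtype/shape), outputs, and op counts:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
  --in_graph=optimized_SSD.pb
```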

dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Fri, 2017-11-10 04:08

As far as the error shows, you have an invalid graph. TensorFlow itself cannot load your graph.

2017-11-09 11:38:31,947 - 126 - ERROR - Encountered Error: graph_def is invalid at node u'ToFloat': Input tensor 'image_tensor:0' Cannot convert a tensor of type float32 to an input of type uint8.

I'd advise you to post the error above on the TensorFlow project's forum and get support there.

phongnhhn92
Join Date: 27 Oct 17
Posts: 8
Posted: Mon, 2017-11-13 18:04

Hello, I made a mistake by optimizing the trained pb model for a float input, which is why TensorFlow could not open my model. I fixed that and am now trying to convert my original model to DLC format. As you know, the output nodes of a Faster R-CNN model trained with the TensorFlow API are detection_boxes,detection_scores,detection_classes,num_detections, but when I passed those as the outputs of the conversion command, I received this error:

ERROR - Conversion failed: ERROR_TF_OUTPUT_NODE_NOT_WITHIN_GRAPH: Output node detection_boxes,detection_scores,detection_classes,num_detections not within graph.

This is my convert command:

snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_FasterRCNN2.pb --input_dim image_tensor 229,299,3 --out_node detection_boxes,detection_scores,detection_classes,num_detections --dlc test2.dlc --allow_unconsumed_nodes --verbose
 

The difference I see between my Faster R-CNN model and the Inception v3 model is that the output of Inception is a Softmax op, while the outputs of my Faster R-CNN model are 4 Identity ops. So I don't know what I should do to make snpe-tensorflow-to-dlc understand my outputs. Please help me!

dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Tue, 2017-11-14 07:41

You are using the --out_node option incorrectly. Instead of passing a comma-separated list of output node names, use multiple --out_node <node-name> arguments.
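For example, the command from the previous post would become (same paths and dimensions as the poster used):

```shell
snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_FasterRCNN2.pb \
  --input_dim image_tensor 229,299,3 \
  --out_node detection_boxes \
  --out_node detection_scores \
  --out_node detection_classes \
  --out_node num_detections \
  --dlc test2.dlc --allow_unconsumed_nodes --verbose
```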

Hope that helps,

phongnhhn92
Join Date: 27 Oct 17
Posts: 8
Posted: Tue, 2017-11-14 23:54

Hello, I changed the command as you suggested:

snpe-tensorflow-to-dlc --graph ~/tensorflow/MobileNetSSD.pb --input_dim image_tensor 229,299,3 --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --dlc test2.dlc

However, I still get an error like this:

Traceback (most recent call last):
  File "/home/phong/snpe-1.6.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 119, in main
    converter = DlcConverter(model, not args.allow_unconsumed_nodes)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/converter.py", line 234, in __init__
    self._ops = self._get_graph_operations(model)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/converter.py", line 432, in _get_graph_operations
    nodes = cls._filter_graph_nodes(model.graph_def.node, i.name, out_node_name)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/converter.py", line 478, in _filter_graph_nodes
    cls._create_sorted_node_input_graph(nodes_map, out_node, ordered_nodes_map)
  File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/converter.py", line 504, in _create_sorted_node_input_graph
....
File "/home/phong/snpe-1.6.0/lib/python/converters/tensorflow/converter.py", line 500, in _create_sorted_node_input_graph
    if input_node_name not in nodes_map or input_node_name in ordered_nodes_map:
RuntimeError: maximum recursion depth exceeded in cmp

Would you help me fix this?

dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Wed, 2017-11-15 05:27

This error has to do with the graph being very large. We have addressed this issue, and the fix should be available in a future release.

Unfortunately, until then your only option seems to be compressing the graph as much as possible to work around the issue with graphs this large.
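As a stopgap (a generic Python-level workaround, not an official SNPE fix): the RuntimeError above is CPython's own recursion limit tripping inside the converter's recursive depth-first node sort, so raising the limit near the top of the snpe-tensorflow-to-dlc script sometimes lets a deep graph through. A minimal sketch of the mechanism, with a stand-in function in place of the converter's traversal:

```python
import sys

def walk(depth_remaining):
    """Stand-in for the converter's recursive depth-first graph walk."""
    if depth_remaining == 0:
        return 0
    return 1 + walk(depth_remaining - 1)

# CPython's default limit is about 1000 frames, so a graph a few thousand
# nodes deep raises RecursionError; raising the cap avoids that.
sys.setrecursionlimit(20000)
depth = walk(5000)  # would exceed the default limit without the line above
```

Raising the limit only trades the error for deeper stack usage, so it is a workaround rather than a fix.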

phongnhhn92
Join Date: 27 Oct 17
Posts: 8
Posted: Wed, 2017-11-15 05:39

As I remember, the Inception v3 model used in the tutorial is larger than the model I am using, so I don't think model size is the problem here.

phongnhhn92
Join Date: 27 Oct 17
Posts: 8
Posted: Wed, 2017-11-22 17:52

I am now using the latest version, SNPE 1.8.0, and I no longer get the error I had above with version 1.6. I am running the same command with the same model:

phong@storm:~/Archives/snpe-1.8.0$ snpe-tensorflow-to-dlc --graph ~/tensorflow/optimized_SSD_graph.pb --input_dim image_tensor 300,300,3 --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --dlc test.dlc --verbose

However, I have this new error:

Traceback (most recent call last):
  File "/home/phong/Archives/snpe-1.8.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 120, in main
    converter.convert(args.dlc, args.model_version, converter_command)
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/converter.py", line 262, in convert
    self._convert_layers()
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/converter.py", line 299, in _convert_layers
    descriptors.extend(self._resolve_descriptors_from_scope(scope.name, scope.child_ops()))
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/converter.py", line 368, in _resolve_descriptors_from_scope
    candidate_descriptor = resolver.resolve_layer(scope_name, remaining_ops, self._graph_helper)
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/layers/concat.py", line 54, in resolve_layer
    axis = int(axis_tensor.outputs[0].eval()) - 1
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 596, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 4582, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
[[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[Node: FeatureExtractor/LogicalAnd/_2875 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_58_FeatureExtractor/LogicalAnd", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
 
Caused by op u'image_tensor', defined at:
  File "/home/phong/Archives/snpe-1.8.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 131, in <module>
    main()
  File "/home/phong/Archives/snpe-1.8.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 116, in main
    model = loader.load(args.graph, in_nodes, in_dims, args.in_type, args.out_node, session)
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/loader.py", line 67, in load
    graph_def = self.__import_graph(graph_pb_or_meta_path, session, out_node_names)
  File "/home/phong/Archives/snpe-1.8.0/lib/python/converters/tensorflow/loader.py", line 128, in __import_graph
    tf.import_graph_def(graph_def, name="")
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 334, in import_graph_def
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3073, in create_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1524, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
 
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
[[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[Node: FeatureExtractor/LogicalAnd/_2875 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_58_FeatureExtractor/LogicalAnd", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Did the new version change the way the command is used, or something else?
dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Thu, 2017-11-23 04:50

We are aware of this issue with the MobileNet SSD model. It seems to be graph-specific and will be fixed in a future release.

Unfortunately, at the moment SNPE has a few limitations which need to be addressed before we can support MobileNet SSD.

Rex
Join Date: 8 Aug 15
Posts: 45
Posted: Tue, 2018-01-16 11:59

Same error encountered with SNPE 1.10.1

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Tue, 2018-01-16 12:37

Hi Rex.  We are aware of this. SNPE 1.10.1 doesn't support Mobilenet SSD.

 

sam
Join Date: 11 Jan 18
Posts: 8
Posted: Wed, 2018-01-17 01:05

Hi, I'm facing the same issue as well.

ERROR - Encountered Error: You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
     [[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
 

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'image_tensor' with dtype uint8 and shape [?,?,?,3]
     [[Node: image_tensor = Placeholder[dtype=DT_UINT8, shape=[?,?,?,3], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

 

Is there a way to feed a placeholder value for image_tensor, or to change the input datatype (dtype) from uint8 to float32?
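For reference, the error only says that nothing was fed for the uint8 placeholder when the converter evaluated part of the graph. When reproducing this in plain TensorFlow, one would feed a dummy value matching the declared dtype and shape. A minimal sketch of constructing such a value (NumPy only; the feed itself is shown as a comment since it needs a live session):

```python
import numpy as np

# The placeholder is declared as dtype=uint8, shape=[?, ?, ?, 3]:
# any batch of RGB frames with that dtype satisfies it, e.g. one 300x300 image.
dummy_image = np.zeros((1, 300, 300, 3), dtype=np.uint8)

# In a TensorFlow session this would be fed as:
#   session.run(fetches, feed_dict={'image_tensor:0': dummy_image})
```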

harshiddh.mania
Join Date: 4 Jan 18
Posts: 4
Posted: Thu, 2018-02-15 15:15

The input node `image_tensor` accepts a uint8 4-D tensor of shape [batch, height, width, 3]. So the command below is working for me.

snpe-tensorflow-to-dlc --graph /tensorflow/ssd_mobilenet_v1.pb --input_dim image_tensor 512,512,512,3 --dlc ssd_mobilenet_v1.dlc --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --allow_unconsumed_nodes

However, I am getting another error. Please find the error logs below.

2018-02-15 14:46:59,537 - 126 - ERROR - Encountered Error: Retval[0] has already been set.
     [[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]
Traceback (most recent call last):
  File "/home/pragnesh/Android/snpe-1.10.1/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 120, in main
    converter.convert(args.dlc, args.model_version, converter_command)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 262, in convert
    self._convert_layers()
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 299, in _convert_layers
    descriptors.extend(self._resolve_descriptors_from_scope(scope.name, scope.child_ops()))
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 368, in _resolve_descriptors_from_scope
    candidate_descriptor = resolver.resolve_layer(scope_name, remaining_ops, self._graph_helper)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/layers/resize.py", line 47, in resolve_layer
    input_tensor_shape = graph_helper.get_op_output_shape(input_tensor)
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 161, in get_op_output_shape
    shapes = self._evaluate_tensor_output_shape([tensor])
  File "/home/pragnesh/Android/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 174, in _evaluate_tensor_output_shape
    outputs = self._session.run(fetches=tensors, feed_dict=input_tensors)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
    options, run_metadata)
  File "/home/pragnesh/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
InternalError: Retval[0] has already been set.
     [[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]

Any input on what might be causing this error?

Thanks in advance!

 

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Sun, 2018-02-25 19:05

I converted MobileNet SSD with snpe-1.12.0 using the following command, and I hit the same issue.

"./snpe-tensorflow-to-dlc --graph ../../../SSD_project/ssd_mobilenet_TF/models/frozen_inference_graph_face.pb -i inputs 300,300,300,3 --in_type image --dlc ../../../SSD_project/ssd_mobilenet_TF/models/mod.dlc --out_node detection_scores --allow_unconsumed_nodes"
 
 
"
 [[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]
Traceback (most recent call last):
  File "./snpe-tensorflow-to-dlc", line 120, in main
    converter.convert(args.dlc, args.model_version, converter_command)
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 262, in convert
    self._convert_layers()
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 299, in _convert_layers
    descriptors.extend(self._resolve_descriptors_from_scope(scope.name, scope.child_ops()))
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/converter.py", line 368, in _resolve_descriptors_from_scope
    candidate_descriptor = resolver.resolve_layer(scope_name, remaining_ops, self._graph_helper)
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/layers/resize.py", line 47, in resolve_layer
    input_tensor_shape = graph_helper.get_op_output_shape(input_tensor)
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 161, in get_op_output_shape
    shapes = self._evaluate_tensor_output_shape([tensor])
  File "/mnt/linux/snpe-1.10.1/lib/python/converters/tensorflow/util.py", line 174, in _evaluate_tensor_output_shape
    outputs = self._session.run(fetches=tensors, feed_dict=input_tensors)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1344, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
InternalError: Retval[0] has already been set.
[[Node: _retval_Preprocessor/map/while/ResizeImage/ExpandDims_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Preprocessor/map/while/ResizeImage/ExpandDims)]]
"
 

And I get this error too!

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Mon, 2018-02-26 00:38

I converted successfully with this command (snpe-1.12.0):

./snpe-tensorflow-to-dlc --graph ../../../SSD_project/ssd_mobilenet_TF/models/frozen_inference_graph_face.pb --input_dim image_tensor 300,300,3 --in_type image --dlc ../ssd.dlc --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --allow_unconsumed_nodes
 
But my DLC file size is 1 KB, and the problem is that none of the layers are consumed by the converter. If I remove the "--allow_unconsumed_nodes" option, it fails to convert and says "Some operations are not resolved to a layer"!!

The model I used was downloaded from https://github.com/yeephycho/tensorflow-face-detection
lcycoding
Join Date: 17 Jan 18
Posts: 5
Posted: Mon, 2018-02-26 00:44

hey guys,

I've converted the DLC model successfully with the following command.

snpe-tensorflow-to-dlc --graph your_graph_here.pb -i Preprocessor/sub 300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --dlc model_name.dlc --allow_unconsumed_nodes

Please try it yourself.

But you'll still encounter the Adreno GPU memory overflow issue.

For now, there doesn't seem to be any solid solution...

If anyone has the solution, please kindly let me know.

Hope this thread helped. Thanks

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Mon, 2018-02-26 01:28

Hi lcycoding,

Your MobileNet SSD model's input name is Preprocessor/sub? My computer crashed with your command!!

lcycoding
Join Date: 17 Jan 18
Posts: 5
Posted: Mon, 2018-02-26 02:09

Sorry, I didn't mention that my pb file was downloaded from TensorFlow's model zoo.

The network structure from your repo might be slightly different...

Rex
Join Date: 8 Aug 15
Posts: 45
Posted: Tue, 2018-02-27 08:32

@lcycoding

It is not successful if you have unconsumed nodes since it generates a tiny DLC file that is incomplete. Also, you would need to handle the unconsumed nodes.

Rex

lcycoding
Join Date: 17 Jan 18
Posts: 5
Posted: Tue, 2018-02-27 19:01

@Rex

Hi Rex,

Thanks for your prompt reply.

I noticed the issue that allow unconsumed nodes caused.

How can I get the boxes layer back?

Is there any suggestion?

Rex
Join Date: 8 Aug 15
Posts: 45
Posted: Thu, 2018-03-01 15:12

lcycoding,

I am not sure how to handle the unconsumed nodes. There is no documentation on the process or how to extend it.

Every day, I log in to see if someone from Qualcomm will address the obvious issue that the latest SNPE (1.12.0) does not actually support SSD.

We used the textbook SSD network from Google and it doesn't work so I am not sure what SSD is actually supported.

Rex

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Fri, 2018-03-02 04:42

The "MobilenetSSD" chapter under "Model Conversion" in the SDK user's guide provides instructions.  It indicates exactly which model SNPE supports (and how to get it), and also the converter command used to convert it.

When running this network on the GPU, you need to enable CPU Fallback, so that unsupported layers can fall back from the GPU to the CPU.

Please try that and report back.

Sorry for the delay in responding.

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Tue, 2018-03-06 21:33

Hi jesliger,

I converted successfully with your advice, and it runs successfully on the host with the "snpe-net-run" command.

But it failed to run on a Snapdragon 625, with this error message:

"[libprotobuf ERROR /home/host/build/arm-android-gcc4.9/ThirdParty/protobuf-2.6.1/src/protobuf_tp/src/google/protobuf/message_lite.cc:123] Can't parse message of type "dnn_serial2.Model" because it is missing required fields: (cannot determine missing fields for lite message)

error_code=300; error_message=Model parsing has failed.; error_component=Dl Container; line_no=308; thread_id=-210156236"

 

349074299
Join Date: 13 Sep 17
Posts: 8
Posted: Wed, 2018-03-07 01:57

I made a mistake: my SNPE lib was 1.10.0. After upgrading to SNPE 1.13.0, I can run my own MobileNet SSD on a Snapdragon 625 in CPU mode.

However, if I run in GPU mode, it tells me "depth offset must be multiply of 4".

I don't know how to avoid this error. My input size is 300x300.

