Forums - Error when running my converted tensorflow deeplabv3&mobilenetv2 dlc model

fangquan_hu
Join Date: 30 Oct 19
Posts: 2
Posted: Wed, 2019-11-13 00:18

My tensorflow model was from the tensorflow/models repository: https://github.com/tensorflow/models/blob/6f1e3b38d80f131e21f0721196df7cfc5ced2b74/research/deeplab/g3doc/model_zoo.md

I successfully converted the model using the command below:

cmd = ['snpe-tensorflow-to-dlc',
           '--input_path', os.path.join(tensorflow_dir, pb_filename),
           '--input_dim', 'ImageTensor', '1,1025,2049,3',
           '--out_node', 'Cast',
           '--output_path', os.path.join(dlc_dir, MODEL_DLC_FILENAME),
           '--allow_unconsumed_nodes']

The conversion log is shown below:

INFO: Converting model_pretrained_citys_1025.pb to SNPE DLC format
2019-11-13 02:53:06,764 - 175 - WARNING - Option: '--dlc' is DEPRECATED and will be removed in upcoming release. Please use '--output_path', '-o'
2019-11-13 02:53:06,764 - 175 - WARNING - Option: '--graph' is DEPRECATED and will be removed in upcoming release. Please use '--input_path', '-i'
2019-11-13 02:53:06.766102: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-13 02:53:08,179 - 411 - WARNING - ERROR_TF_FALLBACK_TO_ONDEMAND_EVALUATION: Unable to resolve operation output shapes in single pass. Using on-demand evaluation!
2019-11-13 02:53:08,181 - 170 - INFO - INFO_ALL_BUILDING_NETWORK: 
==============================================================
Building Network
==============================================================
2019-11-13 02:53:08,182 - 170 - INFO - INFO_TF_BUILDING_INPUT_LAYER: Building layer (INPUT) with node: ImageTensor, shape [1, 1025, 2049, 3]
2019-11-13 02:53:12,172 - 170 - INFO - INFO_DLC_SAVE_LOCATION: Saving model at /root/snpe-1.31.0.522/models/mnv2_dlv3/dlc/model_pretrained_citys_1025.dlc
2019-11-13 02:53:12,176 - 170 - INFO - INFO_CONVERSION_SUCCESS: Conversion completed successfully
INFO: Setup deeplab v3 completed.


Then, when I ran the command `$SNPE_ROOT/examples/NativeCpp/SampleCode/obj/local/x86_64-linux-clang/snpe-sample -b ITENSOR -d ../dlc/model_pretrained_citys_1025.dlc -i target_raw_list.txt -o output` to test the converted model, I got this error message:

Error while building SNPE object.

This means the `setBuilderOptions(container, runtime, runtimeList, udlBundle, useUserSuppliedBuffers, platformConfig, usingInitCaching);` step did not succeed.

I also tried running `snpe-net-run --container ../dlc/model_pretrained_citys_1025.dlc --input_list target_raw_list.txt`, which failed with this error message:

error_code=403; error_message=No output is defined. Network has empty outputs; error_component=Dl Network; line_no=302; thread_id=139974530858880

I have tested the InceptionV3 tutorial with the same pipeline, and it works fine. I'm wondering why the program cannot find my output node, since I specified its name when converting the model.

I hope somebody can help me!

gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-11-19 01:02

Hi,
I tried converting the model from the link you mentioned. The flag `--allow_unconsumed_nodes` lets you convert the model to DLC while excluding the unsupported layers.

We tried to convert the given model using the command below:

$ snpe-tensorflow-to-dlc   --graph  ./frozen_inference_graph.pb --input_dim ImageTensor 1,1025,2049,3 --out_node 'Cast'  --dlc  out2.dlc

We faced the following error:

2019-11-19 12:35:28.434719: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
W1119 12:35:28.459101 140580433254144 converter.py:372] WARNING_UNSUPPORTED_OPS_FOUND: Some Operations are not supported by converter. Printing the list of operations:
2019-11-19 12:35:28,459 - 372 - WARNING - WARNING_UNSUPPORTED_OPS_FOUND: Some Operations are not supported by converter. Printing the list of operations:
W1119 12:35:28.459275 140580433254144 converter.py:374] WARNING_TF_OP_NOT_SUPPORTED: Operation (Cast) of type (Cast) is not supported by converter.
2019-11-19 12:35:28,459 - 374 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Cast) of type (Cast) is not supported by converter.
W1119 12:35:28.459358 140580433254144 converter.py:382] WARNING_UNCONSUMED_LAYERS: The TF Graph has been disconnected due to some unsupported operations. Printing the list of disconnected layers:
2019-11-19 12:35:28,459 - 382 - WARNING - WARNING_UNCONSUMED_LAYERS: The TF Graph has been disconnected due to some unsupported operations. Printing the list of disconnected layers:
W1119 12:35:28.459429 140580433254144 converter.py:385] WARNING_TF_LAYER_NOT_CONSUMED: Layer (Squeeze) of type (Reshape) is not consumed by converter.
2019-11-19 12:35:28,459 - 385 - WARNING - WARNING_TF_LAYER_NOT_CONSUMED: Layer (Squeeze) of type (Reshape) is not consumed by converter.
E1119 12:35:28.459500 140580433254144 snpe-tensorflow-to-dlc:106] Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!
2019-11-19 12:35:28,459 - 106 - ERROR - Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!

The error occurs because the model contains layers that SNPE does not support. You can find the list of SNPE-supported layers at the link below:
https://developer.qualcomm.com/docs/snpe/network_layers.html
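If you want to check up front which operations will block conversion, you can diff the op types present in the frozen graph against the supported-layer list from that page. Below is a minimal plain-Python sketch of that check; both lists are illustrative placeholders, assuming you have already dumped the graph's op types (for example by iterating over `graph_def.node` in TensorFlow) and transcribed the supported op names from the documentation:

```python
def find_unsupported_ops(graph_op_types, supported_op_types):
    """Return the sorted set of op types present in the graph but
    absent from the converter's supported set."""
    return sorted(set(graph_op_types) - set(supported_op_types))

# Hypothetical op types dumped from a frozen graph (illustrative only):
graph_ops = ["Placeholder", "Conv2D", "Relu6", "ResizeBilinear", "Cast", "Squeeze"]
# Hypothetical subset of SNPE-supported op types (see the link above for the real list):
supported = ["Placeholder", "Conv2D", "Relu6", "ResizeBilinear", "Squeeze"]

print(find_unsupported_ops(graph_ops, supported))  # -> ['Cast']
```

Here the flagged op is the `Cast` you picked as `--out_node`; since `--allow_unconsumed_nodes` drops it from the DLC, the network presumably ends up with no outputs, matching the "Network has empty outputs" error. Choosing a supported layer upstream of the `Cast` as the output node (or stripping the `Cast` before freezing) should avoid this.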

fangquan_hu
Join Date: 30 Oct 19
Posts: 2
Posted: Wed, 2019-11-20 00:52

Thank you for your reply!

In fact, I had noticed this, so I removed the unsupported operations such as `ToFloat` and `tf.image.resize` with `align_corners=True`, and I also converted the dtype of the original placeholder from uint8 to float32. But then I got an even stranger error, which confused me again.

When I converted my new .pb file to a .dlc file, the following error log showed up:

==============================================================
Building Network
==============================================================
2019-11-19 16:33:18,604 - 170 - INFO - INFO_TF_BUILDING_INPUT_LAYER: Building layer (INPUT) with node: ImageTensor, shape [1, 737, 1281, 3]
[[1], [1]]
2019-11-19 16:33:22,282 - 165 - ERROR - %
Traceback (most recent call last):
  File "/root/snpe-1.31.0.522/lib/python/snpe/converters/tensorflow/tf_to_ir.py", line 554, in _create_layer
    layer_builder.build_layer(self.graph, self._context, descriptor, inputs, outputs)
  File "/root/snpe-1.31.0.522/lib/python/snpe/converters/tensorflow/layers/convolution.py", line 372, in build_layer
    descriptor.output_names[0])
  File "/root/snpe-1.31.0.522/lib/python/snpe/converters/common/converter_ir/op_graph.py", line 270, in add
    output_shapes = op.infer_shape(input_shapes, len(output_names))
  File "/root/snpe-1.31.0.522/lib/python/snpe/converters/common/converter_ir/op_adapter.py", line 131, in infer_shape
    return snpe_translation_utils.get_conv_output_shape(self, input_shapes)
  File "/root/snpe-1.31.0.522/lib/python/snpe/converters/common/utils/snpe_translation_utils.py", line 70, in get_conv_output_shape
    input_height = input_shapes[0][2]
IndexError: list index out of range
2019-11-19 16:33:22,285 - 165 - ERROR - Conversion failed: build_layer failed with Exception list index out of range in layer ConvolutionLayerBuilder
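For what it's worth, the `[[1], [1]]` printed just before the traceback is the list of input shapes handed to the convolution builder, and `get_conv_output_shape` reads the spatial height from index 2 of the first shape. With two rank-1 shapes instead of one full 4-D tensor shape, that index does not exist, which is exactly the `IndexError`. A stripped-down, hypothetical re-creation of the failing indexing (not SNPE's actual code):

```python
def conv_input_height(input_shapes):
    """Mimic the indexing in snpe_translation_utils.get_conv_output_shape:
    it assumes input_shapes[0] is a full 4-D tensor shape."""
    return input_shapes[0][2]  # raises IndexError for low-rank shapes

print(conv_input_height([[1, 3, 737, 1281]]))  # a 4-D shape works -> 737

try:
    conv_input_height([[1], [1]])  # the shapes from the log above
except IndexError as exc:
    print("IndexError:", exc)  # reproduces the conversion failure
```

In other words, shape inference upstream of that convolution produced degenerate shapes; inspecting what feeds the offending Conv node in the modified graph would be a reasonable next step.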
Do you know what's happening there?
