Forums - Issue converting ssd_mobilenet_v3_small_coco_2020_01_14 from TF

Issue converting ssd_mobilenet_v3_small_coco_2020_01_14 from TF
sullivan18
Join Date: 21 Jul 20
Posts: 2
Posted: Fri, 2020-09-25 16:48

Hello-

 

I found that when converting ssd_mobilenet_v3_small_coco_2020_01_14 from TF's object detection model zoo, snpe-tensorflow-to-dlc enters what appears to be an infinite loop and, after many hours of leaking memory, eventually crashes with an OOM error. Below are the steps I used, for reproducibility. Any support you can provide on importing this model would be highly appreciated, thanks!

 

 

Freezing TF MobileNetV3-SSD:

git clone git@github.com:tensorflow/models.git tensorflow_models
cd tensorflow_models/
git fetch --tags
git tag
git checkout v1.11
cd research/
pip3 install tensorflow==1.11
mkdir ssd_export
export INPUT_TYPE=image_tensor                                                                                                                                               
export PIPELINE_CONFIG_PATH=/home/user01/org/snpe-1.40.0.2130/models/ssd_mobilenet_v3_small_coco_2020_01_14/pipeline.config                                                  
export TRAINED_CKPT_PREFIX=/home/user01/org/snpe-1.40.0.2130/models/ssd_mobilenet_v3_small_coco_2020_01_14/model.ckpt                                                        
export EXPORT_DIR=./ssd_export                                                                                                                                               
pushd ~/org/tf-models/models/research/                                                                                                                                       
python3 object_detection/export_inference_graph.py --input_type=${INPUT_TYPE} --pipeline_config_path=${PIPELINE_CONFIG_PATH} --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} --output_directory=${EXPORT_DIR}

Converting the frozen TF GraphDef to SNPE DLC format (encounters the infinite loop with memory leak):

(venv3.5.9) user01@user01-desktop ~/org/snpe-1.40.0.2130 $ python ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --input_network ../tf-models/research/ssd_export/frozen_inference_graph.pb --input_dim Preprocessor/sub 1,300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --output_path mobilenetv3_ssd.tf.dlc --allow_unconsumed_nodes
WARNING:tensorflow:From ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc:33: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-08-28 13:53:51,193 - 139 - WARNING - From ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc:33: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc:33: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2020-08-28 13:53:51,194 - 139 - WARNING - From ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc:33: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2020-08-28 13:53:51.218351: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3392345000 Hz
2020-08-28 13:53:51.219134: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c9ce74aa60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-28 13:53:51.219169: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From /home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/loader.py:146: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

2020-08-28 13:53:51,220 - 139 - WARNING - From /home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/loader.py:146: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

2020-08-28 13:53:53.987529: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5524] has already been set.
2020-08-28 13:53:53.987607: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5525] has already been set.
2020-08-28 13:53:53.987650: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5522] has already been set.
2020-08-28 13:53:53.987669: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5523] has already been set.
2020-08-28 13:53:53.987695: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5517] has already been set.
2020-08-28 13:53:53.987759: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5515] has already been set.
2020-08-28 13:53:53.987782: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5536] has already been set.
2020-08-28 13:53:53.987816: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5516] has already been set.
2020-08-28 13:53:53.987845: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5518] has already been set.
2020-08-28 13:53:53.987869: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5520] has already been set.
2020-08-28 13:53:53.987886: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at function_ops.cc:69 : Internal: Retval[5521] has already been set.
2020-08-28 13:53:54,088 - 403 - WARNING - ERROR_TF_FALLBACK_TO_ONDEMAND_EVALUATION: Unable to resolve operation output shapes in single pass. Using on-demand evaluation!
2020-08-28 13:53:54,091 - 171 - INFO - INFO_ALL_BUILDING_NETWORK:
==============================================================
Building Network
==============================================================


2020-08-28 23:56:40,778 - 166 - ERROR - Encountered Error:
Traceback (most recent call last):
  File "./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 37, in main
    ir_graph = converter.convert()
  File "/home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/tf_to_ir.py", line 317, in convert
    self._convert_layers()
  File "/home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/tf_to_ir.py", line 352, in _convert_layers
    descriptors = self._resolve_descriptors_from_nodes(graph_ops)
  File "/home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/tf_to_ir.py", line 491, in _resolve_descriptors_from_nodes
    resolved_descriptors = resolver.resolve_layer(graph_matcher, self._graph_helper)
  File "/home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/layers/reshape.py", line 103, in resolve_layer
    _, _, consumed_nodes = graph_helper.get_static_data_info(reshape_input)
  File "/home/user01/org/snpe-1.40.0.2130/lib/python/qti/aisw/converters/tensorflow/util.py", line 456, in get_static_data_info
    queue.extend(head.op.inputs)
MemoryError
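The traceback ends in `get_static_data_info`, where `queue.extend(head.op.inputs)` walks backwards through the graph's inputs. A plausible explanation (an assumption on my part, since the converter's full source isn't shown here) is that this traversal has no visited-set or cycle handling, and the while-loop control-flow ops in the SSD pre/postprocessor (`Merge`, `NextIteration`, `Switch`) form cycles, so the queue grows without bound until memory runs out. A toy sketch of the failure mode and the usual fix:

```python
from collections import deque

# Toy graph with a cycle, mimicking TF while-loop control flow:
# each node maps to the list of its input nodes.
graph = {
    "reshape": ["stack"],
    "stack": ["merge"],
    "merge": ["enter", "next_iteration"],  # cycle: merge <- next_iteration <- switch <- merge
    "enter": ["const"],
    "next_iteration": ["switch"],
    "switch": ["merge", "loop_cond"],
    "loop_cond": ["const"],
    "const": [],
}

def walk_naive(start, max_steps=100_000):
    """Backward traversal with no visited set: on a cyclic graph the
    queue keeps growing, analogous to the converter's MemoryError."""
    queue = deque([start])
    steps = 0
    while queue and steps < max_steps:   # cap so the sketch terminates
        node = queue.popleft()
        queue.extend(graph[node])        # same pattern as queue.extend(head.op.inputs)
        steps += 1
    return steps

def walk_visited(start):
    """Same traversal with a visited set: terminates on any graph."""
    queue, visited = deque([start]), set()
    while queue:
        node = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        queue.extend(graph[node])
    return visited

# The naive walk hits the step cap; the visited walk covers all 8 nodes and stops.
assert walk_naive("reshape") == 100_000
assert len(walk_visited("reshape")) == 8
```

This would also be consistent with the observation below that cutting the graph before the postprocessor (which removes the loops) lets the conversion finish.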
neeraj.partha
Join Date: 1 Aug 19
Posts: 4
Posted: Sun, 2020-09-27 01:49

I'm facing the same error. Please let me know if this issue is resolved.

 

v.olshevskyi
Join Date: 17 Aug 20
Posts: 2
Posted: Fri, 2020-10-09 06:44

I have encountered the same issue converting SSD/SSDLite+MobileNetV2 frozen inference graph to DLC.

TensorFlow version: 1.15
SNPE version: 1.42

The command: 
snpe-tensorflow-to-dlc --input_network ssdlite_mobilenet_v2.pb --input_dim Preprocessor/sub 1,320,320,3 --out_node detection_boxes --output_path ssdlite_mobilenet_v2.dlc --allow_unconsumed_nodes
freezes after outputting

==============================================================
Building Network
==============================================================

However, if I set the out_node to some layer before postprocessor, e.g., BoxPredictor_0/BoxEncodingPredictor/BiasAdd, the conversion does work!
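Cutting the graph at a node before the postprocessor, as in this workaround, means the SSD postprocessing, including non-maximum suppression, has to run on the host after inference. A minimal greedy-NMS sketch in plain Python; the `[y1, x1, y2, x2]` corner box format and the 0.5 IoU threshold are assumptions here, not anything from the SNPE or TF API:

```python
# Greedy non-maximum suppression done on the host, for the case where the
# DLC is cut at the box-predictor outputs and the TF postprocessor is skipped.

def iou(a, b):
    # Intersection-over-union of two [y1, x1, y2, x2] boxes.
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Keep the highest-scoring box, drop boxes that overlap it, repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
assert nms(boxes, scores) == [0, 2]  # box 1 overlaps box 0 heavily and is suppressed
```

Decoding the raw box-encoding outputs against the anchor boxes would also be needed before NMS; that step is model-specific and omitted here.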
 
Is it possible there is a problem with SSD postprocessor conversion?
 
Thank you!
v.olshevskyi
Join Date: 17 Aug 20
Posts: 2
Posted: Tue, 2020-10-13 06:05

SNPE 1.43.0.2307 
TensorFlow 1.15
Python 3.5.2
Network: SSDLite+MobileNetV2 trained with TensorFlow 1.15.
Conversion:
snpe-tensorflow-to-dlc --input_network ${INPUT_MODEL} --input_dim Preprocessor/sub 1,320,320,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --output_path ${OUTPUT_MODEL}.dlc --allow_unconsumed_nodes --show_unconsumed_nodes --debug

Output


==============================================================
Building Network
==============================================================
2020-10-13 16:01:49,731 - 146 - DEBUG - INFO_TF_BUILDING_INPUT_LAYER: Building layer (INPUT) with node: Preprocessor/sub, shape [1, 320, 320, 3]
2020-10-13 16:01:49,731 - 153 - DEBUG_1 - Added buffer named Preprocessor/sub:0 of shape [1, 320, 320, 3]
2020-10-13 16:03:06,855 - 182 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Preprocessor/map/while/LoopCond) of type (LoopCond) is not supported by converter.
2020-10-13 16:03:06,855 - 182 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField/Shape) of type (Shape) is not supported by converter.

2020-10-13 16:03:06,855 - 182 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/range) of type (Range) is not supported by converter.
2020-10-13 16:03:06,855 - 182 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Preprocessor/map/while/Switch) of type (Switch) is not supported by converter.
2020-10-13 16:03:06,855 - 182 - WARNING - WARNING_TF_OP_NOT_SUPPORTED: Operation (Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3) of type (TensorArrayGatherV3) is not supported by converter.
...
2020-10-13 16:03:07,692 - 167 - DEBUG_3 - no validation target specified. Using defaults.
2020-10-13 16:03:07,693 - 177 - INFO - INFO_DLC_SAVE_LOCATION: Saving model at ssdlite_mobilenet_v2_005.dlc
2020-10-13 16:03:07,818 - 177 - INFO - INFO_CONVERSION_SUCCESS: Conversion completed successfully
 
 

snpe-dlc-info reports the following for it:

DLC info for: ssdlite_mobilenet_v2.dlc
Model Version: N/A
Model Copyright:N/A
Id,Name,Type,Inputs,Outputs,Out Dims,Runtimes,Parameters
0,Preprocessor/sub:0,data,Preprocessor/sub:0,Preprocessor/sub:0,1x320x320x3,A D G C,input_preprocessing: passthrough
,,,,,,,input_type: default
Note: The supported runtimes column assumes a processor target of Snapdragon 835 (8998)
Key : A:AIP
      D:DSP
      G:GPU
      C:CPU
 
Total parameters: 0 (0 MB assuming single precision float)
Total MACs per inference: 0 (0%)
"Converter command: snpe-tensorflow-to-dlc show_unconsumed_nodes=False allow_unconsumed_nodes=True enable_strict_validation=False disable_batchnorm_folding=False input_type=[] out_node=['detection_classes', 'detection_boxes', 'detection_scores'] model_version=None input_dim=[['Preprocessor/sub', '1,320,320,3']] udo_config_paths=None validation_target=[] debug=0 input_encoding=[] copyright_file=None"
Quantizer command: N/A
DLC created with converter version: 1.43.0.2307
Layers used by DLC: DATA
Est. Steady-State Memory Needed to Run: 1.2 MiB
What could be the problem?
Thanks!

Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries (“Qualcomm”). The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.