Forums - All supported Tensorflow2 models

All supported Tensorflow2 models
amithgowda.k
Join Date: 30 Aug 21
Posts: 1
Posted: Wed, 2021-09-01 01:06

Hello,

I am looking for a complete list of Tensorflow2 models (preferably from the Tensorflow Model Zoo, or others such as YOLO) that are fully supported by SNPE. I am having trouble converting a few custom-trained Tensorflow2 models, such as F-RCNN, YOLOv4, and YOLOv4 tiny, to DLC. Is there any way to successfully convert a model that may not be fully supported by the snpe-tensorflow-to-dlc tool?

Regards,

Amith

ap.arunraj
Join Date: 20 Apr 20
Posts: 21
Posted: Tue, 2021-10-05 18:08

Hello,
We were able to convert the YOLO V3 model in ONNX (https://github.com/onnx/models/tree/master/vision/object_detection_segme...) to DLC using the following command:
snpe-onnx-to-dlc --input_network yolov3-10.onnx -d input_1 1,416,416,3
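Before running the converter, it can help to confirm the model's input names and shapes, since -d must name the graph input exactly. Below is a minimal Python sketch using the onnx package; the file name yolov3-10.onnx is assumed to be the model downloaded from the link above.

    # Minimal sketch: print each graph input's name and shape so the
    # values passed to `snpe-onnx-to-dlc -d` can be double-checked.
    import onnx

    model = onnx.load("yolov3-10.onnx")  # assumed local path to the downloaded model
    for inp in model.graph.input:
        dims = [d.dim_value if d.dim_value else d.dim_param
                for d in inp.type.tensor_type.shape.dim]
        print(inp.name, dims)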
For YOLO V3 tiny we used another approach to convert it to DLC. The steps are as follows:

  1. Download the model weights (Darknet) from https://pjreddie.com/darknet/yolo/
  2. Create a Keras script (TensorFlow 1.13.2 as backend) defining the tiny YOLO V3 architecture.
  3. Load the Darknet model weights in Keras (make sure the model architecture is the same).
  4. Save the model in .pb format (a sketch of steps 2-4 is shown after this list).
  5. Convert the .pb to DLC using the following command:
    snpe-tensorflow-to-dlc --input_network <path_to_frozen_graph> --input_dim input_1 1,416,416,3 --out_node conv2d_9/BiasAdd --out_node conv2d_12/BiasAdd --output_path <Path_to_DLC>.dlc
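A minimal sketch of steps 2-4 under TF 1.13.x is shown below. build_tiny_yolov3() and load_darknet_weights() are hypothetical placeholders for your own architecture definition and weight-loading code, and the output node names are only examples matching the command above.

    # Hypothetical sketch of steps 2-4 (TF 1.13.x): build the Keras model,
    # load the Darknet weights, then freeze the graph to .pb for
    # snpe-tensorflow-to-dlc.
    import tensorflow as tf

    tf.keras.backend.set_learning_phase(0)              # inference graph only, no training ops

    model = build_tiny_yolov3()                         # step 2: Keras tiny YOLO V3 definition (placeholder)
    load_darknet_weights(model, "yolov3-tiny.weights")  # step 3: copy Darknet weights in (placeholder)

    sess = tf.keras.backend.get_session()
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def,
        [out.op.name for out in model.outputs])         # e.g. conv2d_9/BiasAdd, conv2d_12/BiasAdd
    tf.train.write_graph(frozen, ".", "tiny_yolov3_frozen.pb", as_text=False)  # step 4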
1138589626
Join Date: 21 Aug 21
Posts: 1
Posted: Tue, 2021-12-14 18:37

Hello,

When I use the following command:

snpe-onnx-to-dlc --input_network yolov3-10.onnx -d input_1 1,416,416,3

I hit the following error:

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/onnx/onnx_to_ir.py", line 177, in convert

    self.graph)

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 51, in apply_method_to_op

    return translation.apply_method(method_name, *args, **kwargs)

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 17, in apply_method

    return self.indexed_methods[method_name](*args, **kwargs)

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 120, in add_op

    op = self.extract_parameters(src_op, graph)

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/onnx/data_translations.py", line 1239, in extract_parameters

    input_buf = graph.get_buffer(input_name)

  File "/home/worker/shared_data/snpe/snpe-1.57.0.3124/lib/python/qti/aisw/converters/common/converter_ir/op_graph.py", line 1188, in get_buffer

    return self.buffers[buffer_name]

KeyError: 'image_shape'

2021-12-15 10:28:04,482 - 209 - ERROR - Node TFNodes/yolo_evaluation_layer_1/Squeeze: 'image_shape'

Which versions of ONNX and SNPE are you using?

weihuan
Join Date: 12 Apr 20
Posts: 270
Posted: Mon, 2021-12-27 17:29

Dear customer,

What SNPE version did you use?

Currently, SNPE supports TF 1.15 only; we do have support for the TF2 environment, but we recommend developing and training the model based on TF 1.15.

BR.

Wei

