Forums - snpe-tensorflow-to-dlc conversion fails

snpe-tensorflow-to-dlc conversion fails
mayu.fujii
Join Date: 10 Dec 20
Posts: 1
Posted: Tue, 2021-10-26 18:47

Hello,

I am trying to convert a model from .pb format to .dlc.

Command Used:

snpe-tensorflow-to-dlc --input_network ./centernet_mobilenetv2_fpn_od/saved_model/saved_model.pb --input_dim input "1,320,320,3" --out_node "output_0" --out_node "output_1" --out_node "output_2" --out_node "output_3" --output_path test.dlc

 
 
2021-10-27 10:02:53.280999: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-10-27 10:02:53.998485: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-27 10:02:54.020506: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2599990000 Hz
2021-10-27 10:02:54.021198: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13f7000 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-10-27 10:02:54.021239: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-10-27 10:02:54.022766: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-10-27 10:02:54.106843: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.107174: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x11aa240 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-10-27 10:02:54.107190: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 2060, Compute Capability 7.5
2021-10-27 10:02:54.107331: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.107591: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: GeForce RTX 2060 computeCapability: 7.5
coreClock: 1.2GHz coreCount: 30 deviceMemorySize: 5.79GiB deviceMemoryBandwidth: 312.97GiB/s
2021-10-27 10:02:54.107611: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-10-27 10:02:54.108792: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-10-27 10:02:54.109840: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-10-27 10:02:54.110030: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-10-27 10:02:54.111199: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-10-27 10:02:54.111846: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-10-27 10:02:54.114236: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-10-27 10:02:54.114338: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.114668: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.114911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-10-27 10:02:54.114931: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-10-27 10:02:54.417645: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-27 10:02:54.417695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 
2021-10-27 10:02:54.417702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N 
2021-10-27 10:02:54.417868: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.418189: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-27 10:02:54.418438: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5258 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5)
2021-10-27 10:02:54,422 - 183 - ERROR - Conversion FAILED!
Traceback (most recent call last):
  File "/home/edgeai/workspace/snpe-1.51.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 71, in <module>
    main()
  File "/home/edgeai/workspace/snpe-1.51.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 67, in main
    raise e
  File "/home/edgeai/workspace/snpe-1.51.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 51, in main
    custom_op_factory=UDOFactory())
  File "/home/edgeai/workspace/snpe-1.51.0/lib/python/qti/aisw/converters/tensorflow/tf_to_ir.py", line 323, in __init__
    saved_model_tag, saved_model_signature_key)
  File "/home/edgeai/workspace/snpe-1.51.0/lib/python/qti/aisw/converters/tensorflow/loader.py", line 60, in __init__
    saved_model_tag, saved_model_signature_key)
  File "/home/edgeai/workspace/snpe-1.51.0/lib/python/qti/aisw/converters/tensorflow/loader.py", line 157, in load
    graph_def = self.__import_graph(graph_path, session, out_node_names, saved_model_tag, saved_model_signature_key)
  File "/home/edgeai/workspace/snpe-1.51.0/lib/python/qti/aisw/converters/tensorflow/loader.py", line 214, in __import_graph
    graph_def = cls.__import_from_frozen_graph(graph_path)
  File "/home/edgeai/workspace/snpe-1.51.0/lib/python/qti/aisw/converters/tensorflow/loader.py", line 265, in __import_from_frozen_graph
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
 
 
Any help in solving this issue will be greatly appreciated.
SahilBandar
Join Date: 23 May 18
Posts: 37
Posted: Tue, 2021-11-02 05:21
Hi,
 
In the command you have used, the --input_network path points to saved_model.pb; from this, it looks like a TensorFlow 2 SavedModel.
In TensorFlow 2, the model saving format changed to the SavedModel format, whose contents are described below.
 
A TF2 SavedModel folder contains:
1) saved_model.pb: stores the graph definition of your model architecture.
2) assets folder: holds assets required by the model (mostly empty).
3) variables folder: holds your trained model weights.
 
The folder where you saved the model is what counts as your TF2 SavedModel; a quick way to confirm it loads correctly is sketched below.
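
As a quick sanity check (plain TensorFlow, nothing SNPE-specific), you can try loading the directory and printing its serving signature. The path below is taken from your command, and "serving_default" is assumed to be the signature key, which is the usual default for exported TF2 models:

import tensorflow as tf

# Point at the SavedModel directory, not at saved_model.pb inside it
saved_model_dir = "./centernet_mobilenetv2_fpn_od/saved_model"
loaded = tf.saved_model.load(saved_model_dir)

# Inspect the serving signature's inputs and outputs
infer = loaded.signatures["serving_default"]
print("Inputs: ", infer.structured_input_signature)
print("Outputs:", infer.structured_outputs)

If this load fails, the folder is not a complete SavedModel (for example, the variables folder is missing).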
 
The command you have used is:
snpe-tensorflow-to-dlc --input_network ./centernet_mobilenetv2_fpn_od/saved_model/saved_model.pb --input_dim input "1,320,320,3" --out_node "output_0" --out_node "output_1" --out_node "output_2" --out_node "output_3" --output_path test.dlc
 
Assuming you have all the required model files in your saved_model folder, please change the above command to the command given below, which points --input_network at the SavedModel directory instead of the .pb file:
snpe-tensorflow-to-dlc --input_network ./centernet_mobilenetv2_fpn_od/saved_model/ --input_dim input "1,320,320,3" --out_node "output_0" --out_node "output_1" --out_node "output_2" --out_node "output_3" --output_path test.dlc
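
Before running it, it is also worth double-checking that the --out_node names ("output_0" ... "output_3") actually exist in the exported graph. The saved_model_cli tool that ships with TensorFlow can list a signature's inputs and outputs; the "serve" tag set and "serving_default" signature key below are the common defaults and are an assumption about your export:

saved_model_cli show --dir ./centernet_mobilenetv2_fpn_od/saved_model --tag_set serve --signature_def serving_default

If the names it reports differ from output_0..output_3, adjust the --out_node arguments accordingly. The traceback in your post also shows the converter accepting a SavedModel tag and signature key, so check snpe-tensorflow-to-dlc --help if your export uses non-default values.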

Hope this will resolve your problem.

Thanks & Regards,
Sahil Bandar
