Forums - tensorflow faster rcnn model convert to dlc failed

tensorflow faster rcnn model convert to dlc failed
yuanhuayong
Join Date: 11 Sep 19
Posts: 1
Posted: Wed, 2019-09-11 04:31

[Issue Description]:
I'm trying to convert a Faster R-CNN model to DLC format. The model is from the official TensorFlow model zoo. Download link: http://download.tensorflow.org/models/object_detection/faster_rcnn_incep... . I used the snpe-tensorflow-to-dlc tool to convert the .pb file to a .dlc file, but the conversion failed.

[Failure Rate in %]: 100%

[Reproduce Steps]:
System information:
OS: Ubuntu 16.04.6 LTS
Python: 2.7
SNPE: snpe-1.27.1
TensorFlow (CPU): 1.12.0

1. Set up SNPE-1.27.1 following the official guide (https://developer.qualcomm.com/docs/snpe/setup.html).
2. Set up TensorFlow 1.12.0 (https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.12.0-cp...).
3. Download the .pb files (http://download.tensorflow.org/models/object_detection/faster_rcnn_incep...).
4. Use the following command to convert:
 ~/work/proj/snpe/snpe-sdk/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph frozen_inference_graph.pb --input_dim image_tensor 1,480,853,3 --out_node detection_boxes --out_node detection_scores --out_node detection_classes --out_node num_detections --dlc test2.dl
------- error message ---
2019-09-11 16:22:29.521309: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-09-11 16:22:38.761646: F tensorflow/compiler/jit/deadness_analysis.cc:639] Check failed: it != predicate_map_.end() _SINK
Aborted (core dumped)

[Initial Analysis]:
It seems that TensorFlow 1.12.0 is not compatible with this .pb file. The official guide suggests that the "model training environment" and the "converting environment" should be the same, i.e. I must use the same TensorFlow version and Python version that were used in the training process. But this page (https://github.com/tensorflow/models/blob/master/research/object_detecti...) says: "... Our frozen inference graphs are generated using the v1.12.0 release version of Tensorflow and we do not guarantee that these will work with other versions ...". That's fine: we are using the same TensorFlow version! So I'm puzzled about what is going wrong...
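As an aside, one way to sanity-check the downloaded frozen graph (for example, to confirm that the --out_node names really exist in it) without going through the TensorFlow graph loader that crashes in deadness_analysis.cc is to walk the serialized GraphDef directly at the protobuf wire-format level. The sketch below is illustrative, not an SNPE or TensorFlow tool; it assumes only the public graph.proto layout, where GraphDef stores repeated NodeDef messages in field 1 and each NodeDef stores its name string in its own field 1:

```python
def read_varint(buf, pos):
    """Decode a protobuf varint starting at pos; return (value, new_pos)."""
    result, shift = 0, 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not b & 0x80:
            return result, pos
        shift += 7

def graphdef_node_names(data):
    """List node names from a serialized GraphDef without importing TensorFlow.

    Assumes the public graph.proto layout: GraphDef field 1 holds the
    repeated NodeDef messages, and NodeDef field 1 is the name string.
    Every other field is skipped according to its wire type.
    """
    names, pos = [], 0
    while pos < len(data):
        tag, pos = read_varint(data, pos)
        field, wire = tag >> 3, tag & 7
        if wire == 2:  # length-delimited payload
            length, pos = read_varint(data, pos)
            payload = data[pos:pos + length]
            pos += length
            if field == 1:  # a NodeDef message
                ntag, npos = read_varint(payload, 0)
                if ntag >> 3 == 1 and ntag & 7 == 2:  # its name string
                    nlen, npos = read_varint(payload, npos)
                    names.append(payload[npos:npos + nlen].decode("utf-8"))
        elif wire == 0:  # varint field: consume and ignore
            _, pos = read_varint(data, pos)
        elif wire == 5:  # fixed 32-bit field
            pos += 4
        elif wire == 1:  # fixed 64-bit field
            pos += 8
        else:
            break  # unknown wire type: stop rather than misparse
    return names

if __name__ == "__main__":
    # Hand-built two-node GraphDef (names only) just for demonstration.
    def node(name):
        body = b"\x0a" + bytes([len(name)]) + name.encode()
        return b"\x0a" + bytes([len(body)]) + body
    demo = node("image_tensor") + node("detection_boxes")
    print(graphdef_node_names(demo))  # ['image_tensor', 'detection_boxes']
```

Feeding it the real file, e.g. graphdef_node_names(open("frozen_inference_graph.pb", "rb").read()), would list every node in the graph, so the detection_boxes / detection_scores / detection_classes / num_detections output names can be verified before blaming the TensorFlow version.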
