SNPE bug when converting TensorFlow Faster R-CNN model
yuanhuayong
Join Date: 11 Sep 19
Posts: 2
Posted: Tue, 2019-09-24 20:54

[Issue Description]:

I posted a similar issue on the forum before but got no reply. I have since read the source code and have new findings.

My goal is to convert a faster_rcnn model to DLC format. The model is from the official TensorFlow model zoo. [download link]

I use the snpe-tensorflow-to-dlc tool to convert the pb file to a dlc file, but it fails.

 

[Failure Rate in %]: 100%

 

[System Information]:

OS Ubuntu 16.04.6 LTS

Python 2.7

SNPE snpe-1.27.1

TensorFlow (CPU) 1.12.0

 

[Reproduction Steps]:

1. Set up SNPE-1.27.1 following the official guide.

2. Set up TensorFlow 1.12.0 [download]

3. Download the pb files [download]()

4. Use the following command to convert:

```
~/work/proj/snpe/snpe-sdk/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc \
    --graph frozen_inference_graph.pb \
    --input_dim image_tensor 1,480,853,3 \
    --out_node detection_boxes \
    --out_node detection_scores \
    --out_node detection_classes \
    --out_node num_detections \
    --dlc test2.dlc
```

Error message:

```
2019-09-11 16:22:29.521309: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-09-11 16:22:38.761646: F tensorflow/compiler/jit/deadness_analysis.cc:639] Check failed: it != predicate_map_.end() _SINK
Aborted (core dumped)
```

[Initial Analysis]:

I have successfully run inference with this pb file using TF 1.12.0, so the TF version is compatible with the pb file. The official README also says: "... Our frozen inference graphs are generated using the v1.12.0 release version of Tensorflow and we do not guarantee that these will work with other versions ...". Since I am using the same TensorFlow version, the problem does not come from a TF version mismatch.
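For reference, the standalone check was roughly the following (a minimal sketch only; the zero-filled dummy image and the 1,480,853,3 shape mirror the conversion command above, and the output node names are the ones passed to --out_node):

```
import numpy as np
import tensorflow as tf  # 1.12.0

# Load the frozen graph downloaded from the model zoo.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Run one dummy image through the same output nodes used for conversion.
with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 480, 853, 3), dtype=np.uint8)
    fetches = [graph.get_tensor_by_name(name + ":0") for name in
               ("detection_boxes", "detection_scores",
                "detection_classes", "num_detections")]
    outputs = sess.run(fetches, feed_dict={"image_tensor:0": image})
    print([o.shape for o in outputs])
```

A run like this completes without errors on TF 1.12.0.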

So I decided to read the source code of the snpe-tensorflow-to-dlc script. The error comes from "lib/python/snpe/converters/tensorflow/util.py", line 242:

```
outputs = self._session.run(fetches=requiring_evaluation, feed_dict=input_tensors)
```
The script uses the input tensor (initialized with zeros) to evaluate all the intermediate targets in the "requiring_evaluation" list, but this fails with the error:
```
2019-09-25 10:02:17.757480: F tensorflow/compiler/jit/deadness_analysis.cc:639] Check failed: it != predicate_map_.end() _SINK
```


But if I change the code so that, say, only the first 10 targets are computed:
```
outputs = self._session.run(fetches=requiring_evaluation[0:10], feed_dict=input_tensors)
```


The error no longer occurs. I also tried calling "requiring_evaluation.sort()" before session.run, and that runs OK as well. This is very strange.
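Given those two observations, a possible local workaround might be to evaluate the fetch list in small chunks at that point in util.py. This is only a sketch under the assumption that chunked evaluation avoids whatever condition triggers the deadness_analysis check failure; run_in_chunks and the chunk size of 10 are my own names and choices, not part of SNPE:

```
# Hypothetical replacement for the single session.run call in util.py.
# "session", "fetch_list" and "feed" stand in for the converter's
# self._session, requiring_evaluation and input_tensors.
def run_in_chunks(session, fetch_list, feed, chunk_size=10):
    """Evaluate fetch_list a few tensors at a time instead of all at once."""
    results = []
    for start in range(0, len(fetch_list), chunk_size):
        chunk = fetch_list[start:start + chunk_size]
        results.extend(session.run(fetches=chunk, feed_dict=feed))
    return results

# Line 242 would then become something like:
# outputs = run_in_chunks(self._session, requiring_evaluation, input_tensors)
```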

I googled the issue and found a bug fix in TensorFlow: https://github.com/tensorflow/tensorflow/commit/0b3c3c55e177b35d38ba3317...

I don't know whether this is related to the current issue, but it may be helpful for fixing this bug.

