Forums - ERROR_TF_OPERATION_NOT_FOUND: Operation with type BiasAdd not found

ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Tue, 2017-08-08 03:19

Hi ,

I am trying to convert an Inception-v3 model trained with the TensorFlow framework to the .dlc format.

My system configuration is:

Ubuntu 14.04

Python-2.7

Tensorflow-1.2

I have followed the steps for installing SNPE 1.2.2 as mentioned in the setup document.

After training the model, I get the following output log:

2017-08-08 12:48:16,650 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (input) operation(s) not consumed by converter: [u'PlaceholderWithDefault'].
2017-08-08 12:48:16,656 - 125 - ERROR - Encountered Error: ERROR_TF_OPERATION_NOT_FOUND: Operation with type BiasAdd not found within [(u'final_training_ops/Wx_plus_b/add', u'Add')]
Traceback (most recent call last):
  File "/home/ubuntu/snpe-1.2.2/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 119, in main
    converter.convert(args.dlc, args.model_version, converter_command)
  File "/home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/converter.py", line 324, in convert
    self._convert_layers()
  File "/home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/converter.py", line 353, in _convert_layers
    self._resolve_layers_from_scope(scope_name, scope_ops)
  File "/home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/converter.py", line 373, in _resolve_layers_from_scope
    candidate_descriptor = resolver.resolve_layer(scope_name, remaining_ops, graph_helper)
  File "/home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/layers/fullyconnected.py", line 56, in resolve_layer
    bias_op = GraphHelper.filter_single_op_by_type(output_ops, 'BiasAdd')
  File "/home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/util.py", line 169, in filter_single_op_by_type
    code_to_message.get_message('ERROR_TF_OPERATION_NOT_FOUND')(operation_type, operations_message))
OperationNotFoundError: ERROR_TF_OPERATION_NOT_FOUND: Operation with type BiasAdd not found within [(u'final_training_ops/Wx_plus_b/add', u'Add')]
INFO: Creating inception_v3_quantized.dlc quantized model
[INFO] InitializeStderr: DebugLog initialized.
[INFO] Reading DLC: /home/ubuntu/snpe-1.2.2/models/ASR_inception_v3/dlc/inception_v3.dlc
[ERROR] DLC file /home/ubuntu/snpe-1.2.2/models/ASR_inception_v3/dlc/inception_v3.dlc doesn't exist!
[INFO] DebugLog shutting down.
INFO: Setup inception_v3 completed.
 

What could be the cause of this error, and how can I resolve it?

 

Thanks and regards,

Vishal

moljaca moderator
Join Date: 25 Jul 17
Location: San Diego
Posts: 40
Posted: Tue, 2017-08-08 17:15

Hi Vishal,

Thank you for your interest in using the Snapdragon NPE.

I have a couple of questions for you.

Is your Inception-v3 model the same architecture as the Inception-v3 model described in the tutorial inside the SDK documentation: $SNPE_ROOT/doc/html/tutorial_inceptionv3.html?

Were you able to successfully complete the above-mentioned tutorial? You may want to look at the $SNPE_ROOT/models/inception_v3/scripts/setup_inceptionv3.py script mentioned in the tutorial to see how the model has been prepared for SNPE. For example, you may need to run TensorFlow's optimize_for_inference.py script.
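For reference, the optimize_for_inference step can be driven the same way the setup script builds its commands. The sketch below is only an illustration: the paths are placeholders, and the node names 'Mul' and 'final_result' are the ones the stock Inception-v3 tutorial uses; a retrained graph may use different names.

```python
import os

# Hypothetical paths -- substitute your own model directory and filenames.
tensorflow_dir = '/home/ubuntu/snpe-1.2.2/models/inception_v3/tensorflow'
pb_filename = 'tensorflow_inception_graph.pb'
opt_filename = 'tensorflow_inception_graph_opt.pb'

# Command to strip training-only nodes from the frozen graph before
# handing it to the SNPE converter.
cmd = ['python', '-m', 'tensorflow.python.tools.optimize_for_inference',
       '--input', os.path.join(tensorflow_dir, pb_filename),
       '--output', os.path.join(tensorflow_dir, opt_filename),
       '--input_names', 'Mul',
       '--output_names', 'final_result']

print(' '.join(cmd))
```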

Hope this helps.

Thanks

 

ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Tue, 2017-08-08 20:28

Hi moljaca,

Thank you for your quick response.

I retrained the Inception-v3 model using a custom dataset. I followed the retraining steps described here:

https://www.tensorflow.org/tutorials/image_retraining

The model was successfully trained, so I believe the architecture remains the same, but I think the final layer is a softmax layer.

As suggested by moljaca, I ran optimize_for_inference.py the same way as in the SNPE sample code and was able to get the optimized model file as *_opt.pb. But after this, the SNPE TensorFlow-to-DLC conversion function is called, and that is where I am facing the issues.

The command used for the .dlc conversion is:

cmd = ['snpe-tensorflow-to-dlc',
       '--graph', os.path.join(tensorflow_dir, pb_filename),
       '--input_dim', 'Mul', '299,299,3',
       '--out_node', 'final_result',
       '--dlc', os.path.join(dlc_dir, INCEPTION_V3_DLC_FILENAME),
       '--allow_unconsumed_nodes']
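To make the list above self-contained, here is a minimal runnable sketch of how such a command could be invoked, mirroring what the setup script does. The directory and file names are assumptions standing in for the real ones, and it is written for Python 3 (the thread itself uses Python 2.7); the converter is only launched if it is actually on the PATH.

```python
import os
import shutil
import subprocess

# Hypothetical paths -- substitute your own model and output directories.
tensorflow_dir = '/home/ubuntu/snpe-1.2.2/models/ASR_inception_v3/tensorflow'
dlc_dir = '/home/ubuntu/snpe-1.2.2/models/ASR_inception_v3/dlc'
pb_filename = 'tensorflow_inception_graph_opt.pb'

cmd = ['snpe-tensorflow-to-dlc',
       '--graph', os.path.join(tensorflow_dir, pb_filename),
       '--input_dim', 'Mul', '299,299,3',
       '--out_node', 'final_result',
       '--dlc', os.path.join(dlc_dir, 'inception_v3.dlc'),
       '--allow_unconsumed_nodes']

# Only run the converter if it is installed and on the PATH.
if shutil.which('snpe-tensorflow-to-dlc'):
    subprocess.check_call(cmd)
else:
    print('converter not found; command would be:', ' '.join(cmd))
```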

Before using "--allow_unconsumed_nodes", I was getting the following error:

"

INFO: Converting tensorflow_inception_graph_opt.pb to SNPE DLC format
2017-08-08 12:41:44.849409: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-08 12:41:47,552 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (input) operation(s) not consumed by converter: [u'PlaceholderWithDefault'].
2017-08-08 12:41:47,552 - 122 - ERROR - Conversion failed: Some nodes in the Tensorflow graph were not resolved to a layer!
"

I hope the above information helps you resolve the issue I am facing.

Regards,

Vishal

ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Mon, 2017-08-21 04:01

Hi all,

I was able to resolve this issue by making a minor change in the fullyconnected.py file present in the SDK at /home/ubuntu/snpe-1.2.2/lib/python/converters/tensorflow/layers/fullyconnected.py

At line 56, I changed

bias_op = GraphHelper.filter_single_op_by_type(output_ops, 'BiasAdd')

to

bias_op = GraphHelper.filter_single_op_by_type(output_ops, 'add')
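The reason this workaround helps is visible in the original traceback: the retrained graph's fully-connected scope ends in a plain Add op (u'final_training_ops/Wx_plus_b/add', u'Add') rather than a BiasAdd, so a filter looking for 'BiasAdd' finds nothing. Below is a toy sketch of that filtering logic; the real GraphHelper in the SDK works on tf.Operation objects, and the exact type string it compares against may differ in case.

```python
from collections import namedtuple

# Toy stand-in for a TensorFlow operation.
Op = namedtuple('Op', ['name', 'type'])


class OperationNotFoundError(Exception):
    pass


def filter_single_op_by_type(operations, op_type):
    """Return the first op whose type matches op_type, else raise."""
    matches = [op for op in operations if op.type == op_type]
    if not matches:
        raise OperationNotFoundError(
            'Operation with type %s not found within %s'
            % (op_type, [(op.name, op.type) for op in operations]))
    return matches[0]


# A retrained Inception-v3 graph ends the fully-connected scope in Add:
output_ops = [Op('final_training_ops/Wx_plus_b/add', 'Add')]

# Filtering for 'BiasAdd' fails, reproducing the reported error...
try:
    filter_single_op_by_type(output_ops, 'BiasAdd')
except OperationNotFoundError as e:
    print(e)

# ...while filtering for the op type actually present succeeds.
bias_op = filter_single_op_by_type(output_ops, 'Add')
print(bias_op.name)
```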

I was able to convert the model to .dlc successfully and run it on the CPU and GPU.

But I had an error on the DSP due to a reshape layer being added during retraining.

This is a sort of workaround. Hope it helps.

 

Regards

ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Sun, 2017-09-03 21:42

Hi All,

 

I am posting this comment because this issue is still not fully resolved.

I hope the Qualcomm SNPE team can help with this issue.

 

Thanks and Regards

Vishal


Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries (“Qualcomm”). The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.