Wrong output size of Deconvolution Layer when converting from .pb to .dlc

samuel_stucky
Join Date: 17 May 17
Posts: 1
Posted: Mon, 2017-07-31 07:25

Hello,

In my neural network I'm using a merge layer to combine the outputs of a Conv2D layer and a Deconv2D layer. However, when I try to convert the model from .pb to .dlc with the snpe-tensorflow-to-dlc script, I get the following error:

Input:

python ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph "model_minimal.pb" -i "input_1" 64,64,3 --out_node "conv2d_15/Sigmoid"

Error Message:

/home/gogol/snpe-1.2.2/lib/python/converters/tensorflow/layers/concat.py:70: RuntimeWarning: error_code=1004; error_message=Layer parameters combination is invalid. Layer concatenate_1/concat: input conv2d_transpose_1/Relu:0 has size 17 along axis 0, should match output dim (16); error_component=Model Validation; line_no=390; thread_id=140217424582400
  descriptor.axis)
 

The line used to create the deconvolution layer in Keras:

Conv2DTranspose(8, (3, 3), padding='same', strides=(2, 2), activation=activation,
                        kernel_initializer='he_normal')

I assume there is a problem with the handling of the padding parameter. The previous layer has an output size of 8x8x8. Since I use padding='same', I would expect the output size to be 16x16x8; a size of 17 is what 'valid' padding would produce. Could this error be caused by the conversion script?
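For reference, TensorFlow computes the output length of a transposed convolution as `input * stride` for 'same' padding and `(input - 1) * stride + kernel` for 'valid' padding, which matches the 16-vs-17 discrepancy exactly. A minimal sketch (the helper name is mine, not part of any API):

```python
def deconv_output_size(input_size, kernel_size, stride, padding):
    """Output length of a 1-D transposed convolution, per TensorFlow's shape rules."""
    if padding == 'same':
        return input_size * stride
    elif padding == 'valid':
        return (input_size - 1) * stride + kernel_size
    raise ValueError("unknown padding: %s" % padding)

# 8-wide input, 3x3 kernel, stride 2 -- as in the Conv2DTranspose above
print(deconv_output_size(8, 3, 2, 'same'))   # 16, the size Keras produces
print(deconv_output_size(8, 3, 2, 'valid'))  # 17, the size in the converter error
```

So the converter appears to be applying the 'valid' formula even though the layer was built with padding='same'.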

When the model is loaded on Android, the application quits with the same error. The .pb file itself should be correct, since I can load and run it with TensorFlow's InferenceInterface on Android.

Thanks for any help!

 

moljaca moderator
Join Date: 25 Jul 17
Location: San Diego
Posts: 40
Posted: Mon, 2017-07-31 19:25

Hi,

Thanks for the interest in Snapdragon NPE.
Would it be possible to send us a test TF model that exhibits this behavior, so we can try reproducing it here?
Best regards.
 
CN
Join Date: 8 Sep 17
Posts: 12
Posted: Fri, 2017-09-08 02:41
I'm facing the same issue.
 
Here is a simple, stand-alone TensorFlow program that reproduces the same issue. It classifies MNIST; the data is downloaded automatically, and the ProtoBuf graph is created by the code itself. Here is the code:
 
 
The command I used to create the DLC is:
snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc
The message I get from the tool is:
~/snpe-1.4.0/lib/python/converters/tensorflow/layers/concat.py:71: RuntimeWarning: error_code=1004; error_message=Layer parameters combination is invalid. Layer concat/concat: input input:0 has size 28 along axis 0, should match output dim (29); error_component=Model Validation; line_no=390; thread_id=140200908379904 descriptor.axis)
~/snpe-1.4.0/lib/python/converters/tensorflow/layers/fullyconnected.py:80: RuntimeWarning: error_code=1004; error_message=Layer parameters combination is invalid. Layer fc/fully_connected/MatMul: mismatch between size of input concat/concat:0 (10933) and width of weight matrix (10192); error_component=Model Validation; line_no=647; thread_id=140200908379904
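The numbers in these messages are consistent with the same 'same'-vs-'valid' mix-up in the deconvolution shape inference. This is my reading of the sizes, not confirmed against the (missing) source listing: if the deconv output is concatenated with the 28x28x1 input for a total of 13 channels, then the graph's fully-connected weights expect 28*28*13 elements, while a 'valid'-style estimate of the deconv output (e.g. (14 - 1) * 2 + 3 = 29 for a 14-wide input, stride 2, kernel 3) yields 29*29*13:

```python
# Hypothetical shape trace matching the sizes in the converter errors.
# The channel count (13) is inferred from the reported numbers, not from
# the original code, which is missing from the thread.
channels = 13
expected = 28 * 28 * channels   # deconv restored to 28x28 under 'same' padding
computed = 29 * 29 * channels   # converter's 'valid'-style estimate of 29x29
print(expected)  # 10192 -- the width of the FC weight matrix in the error
print(computed)  # 10933 -- the size the converter assigned to concat/concat:0
```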
 

 

moljaca moderator
Join Date: 25 Jul 17
Location: San Diego
Posts: 40
Posted: Fri, 2017-09-15 13:58

Hi,

We can confirm this is an issue with the converter, and we will address it in a future SNPE SDK release. If we find a workaround in the meantime, we will post it here.

Thanks

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Wed, 2017-10-11 00:57

Just tested again with the latest SNPE release (1.6.0), and the bug is still there. The output is different now, though:

2017-10-11 09:56:21,562 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/Shape) not consumed by converter: Shape.
2017-10-11 09:56:21,562 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice) not consumed by converter: StridedSlice.
2017-10-11 09:56:21,563 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice_1) not consumed by converter: StridedSlice.
2017-10-11 09:56:21,563 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/mul) not consumed by converter: Mul.
2017-10-11 09:56:21,563 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice_2) not consumed by converter: StridedSlice.
2017-10-11 09:56:21,563 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/mul_1) not consumed by converter: Mul.
2017-10-11 09:56:21,563 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/stack) not consumed by converter: Pack.
2017-10-11 09:56:21,563 - 123 - ERROR - Conversion failed: Some operations in the Tensorflow graph were not resolved to a layer!
dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Wed, 2017-10-11 19:10

Can you post the result of using the option --allow_unconsumed_nodes?

Please post the exact command line used for conversion. We have been able to convert the excerpt graph provided in this thread.

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Fri, 2017-10-13 06:27

With --allow_unconsumed_nodes it gives the same warnings, but no error anymore:

$ snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc --allow_unconsumed_nodes
2017-10-13 11:29:17.748503: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:29:17.748523: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:29:17.748538: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:29:17.748543: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:29:17.748557: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:29:17,793 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/Shape) not consumed by converter: Shape.
2017-10-13 11:29:17,793 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice) not consumed by converter: StridedSlice.
2017-10-13 11:29:17,794 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice_1) not consumed by converter: StridedSlice.
2017-10-13 11:29:17,794 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/mul) not consumed by converter: Mul.
2017-10-13 11:29:17,794 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/strided_slice_2) not consumed by converter: StridedSlice.
2017-10-13 11:29:17,794 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/mul_1) not consumed by converter: Mul.
2017-10-13 11:29:17,794 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (upconv/Conv2d_transpose/stack) not consumed by converter: Pack.
The next step is actually testing it on a Snapdragon device. I will let you know if I run into any issues there.