Hello,
In my neural network I'm using a merge layer to combine the outputs of a Conv2D layer and a Conv2DTranspose layer. However, when I try to convert the model from .pb to .dlc with the snpe-tensorflow-to-dlc script, I get the following error:
Input:
python ./bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph "model_minimal.pb" -i "input_1" 64,64,3 --out_node "conv2d_15/Sigmoid"
Error Message:
/home/gogol/snpe-1.2.2/lib/python/converters/tensorflow/layers/concat.py:70: RuntimeWarning: error_code=1004; error_message=Layer parameters combination is invalid. Layer concatenate_1/concat: input conv2d_transpose_1/Relu:0 has size 17 along axis 0, should match output dim (16); error_component=Model Validation; line_no=390; thread_id=140217424582400
descriptor.axis)
The line used to create the deconvolution layer in Keras:
Conv2DTranspose(8, (3, 3), padding='same', strides=(2, 2), activation=activation, kernel_initializer='he_normal')
I assume there is a problem with the handling of the padding parameter. The previous layer has an output size of 8x8x8. Since I use padding='same', I would expect an output size of 16x16x8, not the 17 that 'valid' padding would produce. Could this error be caused by the conversion script?
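For reference, the expected spatial sizes can be checked against TensorFlow's standard shape formulas for transposed convolutions. This is a minimal sketch (the helper name deconv_out_size is just for illustration), using the kernel size 3 and stride 2 from the layer above:

```python
def deconv_out_size(in_size, kernel, stride, padding):
    """Spatial output size of a Conv2DTranspose per TensorFlow's shape rules."""
    if padding == 'same':
        return in_size * stride
    if padding == 'valid':
        return (in_size - 1) * stride + kernel
    raise ValueError("unknown padding: %s" % padding)

# 8x8 input, kernel (3, 3), strides (2, 2):
print(deconv_out_size(8, 3, 2, 'same'))   # 16 -- the size the concat layer expects
print(deconv_out_size(8, 3, 2, 'valid'))  # 17 -- the size reported in the error
```

The 17 in the error message matches the 'valid' formula exactly, which supports the suspicion that the converter ignores padding='same'.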
When the model is loaded on Android, the application quits with the same error. The .pb file itself should be correct, since I can load and run it with TensorFlow's InferenceInterface on Android.
Thanks for any help!
Hi,
We can confirm this is an issue with the converter and we will address it in a future SNPE SDK release. In case we find a workaround we will post it here.
Thanks
Just tested again with the latest SNPE release (1.6.0) and the bug is still there. The output is different now though:
Can you post the result of using the option --allow_unconsumed_nodes?
Please post the exact command line used for conversion. We have been able to convert the excerpt graph provided in this thread.
With --allow_unconsumed_nodes it gives the same warnings, but no error anymore: