Dear SNPE developers,
Recently I tried to convert a conv2d_transpose operation, but it always reports an error, so I wrote a simple model which contains only a deconv operation:
#!/usr/bin/env python
import tensorflow as tf
import numpy as np
from tensorflow.python.framework import graph_io
import freeze_graph
import os

slim = tf.contrib.slim

def deconv_test():
    input_size = (1, 20, 20, 3)
    depth = 8
    input_data = np.random.random(input_size)
    input = tf.placeholder(tf.float32, shape=input_size, name="input")
    output = slim.conv2d_transpose(input, depth, [2, 2], stride=2)
    # for i in tf.get_default_graph().get_operations():
    #     print i.name
    saver = tf.train.Saver()
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    CHECKPOINT_PATH = './deconv_test'
    saver.save(sess, CHECKPOINT_PATH)
    # Freeze the session to a pb file.
    input_graph_name = "input_graph.pb"
    output_graph_name = "output_graph.pb"
    graph_io.write_graph(sess.graph, CHECKPOINT_PATH, input_graph_name)
    input_graph_path = os.path.join(CHECKPOINT_PATH, input_graph_name)
    input_saver_def = saver
    input_binary = False
    output_node_names = 'Conv2d_transpose/Relu'
    restore_op_name = "save/restore_all"
    filename_tensor_name = "save/Const:0"
    output_graph_path = os.path.join(CHECKPOINT_PATH, output_graph_name)
    clear_devices = False
    checkpoint_path = CHECKPOINT_PATH
    freeze_graph.freeze_graph(input_graph_path, input_saver_def, input_binary,
                              checkpoint_path, output_node_names,
                              restore_op_name, filename_tensor_name,
                              output_graph_path, clear_devices, "")
    return output

if __name__ == '__main__':
    out = deconv_test()
The operations are printed as follows:
input
Conv2d_transpose/weights/Initializer/random_uniform/shape
Conv2d_transpose/weights/Initializer/random_uniform/min
Conv2d_transpose/weights/Initializer/random_uniform/max
Conv2d_transpose/weights/Initializer/random_uniform/RandomUniform
Conv2d_transpose/weights/Initializer/random_uniform/sub
Conv2d_transpose/weights/Initializer/random_uniform/mul
Conv2d_transpose/weights/Initializer/random_uniform
Conv2d_transpose/weights
Conv2d_transpose/weights/Assign
Conv2d_transpose/weights/read
Conv2d_transpose/biases/Initializer/zeros
Conv2d_transpose/biases
Conv2d_transpose/biases/Assign
Conv2d_transpose/biases/read
Conv2d_transpose/Shape
Conv2d_transpose/strided_slice/stack
Conv2d_transpose/strided_slice/stack_1
Conv2d_transpose/strided_slice/stack_2
Conv2d_transpose/strided_slice
Conv2d_transpose/strided_slice_1/stack
Conv2d_transpose/strided_slice_1/stack_1
Conv2d_transpose/strided_slice_1/stack_2
Conv2d_transpose/strided_slice_1
Conv2d_transpose/strided_slice_2/stack
Conv2d_transpose/strided_slice_2/stack_1
Conv2d_transpose/strided_slice_2/stack_2
Conv2d_transpose/strided_slice_2
Conv2d_transpose/mul/y
Conv2d_transpose/mul
Conv2d_transpose/mul_1/y
Conv2d_transpose/mul_1
Conv2d_transpose/stack/3
Conv2d_transpose/stack
Conv2d_transpose/conv2d_transpose_1
Conv2d_transpose/BiasAdd
Conv2d_transpose/Relu
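The strided_slice/mul/stack ops near the end of this list are how slim.conv2d_transpose computes its output shape at run time. A rough pure-Python mirror of that computation (reconstructed from the op names above, not from SNPE or TensorFlow source):

```python
def dynamic_output_shape(input_shape, stride, depth):
    """Mirror of the Conv2d_transpose/{Shape,strided_slice*,mul*,stack} ops."""
    batch = input_shape[0]             # Conv2d_transpose/strided_slice
    height = input_shape[1] * stride   # strided_slice_1 + mul
    width = input_shape[2] * stride    # strided_slice_2 + mul_1
    return [batch, height, width, depth]  # stack (the Pack op in the warnings)

print(dynamic_output_shape([1, 20, 20, 3], stride=2, depth=8))  # [1, 40, 40, 8]
```

These are exactly the Mul, StridedSlice, and Pack ops that show up as unconsumed in the converter log below.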
After the pb model is saved, I try to convert it to a dlc file with snpe-tensorflow-to-dlc:
./snpe-tensorflow-to-dlc --graph ~/work/Qualcomm_converter/deconv_test/output_graph.pb -i input 20,20,3 --out_node Conv2d_transpose/Relu --dlc ~/work/Qualcomm_converter/deconv_test/output_graph.dlc
As expected, it reports the following error log:
2018-07-13 14:50:20,000 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/mul_1) not consumed by converter: Mul.
2018-07-13 14:50:20,001 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/strided_slice_2) not consumed by converter: StridedSlice.
2018-07-13 14:50:20,001 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/mul) not consumed by converter: Mul.
2018-07-13 14:50:20,001 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/strided_slice) not consumed by converter: StridedSlice.
2018-07-13 14:50:20,001 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/stack) not consumed by converter: Pack.
2018-07-13 14:50:20,001 - 365 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/strided_slice_1) not consumed by converter: StridedSlice.
2018-07-13 14:50:20,001 - 123 - ERROR - Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!
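To see exactly which ops are rejected, I collect them from the log with a small script (the regex is an assumption based on the log lines above, not on the converter's source):

```python
import re

# Matches lines like:
#   ... Operation (Conv2d_transpose/mul_1) not consumed by converter: Mul.
PATTERN = re.compile(
    r"Operation \((?P<name>[^)]+)\) not consumed by converter: (?P<kind>\w+)\.")

def unconsumed_ops(log_text):
    """Return (op_name, op_type) pairs for each 'not consumed' warning."""
    return [(m.group("name"), m.group("kind")) for m in PATTERN.finditer(log_text)]

sample = ("2018-07-13 14:50:20,000 - 365 - WARNING - "
          "WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Conv2d_transpose/mul_1) "
          "not consumed by converter: Mul.")
print(unconsumed_ops(sample))  # [('Conv2d_transpose/mul_1', 'Mul')]
```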
According to the SNPE docs, the deconv operation is supported for conversion, so why does it report an error? Did I do something wrong?
Or is there sample code that can be converted successfully?
Any comments are appreciated! Thank you guys.
Hi Damon Zhou,
Add the "--allow_unconsumed_nodes" option when you convert your TensorFlow model using snpe-tensorflow-to-dlc.
Thanks,
Jihoon
Hi guys,
I've also tried with snpe-1.17.0; it reports the same error.
Hi Jihoonk,
Thanks for replying.
I thought the "--allow_unconsumed_nodes" option is for when you do not need the nodes that report errors.
But I do need conv2d_transpose to stay in the dlc model file, so I shouldn't add the "--allow_unconsumed_nodes" option here.
Is that correct?
Dear jihoonk,
Could you help me confirm whether the deconv operation is supported by the TensorFlow conversion tool?
Thank you.
Hi DamonZhou,
The Deconvolution layer is definitely supported by SNPE. Refer to the reference page below:
https://developer.qualcomm.com/docs/snpe/network_layers.html
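That said, the unconsumed Mul/StridedSlice/Pack warnings suggest it is the dynamic output-shape computation, not the deconv itself, that the converter rejects. Here is a sketch (TF 1.x API as in your script; untested against snpe-tensorflow-to-dlc) that builds the same layer with a constant output_shape, so none of those ops end up in the graph:

```python
def static_output_shape(input_size, depth, stride):
    """Output shape of a SAME-padding transposed conv, as plain Python ints."""
    batch, height, width, _ = input_size
    return [batch, height * stride, width * stride, depth]

def build_static_deconv(input_size=(1, 20, 20, 3), depth=8, stride=2):
    import tensorflow as tf  # TF 1.x API assumed, as in the script above
    batch, height, width, channels = input_size
    inp = tf.placeholder(tf.float32, shape=input_size, name="input")
    # conv2d_transpose filters are laid out [h, w, out_channels, in_channels].
    weights = tf.get_variable("weights", shape=[2, 2, depth, channels])
    out = tf.nn.conv2d_transpose(
        inp, weights,
        # A constant list of Python ints: no Shape/StridedSlice/Mul/Pack
        # ops are added to the graph for the converter to reject.
        output_shape=static_output_shape(input_size, depth, stride),
        strides=[1, stride, stride, 1], padding="SAME")
    return tf.nn.relu(out, name="output")
```

For your (1, 20, 20, 3) input with stride 2 and depth 8, the precomputed shape is [1, 40, 40, 8].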
Thanks,
Jihoon
Hi jihoonk,
I have checked that the deconv operation is supported according to the "Supported Network Layers" page, but then why does my deconv conversion fail?
Is there a debug tool to find out why the conversion fails?
Or is there sample code showing how to use the deconv operation so that it converts successfully?
The conversion tool only reports that these operations are not consumed, with no reason given.
Thank you for replying!
Hi jihoonk,
I have tried the '--allow_unconsumed_nodes' option; it reports an error when I dump the dlc content:
There is a '!' sign before the symbol 'conv2d_transpose'. Does that mean conv2d_transpose is skipped at runtime?
Any follow-ups on this issue? What is the best practice for a deconv layer so that SNPE won't complain?
Many thanks!
I could convert a caffemodel that has deconv to a dlc file, and the dlc file works well. But I did not try converting TF models.
You can try using the latest SNPE version to re-do this. Maybe the latest version has fixed this issue.