I am attempting to generate a DLC from TensorFlow meta files produced by the following example script:
```python
#!/usr/bin/env python
import os

import tensorflow as tf


class Network(object):
    def __init__(self, input_size, learning_rate=0.001):
        rows, cols, depth = input_size

        with tf.variable_scope('input_layer'):
            self.input_layer = tf.placeholder(tf.float32,
                                              shape=[None, rows, cols, depth],
                                              name='Network_Input')

        with tf.variable_scope('conv_1'):
            # First hidden layer
            conv_1 = tf.layers.conv2d(inputs=self.input_layer,
                                      filters=16,
                                      kernel_size=[8, 8],
                                      strides=(4, 4),
                                      padding='SAME',
                                      activation=tf.nn.relu)

        with tf.variable_scope('conv_2'):
            # Second hidden layer
            conv_2 = tf.layers.conv2d(inputs=conv_1,
                                      filters=32,
                                      kernel_size=[4, 4],
                                      strides=(2, 2),
                                      padding='SAME',
                                      activation=tf.nn.relu)

        with tf.variable_scope('conv_2_flat'):
            conv_2_flat = tf.contrib.layers.flatten(conv_2)

        with tf.variable_scope('dense_3'):
            # Third hidden layer
            dense_3 = tf.layers.dense(inputs=conv_2_flat,
                                      units=256,
                                      activation=tf.nn.relu)

        with tf.variable_scope('output_layer'):
            # Output layer
            self.output_layer = tf.layers.dense(inputs=dense_3,
                                                units=2,
                                                activation=None,
                                                name='Network_Output')

        # Do I need to specify shape?
        self.target = tf.placeholder(tf.float32, name='Network_Target')
        self.loss = tf.reduce_mean(tf.square(tf.subtract(self.output_layer, self.target)))
        self.optm = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(self.loss)

        init = tf.global_variables_initializer()
        self.saver = tf.train.Saver()
        self.sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
        self.sess.run(init)

    def save(self, path):
        save_path_model = self.saver.save(self.sess, os.path.join(path, 'model.ckpt'))
        print('Model saved in file: {}'.format(save_path_model))
        return path


network = Network((64, 64, 12))
network.save('/home/ubuntu/Desktop/tensorflow/export')
```
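For reference, this is how I ended up inspecting the exported graph to see which op names actually exist. The helper names here are my own, and the loader assumes the TF 1.x API the script above uses; only the string filter is TF-independent:

```python
def candidate_out_nodes(op_names, scope):
    """Filter a graph's op names down to those at or under a given scope prefix."""
    return [n for n in op_names if n == scope or n.startswith(scope + '/')]


def list_op_names(meta_path):
    """Load a .meta file and return every operation name in its graph."""
    import tensorflow as tf  # TF 1.x, matching the script above
    tf.reset_default_graph()
    tf.train.import_meta_graph(meta_path)
    return [op.name for op in tf.get_default_graph().get_operations()]
```

Usage would be something like `candidate_out_nodes(list_op_names('./export/model.ckpt.meta'), 'output_layer/Network_Output')` to see what `--out_node` could legitimately point at.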
Running snpe-tensorflow-to-dlc on the generated meta file(s), I get the following warnings and a conversion error:
```
➜ tensorflow snpe-tensorflow-to-dlc --graph ./export/model.ckpt.meta -i "input_layer/Network_Input" 64,64,12 --out_node output_layer/Network_Output/MatMul
2018-02-15 10:45:13.231736: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/Shape) not consumed by converter: Shape.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/strided_slice) not consumed by converter: StridedSlice.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/Reshape/shape) not consumed by converter: Pack.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (output_layer/Network_Output/MatMul) not consumed by converter: MatMul.
2018-02-15 10:45:13,413 - 123 - ERROR - Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!
```
I do not understand what it means for these operations not to be consumed, especially in the flatten layer. The flatten layer (conv_2_flat) is clearly used: it is the input to the fully connected layer dense_3.
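My current understanding, for what it's worth: tf.contrib.layers.flatten computes the batch dimension at run time, which is why the graph contains Shape, StridedSlice, and Pack ops, and those dynamic-shape ops seem to be what the converter cannot map. One workaround I am considering (a sketch under my own assumptions: SAME padding, the conv strides from the script, and a helper name I made up) is to compute the flattened size statically and use a plain reshape:

```python
def conv_out_side(side, stride):
    # With SAME padding, the output spatial size is ceil(side / stride).
    return -(-side // stride)


# 64x64 input -> conv_1 (stride 4) -> 16x16 -> conv_2 (stride 2) -> 8x8, 32 filters
side = conv_out_side(conv_out_side(64, 4), 2)
flat_units = side * side * 32  # 8 * 8 * 32 = 2048

# In the model, the dynamic flatten would then become a static reshape:
#   conv_2_flat = tf.reshape(conv_2, [-1, flat_units])
```

That way the graph should contain only a Reshape with a constant shape, with no Shape/StridedSlice/Pack ops for the converter to choke on.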
I have a feeling I am using the wrong argument for "--out_node". I pass "output_layer/Network_Output/MatMul" instead of just "output_layer/Network_Output", because with the latter the program quit with an error saying that "output_layer/Network_Output" does not exist. I printed out all the layers it was aware of, and there were many that used "output_layer/Network_Output" as a prefix (MatMul included), but no layer with exactly that name. I picked one that sounded right (MatMul) and the program got past that layer, but it feels wrong.
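I suspect this is because tf.layers.dense(name='Network_Output') names a scope rather than a single op, so only the layer's internal ops (MatMul, BiasAdd, ...) show up under that prefix. One idea I am toying with, sketched below with my own names (attach_named_output, EXPECTED_OUT_NODE) and assuming the TF 1.x scoping behaviour of the script above, is to give the final tensor a single op of its own via tf.identity:

```python
EXPECTED_OUT_NODE = 'output_layer/Network_Output'  # what --out_node could then point at


def attach_named_output(dense_3):
    import tensorflow as tf  # TF 1.x, as in the script above
    with tf.variable_scope('output_layer'):
        logits = tf.layers.dense(inputs=dense_3, units=2, activation=None)
        # tf.identity adds one op with exactly this name, so the graph should
        # contain an 'output_layer/Network_Output' node the converter can resolve.
        return tf.identity(logits, name='Network_Output')
```

I have not verified that this is the intended way to name an output node for the converter, though.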
What am I doing wrong?
Thanks
Alexander
Any update on this?