Hi,
Recently I tried to convert a TensorFlow model to DLC, but the converter always reports that add/sum/max ops are not supported. So I used a simple TensorFlow demo to train a model file that has only mul and add ops; the source code of test_train.py is listed below:
################################ Start of Code ##############################################
import tensorflow as tf
with tf.device('/cpu:0'):
    # Model parameters
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    # Model input and output
    x = tf.placeholder(tf.float32, name="input")
    with tf.name_scope("result") as scope:
        m = tf.multiply(W, x)  # tf.mul was renamed tf.multiply in TF 1.0
        linear_model = tf.add(m, b, name="add")
    y = tf.placeholder(tf.float32, name="output")
    for i in tf.get_default_graph().get_operations():
        print(i.name)
    # loss
    loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
    # optimizer
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    # training data
    x_train = [1, 2, 3, 4]
    y_train = [0, -1, -2, -3]
    # training loop
    init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
    sess = tf.Session()
    saver = tf.train.Saver()
    sess.run(init)  # reset values to (wrong) initial defaults
    for i in range(1000):
        sess.run(train, {x: x_train, y: y_train})
    saver.save(sess, 'test_model', global_step=1000)
    # evaluate training accuracy
    curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
    print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
################################ End of Code ##############################################
After training with the command `python test_train.py`, it produces the files checkpoint, test_model-1000, and test_model-1000.meta. I then convert the meta file with this command:
/home/damon/work/snpe-sdk/snpe-1.2.2/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph ~/tensorflow_virt/code/test_model-1000.meta --input_dim "input" 1,1,1 --dlc ~/tensorflow_virt/code/test_model-1000.dlc --out_node "result/add"
But the command reports an error like this:
Converted 2 variables to const ops.
2017-09-26 10:31:40,826 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (result) operation(s) not consumed by converter: [u'Mul', u'Add'].
2017-09-26 10:31:40,827 - 122 - ERROR - Conversion failed: Some nodes in the Tensorflow graph were not resolved to a layer!
Then I tried with the parameter --out_node "result/Mul", and the command reported the error below:
Converted 1 variables to const ops.
/home/damon/work/snpe-sdk/snpe-1.2.2/lib/python/converters/tensorflow/layers/eltwise.py:105: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer result/Mul: at least two inputs required, have 1; error_component=Model Validation; line_no=582; thread_id=139787782068032
output_name)
Has anyone run into an issue like this? Why does the converter say Layer result/Mul has only 1 input, when it obviously has 2 inputs in the source code? Or have I done something wrong?
Please point me in the right direction; any comment is appreciated. Thank you.
I ran into the same problem. It seems the converter can't handle elementwise operations like mul, add, and maximum well. I really don't know what "TensorFlow graph compatibility" means here. I hope the SNPE team can clarify this and provide some example code.
Bumping this; I would like to know what is going on with this error code and see some more examples. I tried reading through the converter source, but it is dense and undocumented.
All Add/Mul operands must come from outputs of a previous layer, or must themselves be inputs to the model. An Add/Mul operation where one or both operands is a constant or a variable is NOT supported. Also, each add and mul must be within its own scope.
There were two issues that prevented the conversion from going as planned:
1) The converter does not support the tf.layers.flatten operation. I used tf.reshape instead, with no errors.
2) The converter does not support the "use_bias=False" argument of tf.layers.dense (this wasn't in my example code above, but it was causing problems). Removing it fixed the "not consumed" warning.