TensorFlow model conversion issue: Add/Mul not supported
zf.africa
Join Date: 15 Jun 17
Posts: 51
Posted: Mon, 2017-09-25 19:58

Hi,

Recently I tried to convert a TensorFlow model to DLC, but the converter always reports that the Add/Sum/Max ops are not supported. So I used a simple TensorFlow demo to train a model file that contains only Mul and Add ops; the source code of test_train.py is listed below:

################################ Start of Code ##############################################

import tensorflow as tf
with tf.device('/cpu:0'):
    # Model parameters
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    # Model input and output
    x = tf.placeholder(tf.float32, name="input")
    with tf.name_scope("result") as scope:
        m = tf.multiply(W, x)  # tf.mul in TF releases before 1.0
        linear_model = tf.add(m, b, name="add")
    y = tf.placeholder(tf.float32, name="output")
    for i in tf.get_default_graph().get_operations():
        print(i.name)
    # loss
    loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
    # optimizer
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)

    # training data
    x_train = [1, 2, 3, 4]
    y_train = [0, -1, -2, -3]
    # training loop
    init = tf.initialize_all_variables()
    #init = tf.global_variables_initializer()
    sess = tf.Session()
    saver = tf.train.Saver()
    sess.run(init) # reset values to wrong
    for i in range(1000):
        sess.run(train, {x: x_train, y: y_train})
    saver.save(sess, 'test_model', global_step=1000)

    # evaluate training accuracy
    curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
    print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))

################################ End of Code ##############################################

After training with the command python test_train.py, it produces the files checkpoint, test_model-1000, and test_model-1000.meta. Then I convert the meta file with this command:

/home/damon/work/snpe-sdk/snpe-1.2.2/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc   --graph   ~/tensorflow_virt/code/test_model-1000.meta   --input_dim   "input"   1,1,1   --dlc   ~/tensorflow_virt/code/test_model-1000.dlc   --out_node   "result/add"

But the command reports an error like this:

Converted 2 variables to const ops.
2017-09-26 10:31:40,826 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (result) operation(s) not consumed by converter: [u'Mul', u'Add'].
2017-09-26 10:31:40,827 - 122 - ERROR - Conversion failed: Some nodes in the Tensorflow graph were not resolved to a layer!

 

Then I tried with the parameter --out_node "result/Mul", and the command reports the error below:

Converted 1 variables to const ops.
/home/damon/work/snpe-sdk/snpe-1.2.2/lib/python/converters/tensorflow/layers/eltwise.py:105: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer result/Mul: at least two inputs required, have 1; error_component=Model Validation; line_no=582; thread_id=139787782068032
  output_name)

 

Has anyone run into an issue like this? Why does the converter say Layer result/Mul has only 1 input? It obviously has 2 inputs in the source code, doesn't it? Or have I done something wrong?

Please point me in the right direction; any comments are appreciated! Thank you.

zl1994
Join Date: 12 Aug 17
Posts: 8
Posted: Mon, 2017-09-25 23:21

I have met the same problem. It seems the converter can't handle elementwise operations like mul, add, and maximum well. I really don't know what the TensorFlow graph compatibility requirements mean. I hope the SNPE team can make this clear and provide some example code.

koumis
Join Date: 21 Sep 17
Posts: 18
Posted: Wed, 2018-02-14 19:38

Bumping this; I would like to know what is going on with this error code and to see some more examples. I tried reading through the converter source, but it is dense and undocumented.

kirov
Join Date: 28 Dec 17
Posts: 5
Posted: Thu, 2018-02-15 06:17

All Add/Mul operands must come from the outputs of a previous layer or must themselves be inputs to the model. An Add/Mul operation where one or both operands is a constant or a variable is NOT supported. Also, each add and mul must be within its own scope.
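For illustration, here is a minimal sketch (my own, not official SNPE sample code) of the pattern described above: both operands of each elementwise op are model inputs or outputs of a previous op, never a tf.Variable or a constant, and each op sits in its own scope. Names and shapes are made up.

import tensorflow as tf

# Both elementwise operands are placeholders (model inputs); shapes are illustrative.
a = tf.placeholder(tf.float32, shape=[1, 8, 8, 3], name="input_a")
b = tf.placeholder(tf.float32, shape=[1, 8, 8, 3], name="input_b")

with tf.name_scope("eltwise_add"):
    summed = tf.add(a, b, name="add")             # both operands are model inputs

with tf.name_scope("eltwise_mul"):
    product = tf.multiply(summed, b, name="mul")  # operands: previous output + model input

# NOT supported, per the explanation above: a constant/variable operand.
# W = tf.Variable([0.3], dtype=tf.float32)
# bad = tf.multiply(W, a)  # one operand is a variable -> converter cannot resolve it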

koumis
Join Date: 21 Sep 17
Posts: 18
Posted: Thu, 2018-02-15 10:53

Thanks for your response, kirov. I am getting the warning with the following network script:
 
 
#!/usr/bin/env python
 
import os
import tensorflow as tf
 
class Network(object):
 
    def __init__(self, input_size, learning_rate=0.001):
 
        rows, cols, depth = input_size
 
        with tf.variable_scope('input_layer'):
            self.input_layer = tf.placeholder(tf.float32,
                shape=[None, rows, cols, depth],
                name='Network_Input')
 
        with tf.variable_scope('conv_1'):
            # First hidden layer
            conv_1 = tf.layers.conv2d(
                inputs=self.input_layer,
                filters=16,
                kernel_size=[8, 8],
                strides=(4, 4),
                padding='SAME',
                activation=tf.nn.relu
            )
 
        with tf.variable_scope('conv_2'):
            # Second hidden layer
            conv_2 = tf.layers.conv2d(
                inputs=conv_1,
                filters=32,
                kernel_size=[4, 4],
                strides=(2, 2),
                padding='SAME',
                activation=tf.nn.relu
            )
 
        with tf.variable_scope('conv_2_flat'):
            conv_2_flat = tf.contrib.layers.flatten(conv_2)
 
        with tf.variable_scope('dense_3'):
            # Third hidden layer
            dense_3 = tf.layers.dense(inputs=conv_2_flat, units=256, activation=tf.nn.relu)
 
        with tf.variable_scope('output_layer'):
            # Output layer
            self.output_layer = tf.layers.dense(inputs=dense_3, units=2, activation=None, name='Network_Output')
 
        # Do I need to specify shape?
        self.target = tf.placeholder(tf.float32, name='Network_Target')
 
        self.loss = tf.reduce_mean(tf.square(tf.subtract(self.output_layer, self.target)))
 
        self.optm = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(self.loss)
 
        init = tf.global_variables_initializer()
        self.saver = tf.train.Saver()
        self.sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
        self.sess.run(init)
 
    def save(self, path):
        save_path_model = self.saver.save(self.sess, os.path.join(path, 'model.ckpt'))
        print('Model saved in file: {}'.format(save_path_model))
        return path
 
 
network = Network((64, 64, 12))
network.save('/home/ubuntu/Desktop/tensorflow/export')
 
 
Running snpe-tensorflow-to-dlc, I get the following warnings:
 
➜  tensorflow snpe-tensorflow-to-dlc --graph ./export/model.ckpt.meta -i "input_layer/Network_Input" 64,64,12 --out_node output_layer/Network_Output/MatMul
2018-02-15 10:45:13.231736: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/Shape) not consumed by converter: Shape.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/strided_slice) not consumed by converter: StridedSlice.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv_2_flat_layer/Flatten/flatten/Reshape/shape) not consumed by converter: Pack.
2018-02-15 10:45:13,413 - 309 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (output_layer/Network_Output/MatMul) not consumed by converter: MatMul.
2018-02-15 10:45:13,413 - 123 - ERROR - Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!
 
 
All the layers in my network appear to be "consumed", so I am not sure what I am doing wrong. How can I fix this?
 
Thanks.
koumis
Join Date: 21 Sep 17
Posts: 18
Posted: Sun, 2018-02-18 21:00

There were two issues that prevented the conversion from going as planned:

1) The converter does not support the tf.layers.flatten operation. I used an explicit tf.reshape instead (see the sketch after this list), and got no errors.

2) The converter does not support the use_bias=False argument of tf.layers.dense (this wasn't in my example code above, but it was causing problems). Removing it fixed the "not consumed" warning.
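For reference, a rough sketch of what the two fixes look like. This is my own reconstruction, assuming an explicit tf.reshape over the static shape is what replaced the flatten layer; the stub input and its shape are made up.

import tensorflow as tf

# Stand-in for the real conv_2 output; the shape here is illustrative only.
conv_2 = tf.placeholder(tf.float32, shape=[1, 8, 8, 32], name='conv_2_stub')

with tf.variable_scope('conv_2_flat'):
    # Flatten with an explicit reshape computed from the static shape, so the
    # graph contains a plain Reshape instead of the Shape/StridedSlice/Pack
    # ops that the flatten helper emits.
    shape = conv_2.get_shape().as_list()        # [1, 8, 8, 32]
    flat_size = shape[1] * shape[2] * shape[3]  # 2048
    conv_2_flat = tf.reshape(conv_2, [-1, flat_size])

with tf.variable_scope('dense_3'):
    # Keep the default use_bias=True; passing use_bias=False was the other
    # cause of the "not consumed" warning.
    dense_3 = tf.layers.dense(inputs=conv_2_flat, units=256, activation=tf.nn.relu)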
