A really simple custom TF model with ERROR_TF_CONV_RESOLVE_WEIGHTS

tomorrowmlb
Join Date: 5 Jul 18
Posts: 1
Posted: Thu, 2018-07-26 11:41

Hi All,

I trained a really simple two-layer CNN for binary classification.

import tensorflow as tf

# batch_shape, learning_rate and the training data x, y are defined elsewhere
# in the script; batch_shape matches the (1, 480, 480, 3) input given to the
# converter below.
img_placeholder = tf.placeholder(tf.float32, shape=batch_shape, name='input')
labels = tf.placeholder(tf.float32, shape=(None, 1))

# One conv layer plus a dense output layer for binary classification.
conv = tf.layers.conv2d(img_placeholder, 8, 3, 1, padding='SAME')
fc = tf.layers.flatten(conv)
out = tf.layers.dense(fc, 1)
pred = tf.nn.sigmoid(out, name='output')
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=out)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(20):
        sess.run(optimizer, feed_dict={img_placeholder: x, labels: y})

    # Export the graph definition to test.pb.
    graph = tf.get_default_graph()
    graph_def = graph.as_graph_def()
    tf.train.write_graph(graph_def, "./", "test.pb", False)

With the resulting test.pb, I run the following command:

snpe-tensorflow-to-dlc --graph test.pb -i input "1,480,480,3" --out_node output

The error message is

FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-07-26 10:57:28.997478: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-07-26 10:57:29,089 - 388 - WARNING - ERROR_TF_FALLBACK_TO_ONDEMAND_EVALUATION: Unable to resolve operation output shapes in single pass. Using on-demand evaluation!
2018-07-26 10:57:29,102 - 106 - ERROR - Conversion failed: ERROR_TF_CONV_RESOLVE_WEIGHTS: Cannot resolve convolution layer due to missing weights for operation: conv2d/Conv2D

I am using snpe-1.17.0 and tensorflow-1.9.0. I don't know what the issue is, because the model is a really simple CNN with no tricky parts. Could you share your ideas or example code showing how to make it work? Thanks so much!
798446835
Join Date: 16 Apr 19
Posts: 1
Posted: Fri, 2019-04-19 01:01

Hello, have you solved this problem? I also ran into the ERROR_TF_CONV_RESOLVE_WEIGHTS conversion failure. Could you leave your contact information? Thank you.

gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-05-21 05:53

This issue is most likely due to the procedure followed when converting the TensorFlow model to a frozen graph.

The following steps resolve the issue (a minimal sketch of the freezing step is given after this list):

   - Save the model after training using "tf.train.Saver().save()", which in turn generates 3 files: '.meta', '.data' and '.index'.
   - Use the generated .meta file to generate the frozen graph; this step is missing from the code in this post. tf.train.write_graph() on its own writes only the graph structure, not the trained weights, which is why the converter cannot resolve the weights for conv2d/Conv2D.
   - Using that frozen graph, the .dlc file can then be obtained.
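
For reference, here is a minimal sketch of the freezing step, assuming the checkpoint was saved as './model.ckpt' and that the output node is named 'output' as in the code above (the file names here are only placeholders):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the trained variables.
    saver = tf.train.import_meta_graph('./model.ckpt.meta')
    saver.restore(sess, './model.ckpt')
    # Fold the trained variables into constants so the converter can resolve
    # the convolution weights, then write the frozen GraphDef.
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output'])
    tf.train.write_graph(frozen_graph_def, './', 'frozen.pb', as_text=False)

snpe-tensorflow-to-dlc is then run on frozen.pb instead of test.pb.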
 
Further, as in the posted code, I first used tf.layers.flatten() to flatten the tensor. This leads to a conversion failure because the resulting layer is not supported by the SNPE converter.

When I replaced it with tf.reshape() (see the snippet below), the model converted to DLC successfully (TensorFlow 1.13.1, SNPE 1.24.0.256).
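
For example, with the 480x480x3 input and the 8 'SAME'-padded, stride-1 filters from the original code, the conv output is 480x480x8, so the flatten can be written as an explicit reshape (the shape below is specific to that model):

fc = tf.reshape(conv, [-1, 480 * 480 * 8])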
