Forums - 1D conv ERROR_TF_CONV_RESOLVE_WEIGHTS

1D conv ERROR_TF_CONV_RESOLVE_WEIGHTS
vellamike
Join Date: 23 Oct 17
Posts: 11
Posted: Wed, 2017-10-25 02:24

I have built a TF network in Keras and froze it as a pb file but when attempting to convert to DLC I see the following error: 


ERROR - Conversion failed: ERROR_TF_CONV_RESOLVE_WEIGHTS: Cannot resolve convolution layer due to missing weights for operation: inputNode_1/convolution/Conv2D.

 

Has anyone encountered an error like this before?

yuan.wenhua
Join Date: 14 Sep 17
Posts: 8
Posted: Thu, 2017-10-26 00:28

I also built a TF network in Keras and froze it as a .pb file. When attempting to convert to DLC, I see a similar error:

"ERROR - Conversion failed: Cannot resolve BatchNorm layer due to missing variance value."

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Fri, 2017-10-27 11:04

I might be able to help. Could you please run the script below against your .pb file and share the nodes corresponding to the layer in the error message (e.g. all the nodes in the "inputNode_1" layer)? It would also be helpful to include the layer just before it.

import tensorflow as tf

def display_nodes(nodes):
    # print every node's index, name, and op, followed by its inputs
    for i, node in enumerate(nodes):
        print('%d %s %s' % (i, node.name, node.op))
        for j, inp in enumerate(node.input):
            print(u'└─── Input%d ─ %s' % (j, inp))

# read the frozen graph and display its nodes
graph = tf.GraphDef()
with tf.gfile.FastGFile('your_computation_graph.pb', 'rb') as f:
    graph.ParseFromString(f.read())

display_nodes(graph.node)

 

 

vellamike
Join Date: 23 Oct 17
Posts: 11
Posted: Mon, 2017-10-30 02:14

Thanks shiangyong,

 

I rebuilt the model in pure TensorFlow to eliminate Keras as a possible source of the problem, but am still seeing the following:

(.venv) vagrant@vagrant-ubuntu-trusty-64:~$ snpe-tensorflow-to-dlc --graph /vagrant/vsn.pb  --input_dim Reshape 1,8000,1 --dlc vsn.dlc --out_node output

2017-10-30 09:09:26.556746: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.

2017-10-30 09:09:26.557024: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.

2017-10-30 09:09:26.557187: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.

 

2017-10-30 09:09:26,638 - 123 - ERROR - Conversion failed: ERROR_TF_CONV_RESOLVE_WEIGHTS: Cannot resolve convolution layer due to missing weights for operation: conv1d/convolution/Conv2D

 

I therefore ran the code you suggested against my model and this is the full output:

0 enqueue_input/random_shuffle_queue RandomShuffleQueueV2
1 random_shuffle_queue_DequeueUpTo/n Const
2 random_shuffle_queue_DequeueUpTo QueueDequeueUpToV2
└─── Input0 ─ enqueue_input/random_shuffle_queue
└─── Input1 ─ random_shuffle_queue_DequeueUpTo/n
3 Reshape/shape Const
4 Reshape Reshape
└─── Input0 ─ random_shuffle_queue_DequeueUpTo:1
└─── Input1 ─ Reshape/shape
5 conv1d/kernel Const
6 conv1d/kernel/read Identity
└─── Input0 ─ conv1d/kernel
7 conv1d/bias Const
8 conv1d/bias/read Identity
└─── Input0 ─ conv1d/bias
9 conv1d/convolution/ExpandDims/dim Const
10 conv1d/convolution/ExpandDims ExpandDims
└─── Input0 ─ Reshape
└─── Input1 ─ conv1d/convolution/ExpandDims/dim
11 conv1d/convolution/ExpandDims_1/dim Const
12 conv1d/convolution/ExpandDims_1 ExpandDims
└─── Input0 ─ conv1d/kernel/read
└─── Input1 ─ conv1d/convolution/ExpandDims_1/dim
13 conv1d/convolution/Conv2D Conv2D
└─── Input0 ─ conv1d/convolution/ExpandDims
└─── Input1 ─ conv1d/convolution/ExpandDims_1
14 conv1d/convolution/Squeeze Squeeze
└─── Input0 ─ conv1d/convolution/Conv2D
15 conv1d/BiasAdd BiasAdd
└─── Input0 ─ conv1d/convolution/Squeeze
└─── Input1 ─ conv1d/bias/read
16 conv1d/Relu Relu
└─── Input0 ─ conv1d/BiasAdd
17 max_pooling1d/ExpandDims/dim Const
18 max_pooling1d/ExpandDims ExpandDims
└─── Input0 ─ conv1d/Relu
└─── Input1 ─ max_pooling1d/ExpandDims/dim
19 max_pooling1d/MaxPool MaxPool
└─── Input0 ─ max_pooling1d/ExpandDims
20 max_pooling1d/Squeeze Squeeze
└─── Input0 ─ max_pooling1d/MaxPool
21 conv1d_1/kernel Const
22 conv1d_1/kernel/read Identity
└─── Input0 ─ conv1d_1/kernel
23 conv1d_1/bias Const
24 conv1d_1/bias/read Identity
└─── Input0 ─ conv1d_1/bias
25 conv1d_2/convolution/ExpandDims/dim Const
26 conv1d_2/convolution/ExpandDims ExpandDims
└─── Input0 ─ max_pooling1d/Squeeze
└─── Input1 ─ conv1d_2/convolution/ExpandDims/dim
27 conv1d_2/convolution/ExpandDims_1/dim Const
28 conv1d_2/convolution/ExpandDims_1 ExpandDims
└─── Input0 ─ conv1d_1/kernel/read
└─── Input1 ─ conv1d_2/convolution/ExpandDims_1/dim
29 conv1d_2/convolution/Conv2D Conv2D
└─── Input0 ─ conv1d_2/convolution/ExpandDims
└─── Input1 ─ conv1d_2/convolution/ExpandDims_1
30 conv1d_2/convolution/Squeeze Squeeze
└─── Input0 ─ conv1d_2/convolution/Conv2D
31 conv1d_2/BiasAdd BiasAdd
└─── Input0 ─ conv1d_2/convolution/Squeeze
└─── Input1 ─ conv1d_1/bias/read
32 conv1d_2/Relu Relu
└─── Input0 ─ conv1d_2/BiasAdd
33 max_pooling1d_2/ExpandDims/dim Const
34 max_pooling1d_2/ExpandDims ExpandDims
└─── Input0 ─ conv1d_2/Relu
└─── Input1 ─ max_pooling1d_2/ExpandDims/dim
35 max_pooling1d_2/MaxPool MaxPool
└─── Input0 ─ max_pooling1d_2/ExpandDims
36 max_pooling1d_2/Squeeze Squeeze
└─── Input0 ─ max_pooling1d_2/MaxPool
37 conv1d_2/kernel Const
38 conv1d_2/kernel/read Identity
└─── Input0 ─ conv1d_2/kernel
39 conv1d_2/bias Const
40 conv1d_2/bias/read Identity
└─── Input0 ─ conv1d_2/bias
41 conv1d_3/convolution/ExpandDims/dim Const
42 conv1d_3/convolution/ExpandDims ExpandDims
└─── Input0 ─ max_pooling1d_2/Squeeze
└─── Input1 ─ conv1d_3/convolution/ExpandDims/dim
43 conv1d_3/convolution/ExpandDims_1/dim Const
44 conv1d_3/convolution/ExpandDims_1 ExpandDims
└─── Input0 ─ conv1d_2/kernel/read
└─── Input1 ─ conv1d_3/convolution/ExpandDims_1/dim
45 conv1d_3/convolution/Conv2D Conv2D
└─── Input0 ─ conv1d_3/convolution/ExpandDims
└─── Input1 ─ conv1d_3/convolution/ExpandDims_1
46 conv1d_3/convolution/Squeeze Squeeze
└─── Input0 ─ conv1d_3/convolution/Conv2D
47 conv1d_3/BiasAdd BiasAdd
└─── Input0 ─ conv1d_3/convolution/Squeeze
└─── Input1 ─ conv1d_2/bias/read
48 conv1d_3/Relu Relu
└─── Input0 ─ conv1d_3/BiasAdd
49 conv1d_3/kernel Const
50 conv1d_3/kernel/read Identity
└─── Input0 ─ conv1d_3/kernel
51 conv1d_3/bias Const
52 conv1d_3/bias/read Identity
└─── Input0 ─ conv1d_3/bias
53 conv1d_4/convolution/ExpandDims/dim Const
54 conv1d_4/convolution/ExpandDims ExpandDims
└─── Input0 ─ conv1d_3/Relu
└─── Input1 ─ conv1d_4/convolution/ExpandDims/dim
55 conv1d_4/convolution/ExpandDims_1/dim Const
56 conv1d_4/convolution/ExpandDims_1 ExpandDims
└─── Input0 ─ conv1d_3/kernel/read
└─── Input1 ─ conv1d_4/convolution/ExpandDims_1/dim
57 conv1d_4/convolution/Conv2D Conv2D
└─── Input0 ─ conv1d_4/convolution/ExpandDims
└─── Input1 ─ conv1d_4/convolution/ExpandDims_1
58 conv1d_4/convolution/Squeeze Squeeze
└─── Input0 ─ conv1d_4/convolution/Conv2D
59 conv1d_4/BiasAdd BiasAdd
└─── Input0 ─ conv1d_4/convolution/Squeeze
└─── Input1 ─ conv1d_3/bias/read
60 conv1d_4/Relu Relu
└─── Input0 ─ conv1d_4/BiasAdd
61 dense/kernel Const
62 dense/kernel/read Identity
└─── Input0 ─ dense/kernel
63 dense/bias Const
64 dense/bias/read Identity
└─── Input0 ─ dense/bias
65 dense/Tensordot/Shape Shape
└─── Input0 ─ conv1d_4/Relu
66 dense/Tensordot/Rank Const
67 dense/Tensordot/axes Const
68 dense/Tensordot/GreaterEqual/y Const
69 dense/Tensordot/GreaterEqual GreaterEqual
└─── Input0 ─ dense/Tensordot/axes
└─── Input1 ─ dense/Tensordot/GreaterEqual/y
70 dense/Tensordot/Cast Cast
└─── Input0 ─ dense/Tensordot/GreaterEqual
71 dense/Tensordot/mul Mul
└─── Input0 ─ dense/Tensordot/Cast
└─── Input1 ─ dense/Tensordot/axes
72 dense/Tensordot/Less/y Const
73 dense/Tensordot/Less Less
└─── Input0 ─ dense/Tensordot/axes
└─── Input1 ─ dense/Tensordot/Less/y
74 dense/Tensordot/Cast_1 Cast
└─── Input0 ─ dense/Tensordot/Less
75 dense/Tensordot/add Add
└─── Input0 ─ dense/Tensordot/axes
└─── Input1 ─ dense/Tensordot/Rank
76 dense/Tensordot/mul_1 Mul
└─── Input0 ─ dense/Tensordot/Cast_1
└─── Input1 ─ dense/Tensordot/add
77 dense/Tensordot/add_1 Add
└─── Input0 ─ dense/Tensordot/mul
└─── Input1 ─ dense/Tensordot/mul_1
78 dense/Tensordot/range/start Const
79 dense/Tensordot/range/delta Const
80 dense/Tensordot/range Range
└─── Input0 ─ dense/Tensordot/range/start
└─── Input1 ─ dense/Tensordot/Rank
└─── Input2 ─ dense/Tensordot/range/delta
81 dense/Tensordot/ListDiff ListDiff
└─── Input0 ─ dense/Tensordot/range
└─── Input1 ─ dense/Tensordot/add_1
82 dense/Tensordot/Gather Gather
└─── Input0 ─ dense/Tensordot/Shape
└─── Input1 ─ dense/Tensordot/ListDiff
83 dense/Tensordot/Gather_1 Gather
└─── Input0 ─ dense/Tensordot/Shape
└─── Input1 ─ dense/Tensordot/add_1
84 dense/Tensordot/Const Const
85 dense/Tensordot/Prod Prod
└─── Input0 ─ dense/Tensordot/Gather
└─── Input1 ─ dense/Tensordot/Const
86 dense/Tensordot/Const_1 Const
87 dense/Tensordot/Prod_1 Prod
└─── Input0 ─ dense/Tensordot/Gather_1
└─── Input1 ─ dense/Tensordot/Const_1
88 dense/Tensordot/concat_1/axis Const
89 dense/Tensordot/concat_1 ConcatV2
└─── Input0 ─ dense/Tensordot/ListDiff
└─── Input1 ─ dense/Tensordot/add_1
└─── Input2 ─ dense/Tensordot/concat_1/axis
90 dense/Tensordot/stack Pack
└─── Input0 ─ dense/Tensordot/Prod
└─── Input1 ─ dense/Tensordot/Prod_1
91 dense/Tensordot/transpose Transpose
└─── Input0 ─ conv1d_4/Relu
└─── Input1 ─ dense/Tensordot/concat_1
92 dense/Tensordot/Reshape Reshape
└─── Input0 ─ dense/Tensordot/transpose
└─── Input1 ─ dense/Tensordot/stack
93 dense/Tensordot/transpose_1/perm Const
94 dense/Tensordot/transpose_1 Transpose
└─── Input0 ─ dense/kernel/read
└─── Input1 ─ dense/Tensordot/transpose_1/perm
95 dense/Tensordot/Reshape_1/shape Const
96 dense/Tensordot/Reshape_1 Reshape
└─── Input0 ─ dense/Tensordot/transpose_1
└─── Input1 ─ dense/Tensordot/Reshape_1/shape
97 dense/Tensordot/MatMul MatMul
└─── Input0 ─ dense/Tensordot/Reshape
└─── Input1 ─ dense/Tensordot/Reshape_1
98 dense/Tensordot/Const_2 Const
99 dense/Tensordot/concat_2/axis Const
100 dense/Tensordot/concat_2 ConcatV2
└─── Input0 ─ dense/Tensordot/Gather
└─── Input1 ─ dense/Tensordot/Const_2
└─── Input2 ─ dense/Tensordot/concat_2/axis
101 dense/Tensordot Reshape
└─── Input0 ─ dense/Tensordot/MatMul
└─── Input1 ─ dense/Tensordot/concat_2
102 dense/BiasAdd BiasAdd
└─── Input0 ─ dense/Tensordot
└─── Input1 ─ dense/bias/read
103 dense/Relu Relu
└─── Input0 ─ dense/BiasAdd
104 dropout/dropout/keep_prob Const
105 dropout/dropout/Shape Shape
└─── Input0 ─ dense/Relu
106 dropout/dropout/random_uniform/min Const
107 dropout/dropout/random_uniform/max Const
108 dropout/dropout/random_uniform/RandomUniform RandomUniform
└─── Input0 ─ dropout/dropout/Shape
109 dropout/dropout/random_uniform/sub Sub
└─── Input0 ─ dropout/dropout/random_uniform/max
└─── Input1 ─ dropout/dropout/random_uniform/min
110 dropout/dropout/random_uniform/mul Mul
└─── Input0 ─ dropout/dropout/random_uniform/RandomUniform
└─── Input1 ─ dropout/dropout/random_uniform/sub
111 dropout/dropout/random_uniform Add
└─── Input0 ─ dropout/dropout/random_uniform/mul
└─── Input1 ─ dropout/dropout/random_uniform/min
112 dropout/dropout/add Add
└─── Input0 ─ dropout/dropout/keep_prob
└─── Input1 ─ dropout/dropout/random_uniform
113 dropout/dropout/Floor Floor
└─── Input0 ─ dropout/dropout/add
114 dropout/dropout/div RealDiv
└─── Input0 ─ dense/Relu
└─── Input1 ─ dropout/dropout/keep_prob
115 dropout/dropout/mul Mul
└─── Input0 ─ dropout/dropout/div
└─── Input1 ─ dropout/dropout/Floor
116 logit/kernel Const
117 logit/kernel/read Identity
└─── Input0 ─ logit/kernel
118 logit/bias Const
119 logit/bias/read Identity
└─── Input0 ─ logit/bias
120 logit/Tensordot/Shape Shape
└─── Input0 ─ dropout/dropout/mul
121 logit/Tensordot/Rank Const
122 logit/Tensordot/axes Const
123 logit/Tensordot/GreaterEqual/y Const
124 logit/Tensordot/GreaterEqual GreaterEqual
└─── Input0 ─ logit/Tensordot/axes
└─── Input1 ─ logit/Tensordot/GreaterEqual/y
125 logit/Tensordot/Cast Cast
└─── Input0 ─ logit/Tensordot/GreaterEqual
126 logit/Tensordot/mul Mul
└─── Input0 ─ logit/Tensordot/Cast
└─── Input1 ─ logit/Tensordot/axes
127 logit/Tensordot/Less/y Const
128 logit/Tensordot/Less Less
└─── Input0 ─ logit/Tensordot/axes
└─── Input1 ─ logit/Tensordot/Less/y
129 logit/Tensordot/Cast_1 Cast
└─── Input0 ─ logit/Tensordot/Less
130 logit/Tensordot/add Add
└─── Input0 ─ logit/Tensordot/axes
└─── Input1 ─ logit/Tensordot/Rank
131 logit/Tensordot/mul_1 Mul
└─── Input0 ─ logit/Tensordot/Cast_1
└─── Input1 ─ logit/Tensordot/add
132 logit/Tensordot/add_1 Add
└─── Input0 ─ logit/Tensordot/mul
└─── Input1 ─ logit/Tensordot/mul_1
133 logit/Tensordot/range/start Const
134 logit/Tensordot/range/delta Const
135 logit/Tensordot/range Range
└─── Input0 ─ logit/Tensordot/range/start
└─── Input1 ─ logit/Tensordot/Rank
└─── Input2 ─ logit/Tensordot/range/delta
136 logit/Tensordot/ListDiff ListDiff
└─── Input0 ─ logit/Tensordot/range
└─── Input1 ─ logit/Tensordot/add_1
137 logit/Tensordot/Gather Gather
└─── Input0 ─ logit/Tensordot/Shape
└─── Input1 ─ logit/Tensordot/ListDiff
138 logit/Tensordot/Gather_1 Gather
└─── Input0 ─ logit/Tensordot/Shape
└─── Input1 ─ logit/Tensordot/add_1
139 logit/Tensordot/Const Const
140 logit/Tensordot/Prod Prod
└─── Input0 ─ logit/Tensordot/Gather
└─── Input1 ─ logit/Tensordot/Const
141 logit/Tensordot/Const_1 Const
142 logit/Tensordot/Prod_1 Prod
└─── Input0 ─ logit/Tensordot/Gather_1
└─── Input1 ─ logit/Tensordot/Const_1
143 logit/Tensordot/concat_1/axis Const
144 logit/Tensordot/concat_1 ConcatV2
└─── Input0 ─ logit/Tensordot/ListDiff
└─── Input1 ─ logit/Tensordot/add_1
└─── Input2 ─ logit/Tensordot/concat_1/axis
145 logit/Tensordot/stack Pack
└─── Input0 ─ logit/Tensordot/Prod
└─── Input1 ─ logit/Tensordot/Prod_1
146 logit/Tensordot/transpose Transpose
└─── Input0 ─ dropout/dropout/mul
└─── Input1 ─ logit/Tensordot/concat_1
147 logit/Tensordot/Reshape Reshape
└─── Input0 ─ logit/Tensordot/transpose
└─── Input1 ─ logit/Tensordot/stack
148 logit/Tensordot/transpose_1/perm Const
149 logit/Tensordot/transpose_1 Transpose
└─── Input0 ─ logit/kernel/read
└─── Input1 ─ logit/Tensordot/transpose_1/perm
150 logit/Tensordot/Reshape_1/shape Const
151 logit/Tensordot/Reshape_1 Reshape
└─── Input0 ─ logit/Tensordot/transpose_1
└─── Input1 ─ logit/Tensordot/Reshape_1/shape
152 logit/Tensordot/MatMul MatMul
└─── Input0 ─ logit/Tensordot/Reshape
└─── Input1 ─ logit/Tensordot/Reshape_1
153 logit/Tensordot/Const_2 Const
154 logit/Tensordot/concat_2/axis Const
155 logit/Tensordot/concat_2 ConcatV2
└─── Input0 ─ logit/Tensordot/Gather
└─── Input1 ─ logit/Tensordot/Const_2
└─── Input2 ─ logit/Tensordot/concat_2/axis
156 logit/Tensordot Reshape
└─── Input0 ─ logit/Tensordot/MatMul
└─── Input1 ─ logit/Tensordot/concat_2
157 logit/BiasAdd BiasAdd
└─── Input0 ─ logit/Tensordot
└─── Input1 ─ logit/bias/read
158 output Identity
└─── Input0 ─ logit/BiasAdd

I am not certain that I am using the correct input node, but it's not clear that this would cause the problem.

vellamike
Join Date: 23 Oct 17
Posts: 11
Posted: Mon, 2017-10-30 05:12

P.S. I am using TensorFlow Conv1D layers as follows:

 

    conv1 = tf.layers.conv1d(
        inputs=input_layer,
        filters=32,
        kernel_size=[11],
        padding="same",
        activation=tf.nn.relu)
 
 
Could the problem be that SNPE does not support 1D convolutions?
shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Mon, 2017-10-30 10:52

Hi vellamike,

I think you're right. I didn't see any support for Conv1D in the SNPE docs.

I also noticed other potential issues in your model:

- There are dropout nodes that should have been removed when you froze the graph for inference

- It does seem strange that your input node is a "Reshape" node; the input node usually has the "Placeholder" op
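For reference, the dropout rewiring is mechanical: any consumer of dropout/dropout/mul should read from dense/Relu instead, and the dropout/* nodes can be dropped. Here is a toy sketch of that rewrite on a dict-based node list (node names taken from your dump; real frozen graphs are protobuf GraphDefs, and tools like optimize_for_inference do this properly — this only illustrates the logic):

```python
# Toy graph: each node is {'name': ..., 'input': [...]}, mimicking the dump above
nodes = [
    {'name': 'dense/Relu', 'input': ['dense/BiasAdd']},
    {'name': 'dropout/dropout/div', 'input': ['dense/Relu', 'dropout/dropout/keep_prob']},
    {'name': 'dropout/dropout/mul', 'input': ['dropout/dropout/div', 'dropout/dropout/Floor']},
    {'name': 'logit/Tensordot/Shape', 'input': ['dropout/dropout/mul']},
]

def strip_dropout(nodes, dropped_prefix='dropout/', passthrough='dense/Relu'):
    # drop every node under the dropout scope...
    kept = [n for n in nodes if not n['name'].startswith(dropped_prefix)]
    # ...and rewire any consumer of a dropped node to read the passthrough instead
    for n in kept:
        n['input'] = [passthrough if inp.startswith(dropped_prefix) else inp
                      for inp in n['input']]
    return kept

pruned = strip_dropout(nodes)
# logit/Tensordot/Shape now reads directly from dense/Relu
```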

Shiang Yong

vellamike
Join Date: 23 Oct 17
Posts: 11
Posted: Mon, 2017-10-30 10:58

I think the dropout nodes only get removed when you run the optimize_for_inference tool. I tried doing this, but the original issue remained.

I think that a conv1d layer in TensorFlow is just a convenient wrapper around conv2d, so I'm not completely convinced that this is the issue, but I will try expressing the model with conv2d and report back on how well it works.
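The wrapper relationship is easy to check outside TensorFlow. Here is a minimal NumPy sketch (naive loops, valid padding, stride 1; illustrative only) showing that a 1-D convolution is exactly a 2-D convolution over an input with a singleton height dimension, which is what the ExpandDims/Squeeze pairs in my graph dump are doing:

```python
import numpy as np

def conv1d(x, w):
    # x: (length, in_ch), w: (k, in_ch, out_ch); VALID padding, stride 1
    L, _ = x.shape
    k, _, out_ch = w.shape
    out = np.zeros((L - k + 1, out_ch))
    for t in range(L - k + 1):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return out

def conv2d(x, w):
    # x: (H, W, in_ch), w: (kh, kw, in_ch, out_ch); VALID padding, stride 1
    H, W, _ = x.shape
    kh, kw, _, out_ch = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, out_ch))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = np.tensordot(x[i:i + kh, j:j + kw], w,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 3))    # length 20, 3 input channels
w = rng.standard_normal((5, 3, 4))  # kernel size 5, 3 in, 4 out channels

y1 = conv1d(x, w)
# Same computation as a 2-D conv over a height-1 "image"
# (expand height dim on input and kernel, then squeeze it back out):
y2 = conv2d(x[np.newaxis, :, :], w[np.newaxis, :, :, :])[0]
assert np.allclose(y1, y2)
```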

shiangyong
Join Date: 21 Sep 17
Posts: 15
Posted: Tue, 2017-10-31 16:33

I think you are right about the Conv1D acting as a wrapper. Those ExpandDims operations just before the Conv2D are likely part of that.

vellamike
Join Date: 23 Oct 17
Posts: 11
Posted: Thu, 2017-11-02 06:49

I converted my Conv1D layers to 2D layers and the original error went away, replaced by a new error, which I will start a new thread about.

