When converting a custom TensorFlow graph I am seeing errors relating to the conversion of a dense layer to DLC:
2017-11-02 13:43:35,260 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/transpose) not consumed by converter: Transpose.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/transpose_1) not consumed by converter: Transpose.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/MatMul) not consumed by converter: MatMul.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/BiasAdd) not consumed by converter: BiasAdd.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/transpose) not consumed by converter: Transpose.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/transpose_1) not consumed by converter: Transpose.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/MatMul) not consumed by converter: MatMul.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/BiasAdd) not consumed by converter: BiasAdd.
2017-11-02 13:43:35,263 - 123 - ERROR - Conversion failed: Some operations in the Tensorflow graph were not resolved to a layer!
I am a bit confused by this because the layer is simply a dense layer following a 2D convolutional layer, which I am sure is supported by SNPE. What is the cause of the error?
The topology of the graph is as follows:
0  input_layer  Placeholder
1  conv2d/kernel  Const
2  conv2d/kernel/read  Identity
   └─── Input0 ─ conv2d/kernel
3  conv2d/bias  Const
4  conv2d/bias/read  Identity
   └─── Input0 ─ conv2d/bias
5  conv2d/convolution  Conv2D
   └─── Input0 ─ input_layer
   └─── Input1 ─ conv2d/kernel/read
6  conv2d/BiasAdd  BiasAdd
   └─── Input0 ─ conv2d/convolution
   └─── Input1 ─ conv2d/bias/read
7  conv2d/Relu  Relu
   └─── Input0 ─ conv2d/BiasAdd
8  max_pooling2d/MaxPool  MaxPool
   └─── Input0 ─ conv2d/Relu
9  conv2d_1/kernel  Const
10 conv2d_1/kernel/read  Identity
   └─── Input0 ─ conv2d_1/kernel
11 conv2d_1/bias  Const
12 conv2d_1/bias/read  Identity
   └─── Input0 ─ conv2d_1/bias
13 conv2d_2/convolution  Conv2D
   └─── Input0 ─ max_pooling2d/MaxPool
   └─── Input1 ─ conv2d_1/kernel/read
14 conv2d_2/BiasAdd  BiasAdd
   └─── Input0 ─ conv2d_2/convolution
   └─── Input1 ─ conv2d_1/bias/read
15 conv2d_2/Relu  Relu
   └─── Input0 ─ conv2d_2/BiasAdd
16 max_pooling2d_2/MaxPool  MaxPool
   └─── Input0 ─ conv2d_2/Relu
17 conv2d_2/kernel  Const
18 conv2d_2/kernel/read  Identity
   └─── Input0 ─ conv2d_2/kernel
19 conv2d_2/bias  Const
20 conv2d_2/bias/read  Identity
   └─── Input0 ─ conv2d_2/bias
21 conv2d_3/convolution  Conv2D
   └─── Input0 ─ max_pooling2d_2/MaxPool
   └─── Input1 ─ conv2d_2/kernel/read
22 conv2d_3/BiasAdd  BiasAdd
   └─── Input0 ─ conv2d_3/convolution
   └─── Input1 ─ conv2d_2/bias/read
23 conv2d_3/Relu  Relu
   └─── Input0 ─ conv2d_3/BiasAdd
24 conv2d_3/kernel  Const
25 conv2d_3/kernel/read  Identity
   └─── Input0 ─ conv2d_3/kernel
26 conv2d_3/bias  Const
27 conv2d_3/bias/read  Identity
   └─── Input0 ─ conv2d_3/bias
28 conv2d_4/convolution  Conv2D
   └─── Input0 ─ conv2d_3/Relu
   └─── Input1 ─ conv2d_3/kernel/read
29 conv2d_4/BiasAdd  BiasAdd
   └─── Input0 ─ conv2d_4/convolution
   └─── Input1 ─ conv2d_3/bias/read
30 conv2d_4/Relu  Relu
   └─── Input0 ─ conv2d_4/BiasAdd
31 dense/kernel  Const
32 dense/kernel/read  Identity
   └─── Input0 ─ dense/kernel
33 dense/bias  Const
34 dense/bias/read  Identity
   └─── Input0 ─ dense/bias
35 dense/Tensordot/transpose/perm  Const
36 dense/Tensordot/transpose  Transpose
   └─── Input0 ─ conv2d_4/Relu
   └─── Input1 ─ dense/Tensordot/transpose/perm
37 dense/Tensordot/Reshape/shape  Const
38 dense/Tensordot/Reshape  Reshape
   └─── Input0 ─ dense/Tensordot/transpose
   └─── Input1 ─ dense/Tensordot/Reshape/shape
39 dense/Tensordot/transpose_1/perm  Const
40 dense/Tensordot/transpose_1  Transpose
   └─── Input0 ─ dense/kernel/read
   └─── Input1 ─ dense/Tensordot/transpose_1/perm
41 dense/Tensordot/Reshape_1/shape  Const
42 dense/Tensordot/Reshape_1  Reshape
   └─── Input0 ─ dense/Tensordot/transpose_1
   └─── Input1 ─ dense/Tensordot/Reshape_1/shape
43 dense/Tensordot/MatMul  MatMul
   └─── Input0 ─ dense/Tensordot/Reshape
   └─── Input1 ─ dense/Tensordot/Reshape_1
44 dense/Tensordot/shape  Const
45 dense/Tensordot  Reshape
   └─── Input0 ─ dense/Tensordot/MatMul
   └─── Input1 ─ dense/Tensordot/shape
46 dense/BiasAdd  BiasAdd
   └─── Input0 ─ dense/Tensordot
   └─── Input1 ─ dense/bias/read
47 dense/Relu  Relu
   └─── Input0 ─ dense/BiasAdd
48 logit/kernel  Const
49 logit/kernel/read  Identity
   └─── Input0 ─ logit/kernel
50 logit/bias  Const
51 logit/bias/read  Identity
   └─── Input0 ─ logit/bias
52 logit/Tensordot/transpose/perm  Const
53 logit/Tensordot/transpose  Transpose
   └─── Input0 ─ dense/Relu
   └─── Input1 ─ logit/Tensordot/transpose/perm
54 logit/Tensordot/Reshape/shape  Const
55 logit/Tensordot/Reshape  Reshape
   └─── Input0 ─ logit/Tensordot/transpose
   └─── Input1 ─ logit/Tensordot/Reshape/shape
56 logit/Tensordot/transpose_1/perm  Const
57 logit/Tensordot/transpose_1  Transpose
   └─── Input0 ─ logit/kernel/read
   └─── Input1 ─ logit/Tensordot/transpose_1/perm
58 logit/Tensordot/Reshape_1/shape  Const
59 logit/Tensordot/Reshape_1  Reshape
   └─── Input0 ─ logit/Tensordot/transpose_1
   └─── Input1 ─ logit/Tensordot/Reshape_1/shape
60 logit/Tensordot/MatMul  MatMul
   └─── Input0 ─ logit/Tensordot/Reshape
   └─── Input1 ─ logit/Tensordot/Reshape_1
61 logit/Tensordot/shape  Const
62 logit/Tensordot  Reshape
   └─── Input0 ─ logit/Tensordot/MatMul
   └─── Input1 ─ logit/Tensordot/shape
63 logit/BiasAdd  BiasAdd
   └─── Input0 ─ logit/Tensordot
   └─── Input1 ─ logit/bias/read
64 output  Identity
   └─── Input0 ─ logit/BiasAdd
Maybe you can try the --allow_unconsumed_nodes parameter of the snpe-tensorflow-to-dlc command.
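For reference, the invocation would look roughly like the line below. Only --allow_unconsumed_nodes is the flag being suggested here; the other flag names, the input name, and the dimensions are illustrative and may differ between SNPE releases, so please check snpe-tensorflow-to-dlc --help for your version.

# Illustrative only; verify exact flag names against your SNPE release.
snpe-tensorflow-to-dlc --graph frozen_model.pb \
                       --input_dim input_layer "1,128,128,1" \
                       --out_node output \
                       --dlc model.dlc \
                       --allow_unconsumed_nodes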
What does that option do exactly? I tried running with it and the result was a different error, so it seems like some progress has been made.
Your graph seems to have Reshape operations that reshape weights. This is not supported, and SNPE fails to convert because it thinks you have a reshape layer in the model that operates on weights rather than on the graph computation.
Please post a minimal sample graph that reproduces the error if you require further support.
Dear dmarques, I am not sure where the reshape operations are coming from; the graph is defined in Python as follows:
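A rough reconstruction from the topology dump above (filter counts, kernel sizes, and the input shape are placeholders, not the original values):

import tensorflow as tf

# Sketch reconstructed from the topology dump; sizes are placeholders.
input_layer = tf.placeholder(tf.float32, shape=[None, 128, 128, 1], name="input_layer")

conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=3, activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=3, activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)
conv3 = tf.layers.conv2d(pool2, filters=64, kernel_size=3, activation=tf.nn.relu)
conv4 = tf.layers.conv2d(conv3, filters=64, kernel_size=3, activation=tf.nn.relu)

# tf.layers.dense applied to a 4D tensor is what introduces the Tensordot
# (Transpose/Reshape/MatMul) subgraph seen in nodes 35-47 and 52-62 above.
dense = tf.layers.dense(conv4, units=128, activation=tf.nn.relu, name="dense")
logit = tf.layers.dense(dense, units=10, name="logit")
output = tf.identity(logit, name="output")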
We don't currently support tf.layers.dense. It transforms the weights and biases in a way that is not supported by the converter.
I tried using the tflearn.layers.fully_connected API in your example and that will work, although be aware that we don't currently support batching and your fully connected operations are using batches.
Thanks dmarques! Could you post a sample of the code you used to get it to work with tflearn.layers.fully_connected? I haven't used that API and it would be helpful to me.
You can find samples at http://tflearn.org/getting_started/
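For example, a minimal sketch in the spirit of that guide (layer counts, sizes, and the input shape are illustrative placeholders, not taken from your model):

import tflearn

# Minimal tflearn sketch; shapes and sizes are illustrative placeholders.
net = tflearn.input_data(shape=[None, 128, 128, 1], name="input_layer")
net = tflearn.conv_2d(net, 32, 3, activation='relu')
net = tflearn.max_pool_2d(net, 2)
net = tflearn.conv_2d(net, 64, 3, activation='relu')
net = tflearn.max_pool_2d(net, 2)
# fully_connected flattens its incoming 4D tensor internally and emits a
# plain MatMul + BiasAdd, rather than the Tensordot subgraph.
net = tflearn.fully_connected(net, 128, activation='relu')
net = tflearn.fully_connected(net, 10, activation='linear', name='output')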
@dmarques according to the documentation (v1.6.0), both dense and reshape layers are supported on all cores?
I'm not sure using tflearn is going to work, because I am unable to get it to train on my data (where the target is 2D one-hot encoded), and in addition I cannot understand how it would feed the output of a 2D conv layer into a fully connected layer without a reshape or squeeze operation. A plain tf.matmul wouldn't work in this case.
The issue here is that there is a reshape operation being applied to the weights (introduced by the tf.layers.dense API), and the converter misinterprets it as part of the model execution. It therefore tries to convert it to a layer, which it can't, since there are no input layers feeding it.
You can use a reshape between the convolution and the fully connected layer to flatten the tensor, and it will work fine.
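A sketch of that workaround, continuing the illustrative names from the earlier snippet (assuming conv4 is the last convolution activation): flattening the conv output to rank 2 first means tf.layers.dense emits a plain MatMul + BiasAdd instead of the Tensordot subgraph.

# Sketch of the suggested workaround; names and sizes are illustrative.
shape = conv4.get_shape().as_list()                       # [None, H, W, C]
flat = tf.reshape(conv4, [-1, shape[1] * shape[2] * shape[3]])

# On a rank-2 input, tf.layers.dense produces a standard MatMul + BiasAdd,
# which the converter can resolve to a fully connected layer.
dense = tf.layers.dense(flat, units=128, activation=tf.nn.relu, name="dense")
logit = tf.layers.dense(dense, units=10, name="logit")
output = tf.identity(logit, name="output")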