Softmax in the 3rd dimension on 4D data is unsupported

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Fri, 2017-09-08 02:45
A softmax in the 3rd dimension on 4D data seems unsupported.
 
Here is a simple, stand-alone TensorFlow script that reproduces the issue. It classifies MNIST, the data is downloaded automatically, and the script writes the ProtoBuf graph itself. The full script is in this gist: https://gist.github.com/CNSNPE/400a0d2b67a8729676ff165f99647713
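In essence it does the following (a minimal sketch of the same pattern; the 1x1 convolutions and layer sizes are illustrative stand-ins, not the exact gist contents):

import tensorflow as tf
from tensorflow.python.framework import graph_util

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='input')

# Stand-ins for the real network: any graph that ends in a
# (batch, height, width, classes) tensor triggers the issue.
h = tf.layers.conv2d(x, 16, (1, 1), activation=tf.nn.relu)
logits = tf.layers.conv2d(h, 10, (1, 1))  # (batch, 28, 28, 10)

# On a 4D tensor, tf.nn.softmax is lowered to Shape/Sub/Slice/ConcatV2/
# Reshape ops -- exactly the ops the converter warns about.
probs = tf.nn.softmax(logits, name='softmax')
tf.identity(probs, name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output'])
    with tf.gfile.GFile('graph.pb', 'wb') as f:
        f.write(frozen.SerializeToString())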
 
 
The command I used to create the DLC is:
snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc
The message I get from the tool is:
388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (softmax) operation(s) not consumed by converter: [u'Shape', u'Sub', u'ConcatV2', u'Shape']
 

 

dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Fri, 2017-09-15 12:20

Hello CN,

SNPE does not currently support 4D input tensors (batched inputs). You may want to try the following with a single element in the batch dimension:

snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc --allow_unconsumed_nodes

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Tue, 2017-09-19 00:44

Thanks for your reply. Here is the message I get with the new flag (it is the same whether or not I set the batch size to 1):

2017-09-19 09:42:54,555 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (softmax) operation(s) not consumed by converter: [u'Shape', u'Sub', u'ConcatV2', u'Shape'].
2017-09-19 09:42:54,555 - 388 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Scope (softmax/Slice) operation(s) not consumed by converter: [u'Pack', u'Slice'].
~/snpe-1.4.0/lib/python/converters/tensorflow/layers/reshape.py:83: RuntimeWarning: error_code=1004; error_message=Layer parameters combination is invalid. Layer softmax/Reshape: tensor size mismatch between input {28, 28, 1} and output {1}; error_component=Model Validation; line_no=130; thread_id=140105921783552
  output_name)
2017-09-19 09:42:54,559 - 122 - ERROR - Conversion failed: Reshape layer requires at most one input.

 

 

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Tue, 2017-09-19 02:02

To give a bit more background here:

The included example code (although written as an MNIST classification problem) mimics a semantic-segmentation use case. A typical semantic-segmentation network produces a 4D output with dimensions (batch, width, height, class). To get a final classification, a softmax is usually applied, and only along the last (3rd) dimension (class), not along the batch, width, or height dimensions. TensorFlow only implements softmax on 2D data, so it reshapes from (batch, width, height, class) to (batch * width * height, class) before the softmax and reshapes back afterwards. This reshape/softmax/reshape pattern is what SNPE struggles with, as sketched below.
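Roughly, TensorFlow's internal rewrite looks like this (a sketch of the idea, not the literal implementation; logits is the 4D segmentation output):

shape = tf.shape(logits)                    # dynamic shape ("softmax/Shape")
flat = tf.reshape(logits, [-1, shape[-1]])  # (batch * width * height, class)
flat = tf.nn.softmax(flat)                  # plain 2D softmax
probs = tf.reshape(flat, shape)             # back to (batch, width, height, class)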

Note that I am fine with any batch size, but right now it seems the 'pixel batch' of 28x28 pixels is what causes the problem. Do you perhaps have an example of SNPE-compatible TensorFlow code doing semantic segmentation?

 

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Wed, 2017-10-11 00:48

I have just re-run the original problem with the latest version of SNPE (1.6.0). Adding the --allow_unconsumed_nodes flag does not help either. Here is the error I now get:

2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape_1) not consumed by converter: Shape.
2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Sub) not consumed by converter: Sub.
2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice/begin) not consumed by converter: Pack.
2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice) not consumed by converter: Slice.
2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/concat) not consumed by converter: ConcatV2.
2017-10-11 09:45:13,916 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape) not consumed by converter: Shape.
2017-10-11 09:45:13,916 - 123 - ERROR - Conversion failed: Some operations in the Tensorflow graph were not resolved to a layer!

Did you manage to reproduce the problem given the code I posted?

dmarques
Join Date: 15 Sep 17
Posts: 27
Posted: Wed, 2017-10-11 19:11

Can you post the result of using the option --allow_unconsumed_nodes?

Please post the exact command line used for conversion. We have been able to convert the example graph provided in this thread.

CN
Join Date: 8 Sep 17
Posts: 12
Posted: Fri, 2017-10-13 02:20

I have just re-done everything to be 100% sure. Here is what I did:

Latest SNPE:
 
$ which snpe-tensorflow-to-dlc
~/snpe-1.6.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc
 
First, run the snpe_mnist_unsupported_softmax.py script from the gist I linked (https://gist.github.com/CNSNPE/400a0d2b67a8729676ff165f99647713), resulting in:
 
$ python snpe_mnist_unsupported_softmax.py
(...)
$ ls -hl
-rw-rw-r-- 1 cnsnpe cnsnpe  33K Oct 13 11:13 graph.pb
 
A regular run results in an error:
 
$ snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc
2017-10-13 11:14:11.080119: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:11.080142: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:11.080157: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:11.080161: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:11.080166: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape_1) not consumed by converter: Shape.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Sub) not consumed by converter: Sub.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice/begin) not consumed by converter: Pack.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice) not consumed by converter: Slice.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/concat) not consumed by converter: ConcatV2.
2017-10-13 11:14:11,110 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape) not consumed by converter: Shape.
2017-10-13 11:14:11,110 - 123 - ERROR - Conversion failed: Some operations in the Tensorflow graph were not resolved to a layer!
 
Running again with --allow_unconsumed_nodes also results in an error, a slightly different one:
 
$ snpe-tensorflow-to-dlc --graph graph.pb --input_dim input 28,28,1 --out_node output --dlc graph.dlc --allow_unconsumed_nodes
2017-10-13 11:14:34.225004: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:34.225024: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:34.225038: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:34.225043: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:34.225059: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape_1) not consumed by converter: Shape.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Sub) not consumed by converter: Sub.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice/begin) not consumed by converter: Pack.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Slice) not consumed by converter: Slice.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/concat) not consumed by converter: ConcatV2.
2017-10-13 11:14:34,256 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (softmax/Shape) not consumed by converter: Shape.
2017-10-13 11:14:34,274 - 123 - ERROR - Conversion failed: ERROR_TF_LAYER_INPUT_COUNT_ERROR: Layer Softmax expects 1 input(s), actual 2
Not sure how to continue...

nicolas.dahlquist
Join Date: 4 Dec 17
Posts: 3
Posted: Tue, 2018-02-13 15:50

Bump. We're also affected by this, and would love to see Softmax supported on 3D tensors.

Our use case is a Softmax inside a differentiable depth image-based rendering layer, similar to Deep3D (https://arxiv.org/pdf/1604.03650.pdf). Without Softmax, we have been unable to match the quality of results we obtain with it.
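For reference, the selection step looks roughly like this (a sketch with assumed shapes; selection_logits and shifted_views stand in for our real tensors):

import tensorflow as tf

# Deep3D-style selection layer. Assumed shapes:
#   selection_logits: (batch, H, W, D) logits over D disparity levels
#   shifted_views:    (batch, H, W, D), the input view shifted by each disparity
def select_view(selection_logits, shifted_views):
    # Softmax over the last (disparity) axis -- the 4D softmax SNPE rejects.
    probs = tf.nn.softmax(selection_logits)
    # Probability-weighted sum over disparity levels renders the novel view.
    return tf.reduce_sum(probs * shifted_views, axis=3)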

nicolas.dahlquist
Join Date: 4 Dec 17
Posts: 3
Posted: Sat, 2018-03-10 17:14

We ended up working around this by using a series of point-wise convolutions as an approximation of softmax.

import tensorflow as tf
from keras import backend as K
from keras.layers import Activation, BatchNormalization, Conv2D, Lambda, UpSampling2D

# 'weights', 'selection_layer', and 'bottleneck' come from the surrounding
# model (not shown): fixed disparity weights, the softmax selection output,
# and the encoder bottleneck features.

# Frozen 1x1 convolution producing the reference (softmax-based) disparity map.
softmax_disparity_map = Conv2D(1, (1, 1), use_bias=False, weights=[weights],
                               trainable=False, name='softmax_disparity_map')(selection_layer)

# Trainable stack of point-wise convolutions that learns to approximate it.
with tf.name_scope('softmax_approximation'):
    x = bottleneck
    for layer_width in [16, 8, 8]:
        x = Conv2D(layer_width, (1, 1))(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)

    sigmoid_disparity_map = Conv2D(1, (1, 1), activation='sigmoid', name='sigmoid_disparity_map')(x)
    sigmoid_disparity_map = UpSampling2D()(sigmoid_disparity_map)

# Train the approximation against the softmax output with an L1 loss.
l1_loss = Lambda(lambda t: K.abs(t[0] - t[1]))([softmax_disparity_map, sigmoid_disparity_map])

 

This approximation converges to an L1 loss of .006 for us, which is accurate enough for our application. 
