Unable to convert the model which is trained using TensorFlow API containing layers (tf.layers) supported by SNPE SDK

Unable to convert the model which is trained using TensorFlow API containing layers (tf.layers) supported by SNPE SDK
gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-11-12 04:56
  1. Trained a deep learning model using the tf.layers API from TensorFlow; all of the layers used are supported by SNPE. 
  2. Converted the resulting model checkpoint files into a frozen graph (protobuf) file (a rough freezing sketch is included after the command below). 
  3. When trying to convert the frozen graph into a DLC file, none of the Conv2D nodes is consumed (converted) during the conversion process.

Command used to convert to DLC: snpe-tensorflow-to-dlc  --graph frozen_inference_graph.pb -i "input_image_tensor" 128,128,3 --out_node logits/MatMul --dlc output.dlc
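For reference, step 2 (checkpoint to frozen graph) might look roughly like the sketch below, which rebuilds the graph with the LandmarkModel class shown further down and folds the variables into constants. This is only an illustration, not the original export script: the checkpoint path, the output_size value, and the choice of 'logits/BiasAdd' as the freeze output node are assumptions.

import tensorflow as tf

def freeze_landmark_model(checkpoint_path, frozen_graph_path):
    with tf.Graph().as_default() as graph:
        # Rebuild the inference graph with the same input name used for conversion.
        image = tf.placeholder(tf.uint8, shape=[None, 128, 128, 3],
                               name='input_image_tensor')
        LandmarkModel(output_size=136)(image)  # output_size is a placeholder value

        with tf.Session(graph=graph) as sess:
            # Restore the trained weights, then replace variables with constants.
            tf.train.Saver().restore(sess, checkpoint_path)
            frozen = tf.graph_util.convert_variables_to_constants(
                sess, graph.as_graph_def(), ['logits/BiasAdd'])

    # Write the frozen GraphDef to disk as a protobuf file.
    with tf.gfile.GFile(frozen_graph_path, 'wb') as f:
        f.write(frozen.SerializeToString())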

The model architecture:

import tensorflow as tf


class LandmarkModel(object):
    def __init__(self, output_size):
        self.output_size = output_size

    def __call__(self, input_tensor):
        # |== Layer 0: input layer ==|
        # Input feature x should be of shape (batch_size, image_width, image_height,
        # color_channels). As we will directly use the decoded image tensor of
        # data type int8, a conversion to float is performed.
        inputs = tf.cast(input_tensor, tf.float32)

        # |== Layer 1 ==|

        # Convolutional layer.
        # Computes 32 features using a 3x3 filter with ReLU activation.
        conv1 = tf.layers.conv2d(
            inputs=inputs,
            filters=32,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Pooling layer.
        # First max pooling layer with a 2x2 filter and stride of 2.
        pool1 = tf.layers.max_pooling2d(
            inputs=conv1,
            pool_size=[2, 2],
            strides=(2, 2),
            padding='valid')

        # |== Layer 2 ==|

        # Convolutional layer
        # Computes 64 features using a 3x3 filter with ReLU activation.
        conv2 = tf.layers.conv2d(
            inputs=pool1,
            filters=64,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Convolutional layer
        # Computes 64 features using a 3x3 filter with ReLU activation.
        conv3 = tf.layers.conv2d(
            inputs=conv2,
            filters=64,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Pooling layer
        # Second max pooling layer with a 2x2 filter and stride of 2.
        pool2 = tf.layers.max_pooling2d(
            inputs=conv3,
            pool_size=[2, 2],
            strides=(2, 2),
            padding='valid')

        # |== Layer 3 ==|

        # Convolutional layer
        # Computes 64 features using a 3x3 filter with ReLU activation.
        conv4 = tf.layers.conv2d(
            inputs=pool2,
            filters=64,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Convolutional layer
        # Computes 64 features using a 3x3 filter with ReLU activation.
        conv5 = tf.layers.conv2d(
            inputs=conv4,
            filters=64,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Pooling layer
        # Third max pooling layer with a 2x2 filter and stride of 2.
        pool3 = tf.layers.max_pooling2d(
            inputs=conv5,
            pool_size=[2, 2],
            strides=(2, 2),
            padding='valid')

        # |== Layer 4 ==|

        # Convolutional layer
        # Computes 128 features using a 3x3 filter with ReLU activation.
        conv6 = tf.layers.conv2d(
            inputs=pool3,
            filters=128,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Convolutional layer
        # Computes 128 features using a 3x3 filter with ReLU activation.
        conv7 = tf.layers.conv2d(
            inputs=conv6,
            filters=128,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # Pooling layer
        # Fourth max pooling layer with a 2x2 filter and stride of 1.
        pool4 = tf.layers.max_pooling2d(
            inputs=conv7,
            pool_size=[2, 2],
            strides=(1, 1),
            padding='valid')

        # |== Layer 5 ==|

        # Convolutional layer
        conv8 = tf.layers.conv2d(
            inputs=pool4,
            filters=256,
            kernel_size=[3, 3],
            strides=(1, 1),
            padding='valid',
            activation=tf.nn.relu)

        # |== Layer 6 ==|

        # Flatten tensor into a batch of vectors
        flatten = tf.layers.flatten(inputs=conv8)

        # Dense layer 1, a fully connected layer.
        dense1 = tf.layers.dense(
            inputs=flatten,
            units=1024,
            activation=tf.nn.relu,
            use_bias=True)

        # Dense layer 2, also known as the output layer.
        logits = tf.layers.dense(
            inputs=dense1,
            units=self.output_size,
            activation=None,
            use_bias=True,
            name="logits")
        logits = tf.identity(logits, 'final_dense')

        return logits
The error message:
$ snpe-tensorflow-to-dlc  --graph frozen_inference_graph.pb -i "input_image_tensor" 128,128,3 --out_node logits/MatMul --dlc output.dlc
2019-07-08 20:34:55.930900: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-08 20:34:56.009608: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-08 20:34:56.010099: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.13GiB
2019-07-08 20:34:56.010135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-07-08 20:34:56.230162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-08 20:34:56.230199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-07-08 20:34:56.230206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-07-08 20:34:56.230378: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2843 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_6/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_8/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_5/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_4/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_8/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_5/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,713 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_2/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_5/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (max_pooling2d_3/MaxPool) not consumed by converter: MaxPool.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (flatten/strided_slice) not consumed by converter: StridedSlice.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (flatten/Reshape/shape) not consumed by converter: Pack.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (flatten/Reshape) not consumed by converter: Reshape.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (input_to_float) not consumed by converter: Cast.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_2/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_4/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_6/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_3/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,714 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_6/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_2/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_3/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_3/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/MatMul) not consumed by converter: MatMul.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_7/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_7/BiasAdd) not consumed by converter: BiasAdd.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_7/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (max_pooling2d_2/MaxPool) not consumed by converter: MaxPool.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logits/MatMul) not consumed by converter: MatMul.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (max_pooling2d_4/MaxPool) not consumed by converter: MaxPool.
2019-07-08 20:35:03,715 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_4/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,716 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (Reshape) not consumed by converter: Reshape.
2019-07-08 20:35:03,716 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (max_pooling2d/MaxPool) not consumed by converter: MaxPool.
2019-07-08 20:35:03,716 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d/Relu) not consumed by converter: Relu.
2019-07-08 20:35:03,716 - 351 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (conv2d_8/Conv2D) not consumed by converter: Conv2D.
2019-07-08 20:35:03,716 - 106 - ERROR - Conversion failed: ERROR_TF_OPERATION_NOT_MAPPED_TO_LAYER: Some operations in the Tensorflow graph were not resolved to a layer. You can use --allow_unconsumed_nodes for partial graph resolution!

 

 

gesqdn-forum
Join Date: 4 Nov 18
Posts: 184
Posted: Tue, 2019-11-12 05:11
The above issue is resolved. The cause is the Cast layer in the model: it is not converted, so when '--allow_unconsumed_nodes' is used with the original input node, all subsequent layers become disconnected from the input and are removed, and the converted model will not give the right results. 

 

You can try the command below, which converted the model into a DLC successfully:

snpe-tensorflow-to-dlc  --graph <.pb file> -i "input_to_float" 1,128,128,3 --out_node 'logits/BiasAdd' --dlc output.dlc --verbose --allow_unconsumed_nodes
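If you want to double-check the node names before running the converter, a minimal TF 1.x sketch like the one below can list what is actually present in the frozen graph (the .pb file name is assumed to match the one above):

import tensorflow as tf

# Load the frozen GraphDef and verify the nodes passed via -i and --out_node.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

node_names = {node.name for node in graph_def.node}
for wanted in ('input_to_float', 'logits/BiasAdd'):
    print(wanted, 'found' if wanted in node_names else 'MISSING')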
