Forums - qti.aisw.converters.tensorflow.util.ConverterError

qti.aisw.converters.tensorflow.util.ConverterError
22imonreal
Join Date: 10 Feb 21
Posts: 80
Posted: Thu, 2021-05-27 06:23

Hello,

I am trying to convert a model to a DLC container and run it on the RB5 DSP or AIP, but I keep getting the following error:

qti.aisw.converters.tensorflow.util.ConverterError: build_layer failed with Exception ERROR_TF_LAYER_NO_INPUT_FOUND: Convolution layer conv_deconv_b6_longitude_latitude/efficientnet-b6/block2f_project_conv/Conv2D requires at least one input layer. in layer ConvolutionLayerBuilder

Why would this be and how could I resolve it?

I have read the following:

The reason for the issue is that there is a reshape operation being applied to weights (introduced by the tf.layers.dense API). The converter misinterprets it as part of the model execution and hence tries to convert it to a layer, which it can't, since there are no input layers to it. Reshape operations that reshape weights are not supported, and SNPE fails to convert because it thinks the reshape layer is part of the graph computation rather than weight handling. tf.layers.dense is not currently supported; it transforms weights and biases in a way that is not currently supported.
You can use a reshape (the tf.reshape API) between a convolution and a fully connected layer to flatten the tensor and it will work fine. The tflearn.layers.fully_connected API also works, although be aware we don't currently support batch…
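To make the quoted advice concrete, here is a minimal sketch (plain NumPy, with made-up shapes, not converter code) of what "flatten with an explicit reshape between the convolution and the fully connected layer" amounts to: the reshape acts on the activation tensor, not the weights, so the converter sees it as ordinary graph computation with a real input layer.

```python
import numpy as np

# Hypothetical conv output: batch of 1, 5x5 feature map, 8 channels.
conv_out = np.random.rand(1, 5, 5, 8).astype(np.float32)

# Flatten explicitly (the tf.reshape equivalent) so the fully
# connected layer receives a plain 2-D activation tensor.
flat = conv_out.reshape(conv_out.shape[0], -1)   # shape (1, 200)

# Fully connected layer written as an explicit matmul + bias,
# with no weight-reshape op for the converter to misinterpret.
w = np.random.rand(200, 10).astype(np.float32)
b = np.zeros(10, dtype=np.float32)
fc_out = flat @ w + b                            # shape (1, 10)
print(flat.shape, fc_out.shape)
```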
 
Thanks

ap.arunraj
Join Date: 20 Apr 20
Posts: 21
Posted: Sun, 2021-06-06 20:02

Can you provide more details on the model you are trying to convert?

22imonreal
Join Date: 10 Feb 21
Posts: 80
Posted: Tue, 2021-06-08 05:44

Hi,

While I cannot share the model itself, the architectures we tried to convert to DLC are as follows.

The first model's architecture:

def get_swish():
    backend = tf.keras.backend
    def swish(x):
        """Swish activation function: x * sigmoid(x).
        Reference: [Searching for Activation Functions](https://arxiv.org/abs/1710.05941)
        """
        if backend.backend() == 'tensorflow':
            try:
                # The native TF implementation has a more
                # memory-efficient gradient implementation
                return backend.tf.nn.swish(x)
            except AttributeError:
                pass
        return x * backend.sigmoid(x)
    return swish

def model1():
    activation1 = get_swish()
    activation2 = tf.keras.layers.LeakyReLU(alpha=0.1)
    input_tensor = tf.keras.layers.Input(shape=(80, 40, 6))
    x = tf.keras.layers.Conv2D(32, 3, activation=activation1, padding='same')(input_tensor)
    x = tf.keras.layers.Activation(activation=activation2)(x)
    model = tf.keras.models.Model(inputs=input_tensor, outputs=x)
    model.compile()
    model.summary()
    return model

And it gives:

538 - 193 - WARNING - Output node Identity, specified via command line, does not exist in graph.
539 - 183 - ERROR - Conversion FAILED!

ValueError: Exactly one consumer required to squash functional_1/conv2d/IdentityN_noop1 into previous. Got 3

My specific question: why is it not possible to convert a bound custom activation to the Activation layer?
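The "Exactly one consumer required to squash ... IdentityN_noop1" message suggests the converter trips over the IdentityN no-op nodes that wrapping the custom activation inserts into the graph, not over the math itself, which is just x * sigmoid(x). A plain NumPy version (illustration only, not converter-related code):

```python
import numpy as np

def swish(x):
    # x * sigmoid(x) == x / (1 + exp(-x)), the same function
    # the bound Keras closure above computes.
    return x / (1.0 + np.exp(-x))

print(swish(np.array([0.0, 1.0, 10.0])))
```

A possible workaround (untested here) is to use a built-in activation where the installed TF version supports it, so the converter sees standard graph ops instead of the bound closure.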

The second model's architecture:

def model2():
    input_tensor = tf.keras.layers.Input(shape=(80, 40, 6))
    x = tf.keras.layers.Conv2DTranspose(32, 2)(input_tensor)
    model = tf.keras.models.Model(inputs=input_tensor, outputs=x)
    model.compile()
    model.summary()
    return model

563 - 183 - ERROR - Conversion FAILED!

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: <tf.Tensor 'x:0' shape=(None, 80, 40, 6) dtype=float32>

In the latter case, why is compiling the model containing Conv2DTranspose not supported, while deconvolution is in the list of supported operations for the SNPE converter?
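As an aside on the deconvolution itself: with the Keras defaults assumed here (stride 1, 'valid' padding), Conv2DTranspose(32, 2) maps an (80, 40, 6) input to an (81, 41, 32) output, per the standard transposed-convolution size formula. The sketch below is only shape arithmetic, not converter code:

```python
def deconv_output_size(in_size, kernel, stride=1, padding='valid'):
    """Output size of a transposed convolution along one spatial axis."""
    if padding == 'valid':
        # Transposed conv grows the input: (in - 1) * stride + kernel.
        return (in_size - 1) * stride + kernel
    return in_size * stride  # 'same' padding

# Conv2DTranspose(32, 2) with Keras defaults (stride 1, 'valid'):
print(deconv_output_size(80, 2), deconv_output_size(40, 2))  # → 81 41
```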

 

