Forums - Is deconv supported by HTA?

Is deconv supported by HTA?
zhaoyangstar
Join Date: 14 Apr 19
Posts: 23
Posted: Tue, 2019-11-19 05:00

Hi,

Because I need an upsampling operation, I used a deconvolution layer to realize 2x upsampling.
After converting the .caffemodel to .dlc, I tried to quantize the .dlc with the command "snpe-dlc-quantize --input_dlc mnist_deconv.dlc --input_list data/image_list.txt --output_dlc tmp_quantized.dlc --enable_hta" and got the following output:

[INFO] InitializeStderr: DebugLog initialized.
[INFO] Reading DLC: mnist_deconv.dlc
[INFO] Writing intermediate model
[INFO] *** Loading images from input list: data/image_list.txt***
[WARNING] NetworkTopology::populateNetworkDesc network desc inputs is empty. Does this network have data/input layer?
[INFO] Setting activation for layer: data and buffer: data
[INFO] min: 0.000000, max: 1.000000, delta: 0.003922, offset: 0.000000
[INFO] Setting activation for layer: conv1 and buffer: conv1
[INFO] min: -1.842797, max: 2.315727, delta: 0.016308, offset: -113.000000
[INFO] Setting activation for layer: pool1 and buffer: pool1
[INFO] min: -1.218428, max: 2.312245, delta: 0.013846, offset: -88.000000
[INFO] Setting activation for layer: conv2 and buffer: conv2
[INFO] min: -5.615337, max: 6.623218, delta: 0.047994, offset: -117.000000
[INFO] Setting activation for layer: pool2 and buffer: pool2
[INFO] min: -3.486232, max: 6.615917, delta: 0.039616, offset: -88.000000
[INFO] Setting activation for layer: deconv1 and buffer: deconv1
[INFO] min: -7.844623, max: 5.491236, delta: 0.052297, offset: -150.000000
[INFO] Setting activation for layer: pool3 and buffer: pool3
[INFO] min: -3.034158, max: 5.468154, delta: 0.033342, offset: -91.000000
[INFO] Setting activation for layer: ip1 and buffer: ip1
[INFO] min: -3.558097, max: 4.542927, delta: 0.031769, offset: -112.000000
[INFO] Setting activation for layer: relu1 and buffer: relu1.ip1
[INFO] min: 0.000000, max: 4.539277, delta: 0.017801, offset: 0.000000
[INFO] Setting activation for layer: ip2 and buffer: ip2
[INFO] min: -7.941808, max: 14.559981, delta: 0.088242, offset: -90.000000
[INFO] Writing quantized model to: tmp_quantized.dlc
[INFO] Compiling AIX metadata into DLC.
[INFO] Record Version:: 1.1.0.0
[INFO] Compiler Version:: 1.2.1.0
[INFO] Driver Version:: 1.0.0.0
[WARNING] No manual partitions specified. Resorting to automatic partitioning.
[INFO] HTA Subnet 1: <0, 9>
Don't know how to process layer: deconv1
[ERROR] Failed to generate HTA blob for subnet 0; error = 4
[ERROR] Failed to process subnets to produce HTA metadata.
[ERROR] Couldn't compile HTA metadata into DLC.
[INFO] DebugLog shutting down.

It seems that deconv is not supported on HTA? I have verified that the fp32 .dlc is correct, because it runs smoothly on the DSP. Has anyone met a similar problem? Thanks in advance ^_^
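As a side note, the delta/offset pairs in the log appear to follow the usual asymmetric 8-bit scheme (delta = (max - min) / 255, offset = round(min / delta)). A quick sketch to reproduce the conv1 values from the log (my own helper, not part of SNPE):

```python
def quant_params(vmin, vmax, num_steps=255):
    """Asymmetric 8-bit quantization parameters as reported in the log:
    delta = (max - min) / 255, offset = round(min / delta)."""
    delta = (vmax - vmin) / num_steps
    offset = round(vmin / delta)
    return delta, offset

# conv1 activation range taken from the log above
delta, offset = quant_params(-1.842797, 2.315727)
print(f"delta: {delta:.6f}, offset: {offset}")  # delta: 0.016308, offset: -113
```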

I am using snpe-1.23.1.245 and the .prototxt is :

name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "deconv1"
  type: "Deconvolution"
  bottom: "pool2"
  top: "deconv1"
  convolution_param {
    num_output: 50
    pad: 0
    kernel_size: 2
    stride: 2
    weight_filler {
      type: "msra"
    }
  }
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "deconv1"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
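For reference, with kernel_size 2, stride 2, pad 0 the deconv output size is (in - 1) * stride - 2 * pad + kernel = 2 * in, i.e. exact 2x upsampling. A small plain-Python sketch to double-check the shapes I expect through this network (28x28 MNIST input):

```python
def deconv_out(in_size, kernel=2, stride=2, pad=0):
    # Standard transposed-convolution output size (Caffe convention)
    return (in_size - 1) * stride - 2 * pad + kernel

# 28 -> conv1 (k5, s1) -> 24 -> pool1 (k2, s2) -> 12
# -> conv2 (k5, s1) -> 8 -> pool2 (k2, s2) -> 4
# deconv1 should bring pool2's 4x4 back up to 8x8
print(deconv_out(4))  # 8
```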


 

