Forums - UDL tutorial produced an error message

UDL tutorial produced an error message
yanbophx
Join Date: 16 Oct 17
Posts: 17
Posted: Mon, 2017-11-20 04:35

Hi all,

I encountered a problem when converting caffemodel to dlc using the example provided by UDL tutorial.

F1120 20:23:22.949683 13044 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: MyCustomScale (known types: AbsVal, Accuracy, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultinomialLogisticLoss, PReLU, Parameter, Pooling, Power, Python, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, WindowData)
*** Check failure stack trace: ***
Aborted (core dumped)

The message above appeared after I ran the following command:

:~/snpe-1.6.0/examples/Python/UdlExample$ python snpe-caffe-to-dlc-udl  --caffe_txt $CAFFE_HOME/examples/mnist/mycustomlenet.prototxt --caffe_bin $CAFFE_HOME/examples/mnist/mycustomlenet_iter_10000.caffemodel --dlc mycustomlenet.dlc

Does anyone have any thoughts on how to correct this issue?

Thanks,

 

The complete output log:

-----------------------------------------------------------------------------------------------------------

WARNING: Logging before InitGoogleLogging() is written to STDERR
W1120 20:23:22.592881 13044 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W1120 20:23:22.592921 13044 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W1120 20:23:22.592923 13044 _caffe.cpp:142] Net('/home/yihangrd/caffe/examples/mnist/mycustomlenet.prototxt', 1, weights='/home/yihangrd/caffe/examples/mnist/mycustomlenet_iter_10000.caffemodel')
I1120 20:23:22.594035 13044 net.cpp:51] Initializing net from parameters:
name: "LeNet"
state {
  phase: TEST
  level: 0
}
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 64
      dim: 1
      dim: 28
      dim: 28
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "scale"
  type: "MyCustomScale"
  bottom: "ip1"
  top: "scale"
  scale_param {
    bias_term: false
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "scale"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
I1120 20:23:22.594142 13044 layer_factory.hpp:77] Creating layer data
I1120 20:23:22.594148 13044 net.cpp:84] Creating Layer data
I1120 20:23:22.594166 13044 net.cpp:380] data -> data
I1120 20:23:22.594178 13044 net.cpp:122] Setting up data
I1120 20:23:22.594185 13044 net.cpp:129] Top shape: 64 1 28 28 (50176)
I1120 20:23:22.594188 13044 net.cpp:137] Memory required for data: 200704
I1120 20:23:22.594203 13044 layer_factory.hpp:77] Creating layer conv1
I1120 20:23:22.594209 13044 net.cpp:84] Creating Layer conv1
I1120 20:23:22.594225 13044 net.cpp:406] conv1 <- data
I1120 20:23:22.594229 13044 net.cpp:380] conv1 -> conv1
I1120 20:23:22.947041 13044 net.cpp:122] Setting up conv1
I1120 20:23:22.947059 13044 net.cpp:129] Top shape: 64 20 24 24 (737280)
I1120 20:23:22.947062 13044 net.cpp:137] Memory required for data: 3149824
I1120 20:23:22.947077 13044 layer_factory.hpp:77] Creating layer pool1
I1120 20:23:22.947087 13044 net.cpp:84] Creating Layer pool1
I1120 20:23:22.947089 13044 net.cpp:406] pool1 <- conv1
I1120 20:23:22.947093 13044 net.cpp:380] pool1 -> pool1
I1120 20:23:22.947103 13044 net.cpp:122] Setting up pool1
I1120 20:23:22.947106 13044 net.cpp:129] Top shape: 64 20 12 12 (184320)
I1120 20:23:22.947108 13044 net.cpp:137] Memory required for data: 3887104
I1120 20:23:22.947110 13044 layer_factory.hpp:77] Creating layer conv2
I1120 20:23:22.947118 13044 net.cpp:84] Creating Layer conv2
I1120 20:23:22.947121 13044 net.cpp:406] conv2 <- pool1
I1120 20:23:22.947124 13044 net.cpp:380] conv2 -> conv2
I1120 20:23:22.947962 13044 net.cpp:122] Setting up conv2
I1120 20:23:22.947973 13044 net.cpp:129] Top shape: 64 50 8 8 (204800)
I1120 20:23:22.947975 13044 net.cpp:137] Memory required for data: 4706304
I1120 20:23:22.947983 13044 layer_factory.hpp:77] Creating layer pool2
I1120 20:23:22.947988 13044 net.cpp:84] Creating Layer pool2
I1120 20:23:22.947991 13044 net.cpp:406] pool2 <- conv2
I1120 20:23:22.947995 13044 net.cpp:380] pool2 -> pool2
I1120 20:23:22.948002 13044 net.cpp:122] Setting up pool2
I1120 20:23:22.948006 13044 net.cpp:129] Top shape: 64 50 4 4 (51200)
I1120 20:23:22.948009 13044 net.cpp:137] Memory required for data: 4911104
I1120 20:23:22.948011 13044 layer_factory.hpp:77] Creating layer ip1
I1120 20:23:22.948016 13044 net.cpp:84] Creating Layer ip1
I1120 20:23:22.948019 13044 net.cpp:406] ip1 <- pool2
I1120 20:23:22.948022 13044 net.cpp:380] ip1 -> ip1
I1120 20:23:22.949455 13044 net.cpp:122] Setting up ip1
I1120 20:23:22.949463 13044 net.cpp:129] Top shape: 64 500 (32000)
I1120 20:23:22.949466 13044 net.cpp:137] Memory required for data: 5039104
I1120 20:23:22.949473 13044 layer_factory.hpp:77] Creating layer relu1
I1120 20:23:22.949478 13044 net.cpp:84] Creating Layer relu1
I1120 20:23:22.949481 13044 net.cpp:406] relu1 <- ip1
I1120 20:23:22.949484 13044 net.cpp:367] relu1 -> ip1 (in-place)
I1120 20:23:22.949654 13044 net.cpp:122] Setting up relu1
I1120 20:23:22.949661 13044 net.cpp:129] Top shape: 64 500 (32000)
I1120 20:23:22.949662 13044 net.cpp:137] Memory required for data: 5167104
I1120 20:23:22.949666 13044 layer_factory.hpp:77] Creating layer scale
F1120 20:23:22.949683 13044 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: MyCustomScale (known types: AbsVal, Accuracy, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultinomialLogisticLoss, PReLU, Parameter, Pooling, Power, Python, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, WindowData)
*** Check failure stack trace: ***
Aborted (core dumped)
 

howardd
Join Date: 3 Nov 17
Posts: 5
Posted: Mon, 2017-11-27 14:31

Hi yanbophx,

It appears from the stack trace that your build of Caffe doesn't know about the "MyCustomScale" layer type. If you are following the UDL tutorial in the SNPE SDK documentation, repeat the steps under the heading "Basing a UDL on the Scale Layer in Caffe" to make sure your Caffe installation has been rebuilt with the custom layer type registered.
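For context, Caffe resolves each layer's `type` string against a registry that is populated when Caffe is compiled, and the fatal `CHECK` in `layer_factory.hpp` fires when that lookup misses. Here is a simplified, illustrative Python sketch of that lookup (the registry contents and function names below are hypothetical, not Caffe's actual API):

```python
# Simplified sketch of Caffe's layer registry lookup (layer_factory.hpp).
# A layer type is usable only if it was registered into the factory when
# Caffe was built; "MyCustomScale" is absent until Caffe is rebuilt with
# the UDL tutorial's custom layer added.
registry = {"Convolution", "Pooling", "InnerProduct", "ReLU", "Scale", "Softmax"}

def create_layer(layer_type):
    # Mirrors: CHECK_EQ(registry.count(type), 1) << "Unknown layer type: ..."
    if layer_type not in registry:
        raise KeyError("Unknown layer type: " + layer_type)
    return layer_type + " layer created"

# Registering the custom type — this is what rebuilding Caffe with the
# tutorial's MyCustomScale sources accomplishes:
registry.add("MyCustomScale")
print(create_layer("MyCustomScale"))
```

So the fix is on the Caffe side, not in the converter script: once the rebuilt Caffe has "MyCustomScale" in its registry, the same `snpe-caffe-to-dlc-udl` command should get past this check.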

