Forums - yolov5 qnn-net-run error

llliqnp
Join Date: 31 Jan 24
Posts: 2
Posted: Tue, 2024-02-27 23:33

I completed quantization of the model, but when I run it on the DSP on SM8250 it seems there is an error.

qnn-net-run pid:7222
[ ERROR ] QnnModel::addNode() validating node _model_23_Slice failed.
[ ERROR ] best_qu.addNode(QNN_OPCONFIG_VERSION_1, "_model_23_Slice", "qti.aisw", "StridedSlice", params__model_23_Slice, 5, inputs__model_23_Slice, 1, outputs__model_23_Slice, 1 ) expected MODEL_NO_ERROR, got MODEL_GRAPH_ERROR
Graph Prepare failure
Segmentation fault

I don't know what is going wrong; there were no errors when quantizing the model.
mengweiw
Join Date: 26 Nov 23
Posts: 4
Posted: Sun, 2024-03-03 02:10

Dear customer,

The maximum supported input and output rank for the StridedSlice op is 5. Please check this op.

BRs.

llliqnp
Join Date: 31 Jan 24
Posts: 2
Posted: Thu, 2024-03-07 01:19

Hi mengweiw,

I'm pretty sure my StridedSlice input and output ranks are both less than or equal to 5, and there were no errors when converting the ONNX model.

Here is the StridedSlice portion of my generated model .cpp code.

  /* ADDING NODE FOR _model_23_Slice */
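  // ranges is a {5, 3} tensor: one [begin, end, stride] triplet per input dimension, i.e. input rank 5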
  uint32_t dimensions___model_23_Slice_ranges[] = {5, 3};
  int32_t __model_23_Slice_ranges[] = {0, 1, 1, 0, 3, 1, 0, 80, 1, 0, 80, 1, 0, 5, 1};
  Qnn_Param_t params__model_23_Slice[] = {
    {.paramType=QNN_PARAMTYPE_TENSOR,
     .name="ranges",
     {.tensorParam=(Qnn_Tensor_t) {
          .version= QNN_TENSOR_VERSION_1,
          {.v1= {
            .id=0,
            .name= "__model_23_Slice_ranges",
            .type= QNN_TENSOR_TYPE_STATIC,
            .dataFormat= QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER,
            .dataType= QNN_DATATYPE_INT_32,
            .quantizeParams= { QNN_DEFINITION_UNDEFINED,
                               QNN_QUANTIZATION_ENCODING_UNDEFINED,
                               {.scaleOffsetEncoding= {.scale= 0.0000000000000000f, .offset= 0}}},
            .rank= 2,
            .dimensions=dimensions___model_23_Slice_ranges,
            .memType= QNN_TENSORMEMTYPE_RAW,
            {.clientBuf= { .data=(uint8_t*)__model_23_Slice_ranges,
                           .dataSize=60}}}}}}},
    {.paramType=QNN_PARAMTYPE_SCALAR,
     .name="begin_mask",
     {.scalarParam= (Qnn_Scalar_t) {QNN_DATATYPE_UINT_32, {.uint32Value = 0}}}},
    {.paramType=QNN_PARAMTYPE_SCALAR,
     .name="end_mask",
     {.scalarParam= (Qnn_Scalar_t) {QNN_DATATYPE_UINT_32, {.uint32Value = 0}}}},
    {.paramType=QNN_PARAMTYPE_SCALAR,
     .name="new_axes_mask",
     {.scalarParam= (Qnn_Scalar_t) {QNN_DATATYPE_UINT_32, {.uint32Value = 0}}}},
    {.paramType=QNN_PARAMTYPE_SCALAR,
     .name="shrink_axes",
     {.scalarParam= (Qnn_Scalar_t) {QNN_DATATYPE_UINT_32, {.uint32Value = 0}}}}
  };
  const char*  inputs__model_23_Slice[] = {
    "_model_23_Transpose_output_0"
  };
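  // Output shape {1, 3, 80, 80, 5} -> rank 5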
  uint32_t dimensions__model_23_Slice_output_0[] = {1, 3, 80, 80, 5};
  Qnn_Tensor_t outputs__model_23_Slice[] = {
    (Qnn_Tensor_t) {
          .version= QNN_TENSOR_VERSION_1,
          {.v1= {
            .id=0,
            .name= "_model_23_Slice_output_0",
            .type= QNN_TENSOR_TYPE_NATIVE,
            .dataFormat= QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER,
            .dataType= QNN_DATATYPE_UFIXED_POINT_8,
            .quantizeParams= { QNN_DEFINITION_DEFINED,
                               QNN_QUANTIZATION_ENCODING_SCALE_OFFSET,
                               {.scaleOffsetEncoding= {.scale= 0.0795035064220428f, .offset= -167}}},
            .rank= 5,
            .dimensions=dimensions__model_23_Slice_output_0,
            .memType= QNN_TENSORMEMTYPE_RAW,
            {.clientBuf= { .data=nullptr,
                           .dataSize=0}}}}}
  };
  VALIDATE(best_qu.addNode(QNN_OPCONFIG_VERSION_1, // Op_Config_t Version
                           "_model_23_Slice", // Node Name
                           "qti.aisw", // Package Name
                           "StridedSlice", // Qnn Node Type
                           params__model_23_Slice, // Node Params
                           5, // Num Node Params
                           inputs__model_23_Slice, // Input Tensor Names
                           1, // Num Input Tensor Names
                           outputs__model_23_Slice, // Output Tensors
                           1 // Num Output Tensors
  ), err);
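
To double-check, I also put together a minimal standalone sanity check of the ranks declared above against the rank-5 limit you mentioned. It is only a sketch that re-states the numbers already present in the generated code (the {5, 3} ranges shape and the {1, 3, 80, 80, 5} output shape); it is not a QNN tool.

// Standalone sketch: cross-check the StridedSlice declaration above against
// the rank-5 limit mentioned earlier in this thread. Not a QNN SDK tool.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  // Values copied from the generated model .cpp above.
  const std::vector<uint32_t> rangesDims = {5, 3};  // {input rank, 3}
  const std::vector<int32_t>  ranges     = {0, 1, 1, 0, 3, 1, 0, 80, 1,
                                            0, 80, 1, 0, 5, 1};
  const std::vector<uint32_t> outputDims = {1, 3, 80, 80, 5};

  const uint32_t inputRank  = rangesDims[0];  // one [begin, end, stride] per input dim
  const uint32_t outputRank = static_cast<uint32_t>(outputDims.size());

  // The ranges tensor must hold exactly rank * 3 int32 values.
  if (ranges.size() != static_cast<size_t>(inputRank) * 3) {
    std::printf("ranges size %zu does not match rank %u * 3\n",
                ranges.size(), inputRank);
    return 1;
  }

  // Check each dimension's (end - begin) / stride against the declared output shape.
  for (uint32_t d = 0; d < inputRank; ++d) {
    const int32_t begin  = ranges[d * 3 + 0];
    const int32_t end    = ranges[d * 3 + 1];
    const int32_t stride = ranges[d * 3 + 2];
    const int32_t extent = (end - begin + stride - 1) / stride;  // ceiling division
    std::printf("dim %u: [%d, %d) stride %d -> extent %d (declared %u)\n",
                d, begin, end, stride, extent, outputDims[d]);
  }

  std::printf("input rank = %u, output rank = %u (stated limit: 5)\n",
              inputRank, outputRank);
  return 0;
}

With these values, the input and output ranks both come out as exactly 5 and each per-dimension extent matches the declared output shape, so the declaration looks internally consistent to me.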

Is there any tool that can determine if there is an error in my op?

Sincerely
