Forums - SNPE UDO issue

SNPE UDO issue
zhoubobo18
Join Date: 5 Jan 24
Posts: 5
Posted: Sun, 2024-01-07 19:05

Hi, I am using snpe-onnx-to-dlc to convert my .onnx model to a .dlc model with a UDO, following the usage of snpe-onnx-to-dlc described at https://developer.qualcomm.com/sites/default/files/docs/snpe/usergroup2....

snpe-onnx-to-dlc -i <input-onnx-model> \
                 --udo_config_paths <input-model.json> \
                 -o <output-model.dlc>

But I got the error below:

RuntimeError: inferOutputShapes: Please provide the op package library
2024-01-08 10:58:08,782 - 230 - ERROR - Node Range_2682: inferOutputShapes: Please provide the op package library
What should I provide here? I tried --op_package_lib and --converter_op_package_lib with the .so library generated by compilation, but it doesn't work. Any suggestion would help. Thanks in advance.
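
For example, I tried invocations along these lines (the paths are placeholders for my actual files):

snpe-onnx-to-dlc -i <input-onnx-model> \
                 --udo_config_paths <input-model.json> \
                 --op_package_lib <registration-library.so> \
                 -o <output-model.dlc>

snpe-onnx-to-dlc -i <input-onnx-model> \
                 --udo_config_paths <input-model.json> \
                 --converter_op_package_lib <converter-op-package.so> \
                 -o <output-model.dlc>

Both still fail with the same error.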
sanjjey.a.sanjjey
Join Date: 17 May 22
Posts: 55
Posted: Tue, 2024-01-23 23:07

Hi,

May I know for which op you have written the UDO file?

In most cases there should be no issues when converting from ONNX. Also, can you share the command you have tried?

Thanks

alexandre.kutner
Join Date: 29 Jan 24
Posts: 1
Posted: Fri, 2024-03-22 03:26

Hi, I have exactly the same problem.
 
The operation is Softplus
 
Here is the information about the Softplus operation, as displayed by netron.app:
 
NODE PROPERTIES
type : Softplus
module : ai.onnx v1
name : /seed_bin_regressor/_net/_net.3/Softplus
INPUTS
X : name: /seed_bin_regressor/_net/_net.2/Conv_output_0
OUTPUTS
Y : name: /seed_bin_regressor/_net/_net.3/Softplus_output_0
 
This is my Softplus.json file:
 
{
    "UdoPackage_0":
    {
        "Operators": [
            {
            "type": "Softplus",
                "inputs":[
                    {"name":"input", "per_core_data_types": {"CPU":"FLOAT_32", "GPU":"FLOAT_32", "DSP":"UINT_8"}}
                ],
                "outputs":[
                    {"name":"output", "per_core_data_types": {"CPU":"FLOAT_32", "GPU":"FLOAT_32", "DSP":"UINT_8"}}
                ],
                "core_types": ["CPU", "GPU", "DSP"],
                "dsp_arch_types": ["v73"]
            }
        ],
        "UDO_PACKAGE_NAME": "SoftplusUdoPackage"
    }
}
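 
(For context, the SoftplusUdoPackage directory layout, including the config path used below, was produced from this JSON with the SDK's UDO package generator, roughly as follows; the exact paths are my own setup.)
 
./2.20.0.240223/bin/x86_64-linux-clang/snpe-udo-package-generator \
-p Softplus.json \
-o Udo/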
 
And this is the command I use:
 
./2.20.0.240223/bin/x86_64-linux-clang/snpe-onnx-to-dlc \
--input_network Model/Midas_dept_test/midas_dpt_swin2_tiny_256.onnx \
--output_path Model/Midas_dept_test/midas_dpt_swin2_tiny_256_udo.dlc \
-d 'input' 1,3,256,256 \
--udo_config_paths Udo/SoftplusUdoPackage/config/Softplus.json
 
Error: 
2024-03-21 16:10:43,666 - 240 - WARNING - ONNX_CUSTOM_OP_INFER_SHAPES: Could not infer shapes for all model tensors. This may cause issues during conversion
(the warning above is repeated five more times)
2024-03-21 16:10:46,150 - 240 - WARNING - Can't simplify the model when custom ops or quantization overrides are specified, converting without simplification.
2024-03-21 16:10:46,196 - 240 - WARNING - Symbolic shape inference Failed. Exception: Onnxruntime package not found in current environment. Symbolic Shape Inference will be skipped.. Running normal shape inference.
.........
.........
.........
Traceback (most recent call last):
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/onnx/onnx_to_ir.py", line 355, in convert
    node = self.translations.apply_method_to_op(src_type,
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 51, in apply_method_to_op
    return translation.apply_method(method_name, *args, **kwargs)
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 18, in apply_method
    return self.indexed_methods[method_name](*args, **kwargs)
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/onnx/custom_op_translations.py", line 117, in add_op
    node = graph.add(op, input_names, output_names)
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/common/converter_ir/op_graph.py", line 791, in add
    output_shapes = op.infer_shape(input_shapes, input_axis_formats, len(output_names), self.src_axis_order)
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/common/converter_ir/op_adapter.py", line 1295, in infer_shape
    return self.infer_shape_c_op_wrapper(input_shapes, input_axis_formats, num_outputs, axis_order)
  File "/opt/qcom/aistack/snpe/2.20.0.240223/lib/python/qti/aisw/converters/common/converter_ir/op_adapter.py", line 259, in infer_shape_c_op_wrapper
    return self.c_op.infer_output_shapes(AxisOrders.python_to_c_axis_orders(axis_order), num_outputs)
RuntimeError: inferOutputShapes: Please provide the op package library
2024-03-21 16:10:52,248 - 230 - ERROR - Node /seed_bin_regressor/_net/_net.3/Softplus: inferOutputShapes: Please provide the op package library
 
 
I also tried this:
 
./2.20.0.240223/bin/x86_64-linux-clang/snpe-onnx-to-dlc \
--input_network Model/Midas_dept_test/midas_dpt_swin2_tiny_256.onnx \
--output_path Model/Midas_dept_test/midas_dpt_swin2_tiny_256_udo.dlc \
-d 'input' 1,3,256,256 \
--udo_config_paths Udo/SoftplusUdoPackage/config/Softplus.json \
--op_package_lib Udo/SoftplusUdoPackage/libs/x86-64_linux_clang/libUdoSoftplusUdoPackageReg.so
 
 
Same error.
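 
For reference, I believe the variant with the converter library that zhoubobo18 mentioned would look something like this (I am not sure which .so from my package is the right one to pass, so the library name below is a placeholder):
 
./2.20.0.240223/bin/x86_64-linux-clang/snpe-onnx-to-dlc \
--input_network Model/Midas_dept_test/midas_dpt_swin2_tiny_256.onnx \
--output_path Model/Midas_dept_test/midas_dpt_swin2_tiny_256_udo.dlc \
-d 'input' 1,3,256,256 \
--udo_config_paths Udo/SoftplusUdoPackage/config/Softplus.json \
--converter_op_package_lib Udo/SoftplusUdoPackage/libs/x86-64_linux_clang/<converter-op-package>.so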
 
 
Thanks in advance. Do not hesitate to ask me for additional information.