This is a beginner-level question.
I just started using SNPE and am trying to use a PyTorch model in place of the original bvlc_alexnet.dlc.
But I found that the PyTorch model needs an input shape of B, C, H, W, whereas SNPE needs it to be B, H, W, C. The tutorial uses:
snpe-pytorch-to-dlc --input_network resnet18.pt --input_dim input "1,3,224,224" --output_path resnet18.dlc
to convert a PyTorch .pt model to .dlc. However, that is also a B, C, H, W shape, and it leads to an error when running snpe-dlc-run (the command is exactly the same as in the AlexNet example tutorial, except for the location of the .dlc file). So I tried converting with an NHWC shape instead:
snpe-pytorch-to-dlc --input_network resnet18.pt --input_dim input "1,224,224,3" --output_path resnet18.dlc
input_shape = [1, 224, 224, 3]
input_data = torch.randn(input_shape)
script_model = torch.jit.trace(model, input_data)
Dear customer,
Regarding the ONNX conversion and implementation: SNPE processes data in NHWC, but the PyTorch model was trained with NCHW. So you actually need to apply a transpose from NHWC to NCHW when you want to compare the final results or do any other accuracy check.
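A minimal sketch of that transpose with NumPy (the array here is random dummy data standing in for a loaded SNPE output):

```python
import numpy as np

# Suppose the SNPE output was loaded into an array in NHWC order.
snpe_output_nhwc = np.random.rand(1, 4, 4, 3).astype(np.float32)

# Move the channel axis to position 1 so the layout matches PyTorch's NCHW.
snpe_output_nchw = snpe_output_nhwc.transpose(0, 3, 1, 2)

print(snpe_output_nchw.shape)  # (1, 3, 4, 4)
```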
BR.
Wei
Dear wei,
Thank you for replying.
I ran the model from PyTorch on SNPE without errors, but the result is totally different from the result in PyTorch. It seems there is still something wrong with the input dimensions.
What do you mean by, and how can I, "take a transpose from NHWC to NCHW"?
Do you mean to change the shape of the input images before converting them to .raw? I tried that; nothing got better.
Or do you mean to change the dimension order of the PyTorch model itself? How can I do that? Since changing a trained model's dimensions is not supported, can I first convert it to ONNX and then use snpe-onnx-to-dlc?
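In case it helps pin down where the mismatch is, a sketch of the input-side conversion, reordering an NCHW tensor to NHWC before writing the .raw file (the tensor here is random placeholder data, and "input.raw" is a placeholder file name):

```python
import numpy as np

# Dummy image tensor in PyTorch's NCHW layout (1, 3, 224, 224).
img_nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Reorder to NHWC, which is the layout SNPE expects.
img_nhwc = img_nchw.transpose(0, 2, 3, 1)

# SNPE .raw inputs are just the flat float32 bytes of the tensor.
img_nhwc.tofile("input.raw")
```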
Regards.
WZ
Dear customer,
SNPE only recognizes the NHWC data format, but the data format for ONNX is NCHW. So you need to transpose the execution results from NHWC (SNPE) to NCHW (ONNX) so that you can make a further comparison.
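A concrete sketch of that output-side comparison, assuming the SNPE result was saved as a flat float32 .raw blob (the shape and file name here are placeholders; substitute your layer's actual dimensions):

```python
import numpy as np

h, w, c = 7, 7, 16  # example feature-map shape; adjust to your layer

# Simulate an SNPE output blob: a flat float32 buffer in NHWC order.
reference_nhwc = np.random.rand(1, h, w, c).astype(np.float32)
reference_nhwc.tofile("snpe_output.raw")

# Read the flat buffer back, restore the NHWC shape, then transpose to NCHW.
flat = np.fromfile("snpe_output.raw", dtype=np.float32)
restored_nchw = flat.reshape(1, h, w, c).transpose(0, 3, 1, 2)

# restored_nchw can now be compared element-wise with the PyTorch/ONNX output.
print(restored_nchw.shape)  # (1, 16, 7, 7)
```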
BR.
Wei