Hi,
I'm trying to convert an ONNX model containing DepthToSpace (which is equivalent to torch.nn.PixelShuffle). snpe-onnx-to-dlc complains that DepthToSpace is not supported. However, the page https://developer.qualcomm.com/sites/default/files/docs/snpe//network_la... says that PixelShuffle is supported, which is the exact equivalent of ONNX DepthToSpace.
Can you please fix the converter to recognize DepthToSpace as the same operation as PixelShuffle?
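For reference, PixelShuffle corresponds to ONNX DepthToSpace with mode="CRD" (the ONNX default mode is "DCR"; the PyTorch exporter emits the CRD variant). A minimal pure-Python sketch of the rearrangement both ops perform, on nested lists rather than tensors:

```python
def pixel_shuffle(x, r):
    """PixelShuffle / ONNX DepthToSpace(mode="CRD") on a nested-list
    tensor of shape (C*r*r, H, W), returning shape (C, H*r, W*r)."""
    cr2, H, W = len(x), len(x[0]), len(x[0][0])
    C = cr2 // (r * r)
    out = [[[None] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for i in range(r):          # sub-pixel row offset
            for j in range(r):      # sub-pixel column offset
                for h in range(H):
                    for w in range(W):
                        # channel (c*r + i)*r + j supplies output pixel (h*r+i, w*r+j)
                        out[c][h * r + i][w * r + j] = x[(c * r + i) * r + j][h][w]
    return out

# four 1x1 channels fold into one 2x2 channel
print(pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2))  # [[[1, 2], [3, 4]]]
```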
Thanks,
Ram
Dear developer,
Thank you for your interest in our products.
Could you give us more information so we can analyze this more deeply?
1. Which SNPE version are you using?
2. Regarding the D2S (DepthToSpace) layer, SNPE supports it now. We will check whether the SNPE ONNX converter supports it in the latest release.
3. Is the D2S layer issue at the model head or tail?
BR.
Wei
More:
Could you please let me know which feature you are working on with the current model?
Thanks.
1. Which SNPE version are you using?
I am using version snpe-1.55.0.2958.
3. Is the D2S layer issue at the model head or tail?
At the model tail.
This is part of a segmentation network.
Thanks.
Would you mind providing your model so we can try it locally?
This should be supported, but we need to check whether there are other issues.
Thanks.
I couldn't find a way to upload the ONNX file. However, I have outlined the steps to reproduce the problem:
1. Use the following code to generate the ONNX file called super_resolution.onnx.
<code>
# Note: the original script was not preserved in this thread; this is a minimal
# equivalent that exports a PixelShuffle model to super_resolution.onnx.
import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    def __init__(self, upscale_factor=3):
        super().__init__()
        # PixelShuffle is exported to ONNX as a DepthToSpace node.
        self.conv = nn.Conv2d(1, upscale_factor ** 2, kernel_size=3, padding=1)
        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        return self.pixel_shuffle(self.conv(x))

model = SuperResolutionNet()
model.eval()
dummy_input = torch.randn(1, 1, 224, 224)
torch.onnx.export(model, dummy_input, "super_resolution.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
</code>
2. Save the above code to a file, say hurr.py.
3. Make sure you have PyTorch installed.
4. Run <code>python hurr.py</code> to produce super_resolution.onnx
5. Run <code>snpe-onnx-to-dlc --input_network super_resolution.onnx --output_path super_resolution.dlc</code> to see the error.
Dear customer,
Thanks for the detailed update.
We will try to reproduce this issue internally.
Thanks.