I have built a model in PyTorch containing conv1d layers of different shapes and group parameters, and exported it in both TorchScript and ONNX formats. However, when I converted the model to QNN format with FP16 precision and ran it on the HTP backend, most of the results were incorrect: the majority of the output rows were filled with zeros. During conversion, the grouped conv1d layers were lowered to DepthwiseConv2d.
As a workaround, I tried expanding each grouped conv1d into standard (ungrouped) conv1d layers, which produced correct outputs. However, in my scenario the channel count is large, so this expansion requires a significant number of conv1d layers and noticeably hurts execution efficiency.
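To make the workaround concrete, here is a minimal sketch of what "expanding the groups" means. The channel count, group count, and kernel size are placeholders, not the exact values from the original model: one grouped `nn.Conv1d` is split into `groups` independent standard conv1d layers that each operate on one channel group.

```python
import torch
import torch.nn as nn

class GroupedConvNet(nn.Module):
    # Hypothetical minimal model; channel/group counts are illustrative.
    def __init__(self, channels=64, groups=16, kernel_size=21):
        super().__init__()
        # Grouped conv1d: this is the layer the QNN converter lowers
        # to DepthwiseConv2d and that produces zeros on the HTP backend.
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=groups)

    def forward(self, x):
        return self.conv(x)

def expand_groups(grouped: nn.Conv1d) -> nn.ModuleList:
    """Workaround: replace one grouped conv1d with `groups` standard
    conv1d layers, one per channel group (numerically identical,
    but many more layers, hence slower)."""
    g = grouped.groups
    in_per = grouped.in_channels // g
    out_per = grouped.out_channels // g
    convs = nn.ModuleList()
    for i in range(g):
        c = nn.Conv1d(in_per, out_per, grouped.kernel_size[0],
                      padding=grouped.padding[0],
                      bias=grouped.bias is not None)
        # Grouped conv weight has shape (out_channels, in_channels/g, k);
        # slice out the rows belonging to group i.
        c.weight.data = grouped.weight.data[i * out_per:(i + 1) * out_per]
        if grouped.bias is not None:
            c.bias.data = grouped.bias.data[i * out_per:(i + 1) * out_per]
        convs.append(c)
    return convs

model = GroupedConvNet()
model.eval()
x = torch.ones(1, 64, 100)
with torch.no_grad():
    y_grouped = model(x)
    convs = expand_groups(model.conv)
    chunks = x.chunk(model.conv.groups, dim=1)
    y_expanded = torch.cat([c(ch) for c, ch in zip(convs, chunks)], dim=1)
```

The two paths agree on CPU; the difference only appears after QNN conversion on the HTP backend.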
Dear developer,
Which QNN version did you use for your model?
Per my understanding, QNN already supports deconv.
BR.
Wei
With this model, the problem can be reproduced reliably. The kernel size is 21, and the input is a tensor filled with all ones.
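For reference, the reproduction setup above can be sketched as follows. Only the kernel size (21) and the all-ones input come from this thread; the channel count, input length, and the depthwise group setting are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Minimal reproduction sketch. Kernel size 21 and the all-ones input
# match the report; 32 channels / depthwise grouping are placeholders.
torch.manual_seed(0)
model = nn.Conv1d(in_channels=32, out_channels=32, kernel_size=21,
                  padding=10, groups=32)  # grouped (here: depthwise) conv1d
model.eval()

x = torch.ones(1, 32, 128)  # input tensor filled with all ones
with torch.no_grad():
    reference = model(x)    # CPU reference output (correct)

# Export to TorchScript; the model was also exported to ONNX before
# conversion to QNN format for the HTP backend.
scripted = torch.jit.trace(model, x)
with torch.no_grad():
    traced_out = scripted(x)
```

On CPU the traced model matches the eager reference; the zero-filled outputs only appear after converting this graph to QNN FP16 and running on HTP.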
version: v2.10.0.230425122932_54038