Forums - qnn2.10.0 conv1d with group parameter, executed in fp16 precision on the SA8295 htp backend, resulted in an error.

xiangtao.gu
Join Date: 25 Oct 21
Posts: 6
Posted: Fri, 2023-07-07 02:15

I generated a PyTorch model containing conv1d layers with various shapes and group parameters, and exported it in both TorchScript and ONNX formats. However, when I converted the model to QNN format with fp16 precision and ran it on the HTP backend, the majority of the results were incorrect: most rows of the output were filled with zeros. The conv1d layers had been converted to DepthwiseConv2d.

I tried expanding the groups and using standard conv1d layers instead, which produced correct outputs. However, in my scenario the channel count is too large: this requires a significant number of conv1d layers, which hurts execution efficiency.
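For reference, a minimal sketch (not from the original post) of the workaround described above: replacing one grouped (depthwise) Conv1d with per-channel standard Conv1d layers that share its weights. The channel count is reduced here purely for illustration; in the reported scenario there would be one layer per channel, hence the efficiency concern.

```python
import torch
import torch.nn as nn

channels, kernel, pad = 8, 21, 10  # small channel count for illustration

# Depthwise conv1d (groups == channels), the form that gets mapped to
# DepthwiseConv2d during QNN conversion.
depthwise = nn.Conv1d(channels, channels, kernel, padding=pad,
                      groups=channels, bias=False)

# Equivalent stack of standard (groups=1) single-channel conv1d layers,
# one per input channel, copying the corresponding depthwise weights.
split = nn.ModuleList(
    nn.Conv1d(1, 1, kernel, padding=pad, bias=False) for _ in range(channels)
)
with torch.no_grad():
    for c, conv in enumerate(split):
        conv.weight.copy_(depthwise.weight[c:c + 1])

x = torch.ones(1, channels, 40)
ref = depthwise(x)
out = torch.cat([conv(x[:, c:c + 1]) for c, conv in enumerate(split)], dim=1)
print(torch.allclose(ref, out, atol=1e-6))  # the two forms agree numerically
```

The split avoids the grouped-conv code path at the cost of `channels` separate layers.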

weihuan
Join Date: 12 Apr 20
Posts: 270
Posted: Sat, 2023-07-08 08:21

Dear developer,

What's the QNN version you used for your model?

Per my understanding, QNN already supports deconv.

BR.
Wei

xiangtao.gu
Join Date: 25 Oct 21
Posts: 6
Posted: Tue, 2023-07-11 00:34

With the model below, the problem can be reliably reproduced. The kernel size is 21, and the input is a tensor filled with ones.

version: v2.10.0.230425122932_54038


import torch
import torch.nn as nn
import numpy as np

torch.backends.cudnn.enabled = False

class test_conv1d(nn.Module):
    def __init__(self):
        super().__init__()
        # Depthwise conv1d: in_channels == out_channels == groups == 256
        self.conv1d = nn.Conv1d(256, 256, 21, padding=10, groups=256, bias=False)

    def forward(self, feature):
        out = self.conv1d(feature)
        return out

torch.set_printoptions(threshold=np.inf, sci_mode=False, linewidth=1000000)

model = test_conv1d()
model.to(torch.device('cpu'))
model.eval()

# All-ones input: batch 1, 256 channels, sequence length 40
feature = torch.ones(1, 256, 40)
out = model(feature)
print('input:', feature)
print('out shape:', out.shape)
print('out value:', out)

# Trace and save the TorchScript model used for QNN conversion
scripted_model = torch.jit.trace(model, (feature,))
scripted_model.save("test_conv1d.pt")
 
