Hi.
I am trying to convert a neural network from ONNX to DLC so that it takes a batch of size 2 as input.
The original input is 1x128x256x3 and I want to convert it so that the input will be 2x128x256x3.
The original output is 1x512 and the new output should be 2x512.
For a given image, I executed the original network and got a feature vector. I then stacked the image twice along the batch dimension and passed it
as input to the new network. The two output feature vectors are different. I expected both feature vectors from the new network to be
the same as the feature vector from the original network.
Should it really be the same feature vectors?
Thanks!
Hi all,
I am also concerned about this issue.
My device is a Xiaomi 11 with a Snapdragon 888. The Android app provided with the SNPE examples can run the model on the DSP or AIP, but I cannot find related information in the SNPE release notes.
The test steps look good. Yes, you should get the same output.
I suspect the issue comes from the HTP core. In the graph-optimization stage, the compiler may mishandle certain ops.
As far as I know, there are two known bugs for batch > 1:
For the v66 arch (8250, 8150), a known bug in the Resize optimization; the issue is fixed in the latest SNPE SDK.
For v68 (888, 8 Gen 1/1+), a known bug with BERT-like models at batch > 1; the issue is fixed in the latest SNPE SDK.
There may be other corner cases for batch > 1; we would need to look into the model.