snpe-pytorch-to-dlc needs --output_layout flag; unusable on GPU otherwise

snpe-pytorch-to-dlc needs --output_layout flag; unusable on GPU otherwise
gershon
Join Date: 24 Jul 22
Posts: 5
Posted: Mon, 2022-08-08 02:03

snpe-pytorch-to-dlc inserts an additional permute layer before each output node. This is done to convert from the DLC NHWC layout to the PyTorch NCHW layout. If I actually want to get the output as NHWC, there is no way to do it. This is a real issue, because the resulting NCHW tensors are not usable with the GPU runtime:

/home/gershon/git/snapdragon-poc/tools/snpe/lib/python/qti/aisw/converters/backend/ir_to_dlc.py:1049: RuntimeWarning: info_code=802; message=Layer parameter value is invalid in GPU. Layer permute_1 : output width = 266, depth = 476 width * depth (packed) = 31654 exceeds maximum image width 16384 for Adreno A650; component=GPU Runtime; line_no=1095; thread_id=140190701469888
 
In other words, as long as this bug is not fixed, we cannot run models produced by snpe-pytorch-to-dlc on the GPU.
 
The same problem exists for the input nodes; however, the snpe-pytorch-to-dlc script has an --input_layout argument that can be used to tell the script that an input is already NHWC and should not be permuted.
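For reference, the input-side workaround looks roughly like this (the model file, input name, and dimensions are illustrative; the flag names follow the SNPE converter documentation):

    # Tell the converter the input is already NHWC so no permute is inserted for it
    snpe-pytorch-to-dlc --input_network model.pt \
                        --input_dim 'input' 1,224,224,3 \
                        --input_layout 'input' NHWC \
                        --output_path model.dlc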
 
A similar argument is badly needed for eliminating the output permutes. It would be natural to call it --output_layout. The script should drop (not insert) the permute layer before an output when --output_layout <output-name> NHWC is given.
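To make the request concrete, this is what the proposed usage might look like (note that --output_layout does not exist in the converter today; the output name below is illustrative):

    # Proposed: keep the output in NHWC and drop the trailing permute layer
    snpe-pytorch-to-dlc --input_network model.pt \
                        --input_dim 'input' 1,224,224,3 \
                        --input_layout 'input' NHWC \
                        --output_layout 'output' NHWC \
                        --output_path model.dlc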
sanjjey.a.sanjjey
Join Date: 17 May 22
Posts: 55
Posted: Mon, 2022-08-29 23:42

Hi,

In SNPE, the image must be presented as a tensor in NHWC shape, where channel is the fastest-changing dimension. (Note: this is the default arrangement for SNPE.)

If a tensor layout of NCHW is selected, then the data and/or tensor parameters may need to be reshaped to the SNPE default.
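For example, if the host-side data is produced in NCHW (as a PyTorch pipeline typically does), one way to rearrange it to SNPE's default NHWC layout before writing a raw input file is a simple axis transpose. A minimal sketch with NumPy, with illustrative shapes and file names:

    import numpy as np

    # Illustrative NCHW batch, e.g. straight out of a PyTorch preprocessing pipeline
    nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Reorder axes N,C,H,W -> N,H,W,C (SNPE's default layout)
    nhwc = np.transpose(nchw, (0, 2, 3, 1))

    # Write the channel-last buffer as a raw float32 file for the SNPE runtime
    nhwc.tofile("input.raw")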

Thanks.

