Forums - 16-bit weight quantization not supported

16-bit weight quantization not supported
gershon
Join Date: 24 Jul 22
Posts: 5
Posted: Sun, 2023-07-30 02:15

The snpe-dlc-quantize documentation claims support for 16-bit weight quantization.

However, snpe-dlc-quantize fails when run with the --weights_bitwidth=16 parameter:

gershon@demolap3:~/Downloads/playground/model_for_qc_26_07_23$ which snpe-dlc-quantize
/opt/qcom/aistack/snpe/2.12.0.230626//bin/x86_64-linux-clang/snpe-dlc-quantize
gershon@demolap3:~/Downloads/playground/model_for_qc_26_07_23$ snpe-dlc-quantize --input_dlc=model.dlc --input_list=input_list.txt --act_bitwidth=16 --weights_bitwidth=16
[INFO] InitializeStderr: DebugLog initialized.
[INFO] Processed command-line arguments
[ERROR] IrQuantizer: Unsupported weight bitwidth: 16
[INFO] DebugLog shutting down.
yunxqin
Join Date: 2 Mar 23
Posts: 44
Posted: Tue, 2023-08-01 02:07
Dear developer,
8w/16a (8-bit weights with 16-bit activations) is the only 16-bit configuration currently supported, and only on the HTA.
BR.
Yunxiang
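
Based on the reply above, a possible workaround is to keep weights at 8 bits and raise only the activation bitwidth to 16. A sketch of the corresponding invocation, reusing the model and input list from the original post (whether this runs on your target depends on HTA support, as noted above):

```shell
# 8-bit weights / 16-bit activations (8w/16a) - reportedly the only
# supported 16-bit configuration, and only on the HTA
snpe-dlc-quantize --input_dlc=model.dlc \
                  --input_list=input_list.txt \
                  --act_bitwidth=16 \
                  --weights_bitwidth=8
```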
  • Up0
  • Down0