Forums - Why are half-precision floating point (FP16) operations not supported by DSP(NPU)?

Why are half-precision floating point (FP16) operations not supported by DSP(NPU)?
yanzehang
Join Date: 17 Feb 19
Posts: 1
Posted: Thu, 2020-04-30 00:05

In many applications, such as HDR and the night mode of smartphone cameras, deep learning methods are preferred over traditional approaches. However, model quantization, which is necessary when using the DSP (NPU) runtime, inevitably introduces undesirable image artifacts because of precision loss.
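For reference, here is a minimal sketch of the kind of precision loss I mean. It uses a generic 8-bit affine quantizer written in NumPy (not SNPE's actual quantization scheme; the tensor and value range are made up) and compares the round-trip error of int8 quantization against a plain FP16 cast:

```python
import numpy as np

# Illustrative only: generic 8-bit affine quantization vs. FP16 rounding
# for a synthetic activation tensor. Not SNPE's actual quantizer.

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10000).astype(np.float32)

# int8 affine quantization: q = round(x / scale) + zero_point
qmin, qmax = -128, 127
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(round(qmin - x.min() / scale))
q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
x_int8 = (q - zero_point) * scale          # dequantized values

# FP16 is just a cast: ~3 decimal digits of relative precision are kept
x_fp16 = x.astype(np.float16).astype(np.float32)

print("max abs error, int8:", np.abs(x - x_int8).max())
print("max abs error, fp16:", np.abs(x - x_fp16).max())
```

On data like this the int8 round-trip error comes out noticeably larger than the FP16 rounding error, and in image pipelines that gap can surface as banding or other artifacts.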

As far as I know, Huawei's NPU in the Kirin 990 supports both half-precision floating point operations and integer calculations. Why does Qualcomm prefer int8 over fp16? Is there any solution for supporting fp16 operations on the DSP (NPU)? Looking forward to your replies. Thank you.

 

