In many applications, such as HDR and the night mode of cameras, deep learning methods are preferred over traditional ones. However, model quantization, which is necessary when using the DSP (NPU) runtime, inevitably introduces undesirable image artifacts because of precision loss.
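To make the precision-loss concern concrete, here is a minimal NumPy sketch (my own illustration, not tied to any particular SDK) comparing the round-trip error of 8-bit affine quantization against a plain fp16 cast on the same data:

```python
import numpy as np

# Hypothetical illustration: round-trip error of int8 affine quantization
# vs. an fp16 cast, on a small tensor of synthetic activation values.
rng = np.random.default_rng(0)
x = rng.uniform(-6.0, 6.0, size=1000).astype(np.float32)

# int8 affine quantization: map [min, max] onto [0, 255], then dequantize
lo, hi = float(x.min()), float(x.max())
scale = (hi - lo) / 255.0
q = np.clip(np.round((x - lo) / scale), 0, 255).astype(np.uint8)
x_int8 = q.astype(np.float32) * scale + lo

# fp16: cast down to half precision and back up
x_fp16 = x.astype(np.float16).astype(np.float32)

err_int8 = np.abs(x - x_int8).max()
err_fp16 = np.abs(x - x_fp16).max()
print("max int8 round-trip error:", err_int8)
print("max fp16 round-trip error:", err_fp16)
```

For a tensor with this dynamic range, the worst-case int8 error is about half the quantization step (roughly 0.024 here), while fp16 keeps about 11 bits of mantissa and so stays well below that, which is exactly why fp16 support would help with the image artifacts described above.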
As far as I know, Huawei's Kirin 990 NPU supports both half-precision floating-point and integer operations. Why does Qualcomm prefer int8 over fp16? Is there any way to run fp16 operations on the DSP (NPU)? Looking forward to your replies. Thank you.