Quantized model push causes Snapdragon to go offline

emily.dunkel
Join Date: 8 Jan 21
Posts: 2
Posted: Thu, 2021-06-24 15:26

Hello,

I have a TensorFlow 1.6 model that I am able to convert to a DLC model successfully (although some layers do not have GPU support). Converting that DLC model to a quantized model also seems to work fine. But when I push the quantized model to the Snapdragon, the device goes offline. This happens every time I push this model, and I have tried pushing it to several different file locations. I have successfully pushed other quantized models of similar size to my Snapdragon, so the problem appears to be specific to this model. Any idea what might be going on? Note that I am able to run the non-quantized DLC model on the DSP using run-time quantization.
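For reference, my workflow up to the failing step looks roughly like the sketch below (the file names and calibration input list are placeholders rather than my exact commands, and the snpe-dlc-quantize options may differ slightly by SDK version):

    # Rough sketch of the quantize-and-push steps (placeholder paths;
    # assumes the SNPE 1.x command-line tools and adb are on the PATH).
    import subprocess

    def run(cmd):
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # Offline-quantize the already-converted DLC using a calibration input list.
    run(["snpe-dlc-quantize",
         "--input_dlc", "model.dlc",
         "--input_list", "input_list.txt",
         "--output_dlc", "model_quantized.dlc"])

    # Push the quantized model to the device -- this is the step where the
    # device drops off adb for me.
    run(["adb", "push", "model_quantized.dlc", "/data/local/tmp/"])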

Thank you,

Emily

