Hello,
I have a TensorFlow v1.6 model that I can successfully convert to a DLC model (although some of its layers lack GPU support). Quantizing the DLC model also appears to work fine. However, when I push the quantized model to the Snapdragon device, the device goes offline. This happens every time I push this particular model, regardless of the target file location. I've successfully pushed other quantized models of similar size to the same device, so the problem appears to be specific to this model. Any idea what might be going on? Note that I can run the non-quantized DLC model on the DSP using runtime quantization.
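For reference, here is roughly the workflow I'm following. This is a sketch from memory using SNPE 1.x-era tool names; the file names, input node name, and input shape below are placeholders, not my actual values, and exact flag spellings may differ slightly by SDK version.

```
# Convert the frozen TF 1.6 graph to a DLC model
# (input/output node names and shape are placeholders).
snpe-tensorflow-to-dlc --graph model.pb \
                       --input_dim input "1,224,224,3" \
                       --out_node output \
                       --dlc model.dlc

# Quantize the DLC offline, using a list of representative raw inputs.
snpe-dlc-quantize --input_dlc model.dlc \
                  --input_list input_list.txt \
                  --output_dlc model_quantized.dlc

# Push the quantized model to the device -- this is the step
# where the device goes offline.
adb push model_quantized.dlc /data/local/tmp/

# The non-quantized DLC runs fine on the DSP with runtime quantization:
snpe-net-run --container model.dlc \
             --input_list input_list.txt \
             --use_dsp
```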
Thank you,
Emily