Forums - Quantized Mobilenet dlc conversion error

Quantized Mobilenet dlc conversion error
ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Tue, 2018-03-13 21:13

Hi All,

I was trying to convert a TensorFlow 8-bit (TF8) quantized MobileNet model to DLC using SNPE.

But I ran into the following error:

"978 - 123 - ERROR - Conversion failed: ERROR_TF_CONV_RESOLVE_WEIGHTS: Cannot resolve convolution layer due to missing weights for operation: MobilenetV1/MobilenetV1/Conv2d_9_depthwise/depthwise
terminate called after throwing an instance of 'std::runtime_error"
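
The DLC conversion command was along these lines (a sketch only; the exact snpe-tensorflow-to-dlc flags may differ by SNPE version):

# Attempted conversion of the quantized graph to DLC
snpe-tensorflow-to-dlc --graph frozen_mobilenet_v1_1.0_224_quantized.pb \
    --input_dim input 1,224,224,3 \
    --out_node MobilenetV1/Predictions/Reshape_1 \
    --dlc mobilenet_v1_quantized.dlc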

I used the following command to quantize my frozen_graph.pb:

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
    --in_graph=frozen_mobilenet_v1_1.0_224.pb \
    --out_graph=frozen_mobilenet_v1_1.0_224_quantized.pb \
    --inputs='input' \
    --outputs='MobilenetV1/Predictions/Reshape_1' \
    --transforms='
        add_default_attributes
        strip_unused_nodes(type=float, shape="1,224,224,3")
        remove_nodes(op=Identity, op=CheckNumerics, op=Squeeze)
        fold_constants(ignore_errors=true)
        fold_batch_norms
        fold_old_batch_norms
        quantize_weights
        quantize_nodes
        strip_unused_nodes
        sort_by_execution_order'

 

Also, when I use the non-quantized model (i.e., frozen_graph.pb), I am able to convert it to DLC.

However, on the DSP runtime the inference results are wrong, whereas the CPU and GPU runtimes give good inference results.

If anyone else has encountered this issue and managed to solve it, please help me resolve it.

Thanks and Regards

Vishal

 

jesliger
Join Date: 6 Aug 13
Posts: 75
Posted: Wed, 2018-03-14 04:45

SNPE doesn't support models quantized by the TensorFlow tools. You can use the float version. If you want to quantize the model, use snpe-dlc-quantize after you have converted it to DLC.
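
A minimal sketch of that workflow (the file names and image_list.txt are placeholders; check the documentation of your SNPE version for the exact flags):

# 1. Convert the float (non-quantized) frozen graph to DLC
snpe-tensorflow-to-dlc --graph frozen_mobilenet_v1_1.0_224.pb \
    --input_dim input 1,224,224,3 \
    --out_node MobilenetV1/Predictions/Reshape_1 \
    --dlc mobilenet_v1.dlc

# 2. Quantize the DLC using a list of representative raw input files
snpe-dlc-quantize --input_dlc mobilenet_v1.dlc \
    --input_list image_list.txt \
    --output_dlc mobilenet_v1_quantized.dlc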

The DSP supports the operations in MobileNet. However, due to the nature of the network, it doesn't perform well (accuracy-wise) when quantized.

This happens in TensorFlow as well; it's not SNPE DSP specific. Quantize it fully in TensorFlow (be sure to quantize everything: the weights, the ops, and the activations), run it in TensorFlow, and you'll see it gets incorrect results.

There is ongoing research on this topic and some published papers, with more coming. As it stands, you cannot run MobileNet on any quantized runtime without retraining it to be quantization friendly.

 

ghorpadevish
Join Date: 20 Mar 17
Posts: 12
Posted: Wed, 2018-03-14 23:01

Hi Jesliger,

Thank you for the update.

Can you share the details of how to get a quantization-friendly MobileNet, and the steps you followed to get results on the DSP runtime?

It would be helpful.

Thanks and Regards

