Hi All,
I was trying to convert an 8-bit quantized TensorFlow MobileNet model to DLC using SNPE, but I ran into the following error:
"978 - 123 - ERROR - Conversion failed: ERROR_TF_CONV_RESOLVE_WEIGHTS: Cannot resolve convolution layer due to missing weights for operation: MobilenetV1/MobilenetV1/Conv2d_9_depthwise/depthwise
terminate called after throwing an instance of 'std::runtime_error'"
I used the following command to quantize my frozen_graph.pb:

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=frozen_mobilenet_v1_1.0_224.pb \
  --out_graph=frozen_mobilenet_v1_1.0_224_quantized.pb \
  --inputs='input' \
  --outputs='MobilenetV1/Predictions/Reshape_1' \
  --transforms='
    add_default_attributes
    strip_unused_nodes(type=float, shape="1,224,224,3")
    remove_nodes(op=Identity, op=CheckNumerics, op=Squeeze)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    quantize_weights
    quantize_nodes
    strip_unused_nodes
    sort_by_execution_order'
Also, when I use the non-quantized model (frozen_graph.pb), I am able to convert it to DLC, but on the DSP runtime the inference results are wrong, whereas the CPU and GPU runtimes give good results.
If anyone else has encountered this issue and managed to solve it, please help me resolve it.
Thanks and Regards
Vishal
SNPE doesn't support models that have been quantized by the TensorFlow tools. You can use the float version as-is. If you want a quantized model, run snpe-dlc-quantize after you have converted the float model to DLC.
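As a rough sketch, the suggested flow looks like this. The tool names come from the SNPE SDK, but the exact flag names vary between SNPE versions, so check them against your SDK's documentation; the input/output node names and dimensions below are taken from the original post, and image_list.txt is a hypothetical file listing preprocessed raw input images, one path per line:

# 1. Convert the float (non-quantized) frozen graph to DLC.
snpe-tensorflow-to-dlc \
  --input_network frozen_mobilenet_v1_1.0_224.pb \
  --input_dim input "1,224,224,3" \
  --out_node MobilenetV1/Predictions/Reshape_1 \
  --output_path mobilenet_v1.dlc

# 2. Quantize the DLC using a small set of representative inputs.
snpe-dlc-quantize \
  --input_dlc mobilenet_v1.dlc \
  --input_list image_list.txt \
  --output_dlc mobilenet_v1_quantized.dlc

The representative inputs matter: snpe-dlc-quantize uses them to compute the activation ranges, so they should cover the kind of data you expect at inference time.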
The DSP supports the operations in Mobilenet. However, due to the nature of the network, it doesn't perform well (accuracy-wise) when quantized.
This happens in TensorFlow as well; it's not specific to the SNPE DSP. Quantize it in TensorFlow (be sure to quantize everything: the weights, the ops, and the activations), run it in TensorFlow, and you'll see it also gets incorrect results.
There is active research on this topic, with some papers already published and more on the way. As it stands, you cannot run MobileNet on any quantized runtime without retraining it to be quantization friendly.
Hi Jesliger,
Thank you for the update.
Can you share the details of obtaining a quantization-friendly MobileNet, and the steps you followed to get correct results on the DSP runtime?
It would be very helpful.
Thanks and Regards