I trained a model with 'slim.batch_norm', and when I converted the .pb file to DLC, the following error occurred. Does anybody know why?
2018-06-12 18:12:27,304 - 126 - ERROR - Encountered Error: operands could not be broadcast together with shapes (0,) (64,)
Traceback (most recent call last):
File "/home/user/Android/snpe-1.15.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 120, in main
converter.convert(args.dlc, args.model_version, converter_command)
File "/home/user/Android/snpe-1.15.0/lib/python/converters/tensorflow/converter.py", line 304, in convert
self._convert_layers()
File "/home/user/Android/snpe-1.15.0/lib/python/converters/tensorflow/converter.py", line 340, in _convert_layers
descriptors = self._resolve_descriptors_from_nodes(graph_ops)
File "/home/user/Android/snpe-1.15.0/lib/python/converters/tensorflow/converter.py", line 439, in _resolve_descriptors_from_nodes
resolved_descriptors = resolver.resolve_layer(graph_matcher, self._graph_helper)
File "/home/user/Android/snpe-1.15.0/lib/python/converters/tensorflow/layers/batchnorm.py", line 312, in resolve_layer
beta=beta))
File "/home/user/Android/snpe-1.15.0/lib/python/converters/tensorflow/layers/batchnorm.py", line 55, in __init__
scaled_stddev = stddev * scale
ValueError: operands could not be broadcast together with shapes (0,) (64,)
Hi, I had the exact same issue with my own model. Make sure you supply training=False to all your batch norm layers. Sadly, I'm now facing a different issue where the conversion takes forever, exhausts all my PC's resources, and eventually fails.
Hi guys,
Why should I set training=False when I use the batch norm layers? And have you solved the problem in the end? Looking forward to your reply, thanks a lot.
Hi,
The batch normalization layer (slim.batch_norm) you are using is not supported in SNPE.
Can you try tf.layers.batch_normalization, which is an SNPE-supported layer?
For more details on the usage of the tf.layers.batch_normalization layer, click here.
For details on supported layers, kindly check the SNPE Supported Layers documentation.