Bringing up TensorFlow frameworks on the Qualcomm Neural Processing SDK for AI
Using MobileNet SSD model for object detection
It is necessary to convert TensorFlow-based models to the .dlc (Deep Learning Container) format supported by the Snapdragon® Mobile Platform before running them on the Qualcomm® Neural Processing SDK for AI. To understand some of the issues around that conversion process, consider the MobileNet SSD architecture.
MobileNet SSD and its layers
MobileNet SSD (Single Shot MultiBox Detector) is a small, lightweight neural network architecture for object detection. It has already been implemented in both TensorFlow and Caffe. Understanding the layers and other features of each framework implementation is useful when running these models on the Qualcomm Neural Processing SDK.
The Qualcomm® Neural Processing Engine (NPE) in the SDK supports a number of network layer types on the CPU, Qualcomm® Adreno™ GPU and Qualcomm® Hexagon™ DSP. Layers that are part of the TensorFlow or Caffe frameworks but are not supported by SNPE will cause errors during conversion to the .dlc format.
Bringing up MobileNet SSD on TensorFlow using SNPE
To convert TensorFlow-trained models, SNPE requires three pieces of information:
- Input layer name
- Output layer name
- Input shape
This table provides an example:
Layers | Shape | Model Size |
---|---|---|
Input layer: Preprocessor/sub | 1x300x300x3 | 29.1 MB |
Output layers: 1. detection_classes 2. detection_boxes 3. detection_scores | 1. 1x100 2. 1x100 3. 1x100 | |
Follow these steps to convert a TensorFlow MobileNet SSD pre-trained model to run on SNPE:
1. Run the following commands in the terminal to download and extract the model:
$ wget
$ tar xzvf ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz
2. Convert the model to .dlc format, passing the input layer name, the output layer names and the input shape:
$ snpe-tensorflow-to-dlc --graph
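As a sketch, a full invocation might look like the following. It assumes the SNPE 1.x converter flags (`--graph`, `--input_dim`, `--out_node`, `--dlc`, `--allow_unconsumed_nodes`), that the extracted archive contains a `frozen_inference_graph.pb`, and the input/output names from the table above; the output file name `mobilenet_ssd.dlc` is arbitrary. Check `snpe-tensorflow-to-dlc --help` for the exact options in your SDK version.

```shell
# Hedged example: flag names and file paths assume an SNPE 1.x setup.
$ snpe-tensorflow-to-dlc \
    --graph ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/frozen_inference_graph.pb \
    --input_dim Preprocessor/sub "1,300,300,3" \
    --out_node detection_classes \
    --out_node detection_boxes \
    --out_node detection_scores \
    --dlc mobilenet_ssd.dlc \
    --allow_unconsumed_nodes
```

Repeating `--out_node` once per output lets the converter keep all three detection tensors; `--allow_unconsumed_nodes` tells it to tolerate graph nodes (such as pre/post-processing ops) that are not consumed by the listed outputs.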
Snapdragon Processing Engine, Snapdragon, Qualcomm Adreno, Qualcomm Hexagon and Qualcomm Neural Processing SDK for AI are products of Qualcomm Technologies, Inc. and/or its subsidiaries.