Snapdragon Neural Processing Engine SDK
Reference Guide
This example shows how to import a pretrained VGG model from the ONNX framework and run inference with the SNPE SDK. We will perform the following steps:
First, set up the ONNX environment:
cd $SNPE_ROOT
source bin/envsetup.sh -o $ONNX_DIR

where $ONNX_DIR is the path to the ONNX installation.
The script sets up the following environment variables.
SNPE_ROOT: root directory of the SNPE SDK installation
ONNX_HOME: root directory of the ONNX installation provided
The script also updates PATH, LD_LIBRARY_PATH, and PYTHONPATH.
You should be able to run snpe-onnx-to-dlc -h without error if the environment is set correctly.
Download the VGG pretrained model
Download the pretrained ONNX VGG16 model:
cd $SNPE_ROOT/models/VGG
wget https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16/vgg16.onnx
You can find more information about the ONNX VGG model in the ONNX Model Zoo.
Download a sample image and the label file for the model.
cd $SNPE_ROOT/models/VGG/data
wget https://s3.amazonaws.com/model-server/inputs/kitten.jpg
wget https://s3.amazonaws.com/onnx-model-zoo/synset.txt
The input image can be of any size; the preprocessing step below converts it to the dimensions the model expects.
Preprocess the image and convert it into a raw file.
cd $SNPE_ROOT/models/VGG/
python scripts/create_VGG_raws.py -i data/ -d data/cropped/
If you see the following message, the image was preprocessed successfully:
Preprocessed successfully!
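The create_VGG_raws.py script ships with the SDK. As a rough illustration of what such a step typically does (the actual script may differ), the sketch below center-crops an image array to the model's 224x224 input, normalizes it with the standard ImageNet mean and standard deviation, and writes the float32 bytes that SNPE consumes. The function names and normalization constants here are assumptions, not taken from the SDK script.

```python
import numpy as np

# Standard ImageNet normalization constants (an assumption; the SDK's
# create_VGG_raws.py may use different values).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def preprocess(img: np.ndarray) -> np.ndarray:
    """Center-crop an HxWx3 uint8 RGB image to 224x224 and normalize."""
    h, w, _ = img.shape
    top, left = (h - 224) // 2, (w - 224) // 2
    crop = img[top:top + 224, left:left + 224].astype(np.float32) / 255.0
    return (crop - MEAN) / STD  # shape (224, 224, 3), float32


def save_raw(img: np.ndarray, raw_path: str) -> None:
    """Dump the preprocessed tensor as the raw float32 file SNPE reads."""
    preprocess(img).tofile(raw_path)
```

Note the tensor is kept in HWC (height, width, channel) order, matching the 1x224x224x3 input dimensions reported by snpe-dlc-info below.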
Convert the ONNX model into SNPE DLC format.
cd $SNPE_ROOT/models/VGG
snpe-onnx-to-dlc -i onnx/vgg16.onnx -o dlc/vgg16.dlc
You should see the following messages:
INFO - INFO_DLC_SAVE_LOCATION: Saving model at dlc/vgg16.dlc
INFO - INFO_CONVERSION_SUCCESS: Conversion completed successfully
Alternatively, the setup_VGG.py script can prepare the VGG assets for you:

usage: $SNPE_ROOT/models/VGG/scripts/setup_VGG.py [-h] -a ASSETS_DIR [-d]

Prepares the VGG assets for tutorial examples.

required arguments:
  -a ASSETS_DIR, --assets_dir ASSETS_DIR
                        directory containing the VGG assets

optional arguments:
  -d, --download        Download VGG assets to VGG example directory
View your DLC model using snpe-dlc-info. Execute:

snpe-dlc-info -i dlc/vgg16.dlc

You will see detailed information for each layer.
Here is a quick snapshot:
DLC info for: $SNPE_ROOT/models/VGG/dlc/vgg16.dlc
Model Version: N/A
Model Copyright: N/A
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Name            | Type            | Inputs             | Outputs         | Out Dims     | Runtimes | Parameters                       |
------------------------------------------------------------------------------------------------------------------------------------------
| 0  | data            | data            | data               | data            | 1x224x224x3  | A D G C  | input_preprocessing: passthrough |
|    |                 |                 |                    |                 |              |          | input_type: default              |
| 1  | vgg0_conv0_fwd  | convolutional   | data               | vgg0_conv0_fwd  | 1x224x224x64 | A D G C  | padding x: 1                     |
|    |                 |                 |                    |                 |              |          | padding y: 1                     |
|    |                 |                 |                    |                 |              |          | padding mode: zero               |
|    |                 |                 |                    |                 |              |          | stride x: 1                      |
|    |                 |                 |                    |                 |              |          | stride y: 1                      |
|    |                 |                 |                    |                 |              |          | num filters: 64                  |
|    |                 |                 |                    |                 |              |          | kernel: 3x3                      |
|    |                 |                 |                    |                 |              |          | param count: 1k (0.0013%)        |
|    |                 |                 |                    |                 |              |          | MACs per inference: 86M (0.56%)  |
| 2  | vgg0_relu0_fwd  | neuron          | vgg0_conv0_fwd     | vgg0_relu0_fwd  | 1x224x224x64 | A D G C  | a: 0                             |
|    |                 |                 |                    |                 |              |          | b: 0                             |
|    |                 |                 |                    |                 |              |          | min_clamp: 0                     |
|    |                 |                 |                    |                 |              |          | max_clamp: 0                     |
|    |                 |                 |                    |                 |              |          | func: relu                       |
...
| 39 | flatten_70      | reshape         | vgg0_dropout1_fwd  | flatten_70      | 1x4096       | A D G C  |                                  |
| 40 | vgg0_dense2_fwd | fully_connected | flatten_70         | vgg0_dense2_fwd | 1x1000       | A D G C  | param count: 4M (2.96%)          |
|    |                 |                 |                    |                 |              |          | MACs per inference: 4M (0.0265%) |
------------------------------------------------------------------------------------------------------------------------------------------
Note: The supported runtimes column assumes a processor target of Snapdragon 835 (8998)
Key : A:AIP D:DSP G:GPU C:CPU
Total parameters: 138348355 (527 MB assuming single precision float)
Total MACs per inference: 15471M (100%)
Converter command: snpe-onnx-to-dlc input_encoding=[] copyright_file=None disable_batchnorm_folding=False dry_run=None model_version=None input_type=[] validation_target=[] debug=-1 enable_strict_validation=False
DLC created with converter version: X.Y.Z
Layers used by DLC: CONVOLUTIONAL, DATA, FULLY_CONNECTED, NEURON, PERMUTE, POOLING, RESHAPE
Est. Steady-State Memory Needed to Run: 965.9 MiB
This tool shows the name, dimensions, and important parameters of each layer, as well as the runtimes on which each layer can execute.
Run inference: snpe-net-run loads a DLC file, loads the data for the input tensor(s), and executes the network on the specified runtime.
cd $SNPE_ROOT/models/VGG/data
snpe-net-run --input_list raw_list.txt --container ../dlc/vgg16.dlc --output_dir ../output
You will see the following:
-------------------------------------------------------------------------------
Model String: N/A
SNPE vX.Y.Z
-------------------------------------------------------------------------------
Processing DNN input(s):
kitten.raw
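The raw_list.txt passed to --input_list is a plain text file with one preprocessed .raw file path per line. If you need to regenerate it, a minimal helper such as the following will do (the function name is ours, not part of the SDK; whether the paths should be relative or absolute depends on the directory from which you invoke snpe-net-run):

```python
import os


def write_raw_list(raw_dir: str, list_path: str) -> None:
    """Write one .raw file path per line, as --input_list expects."""
    raws = sorted(f for f in os.listdir(raw_dir) if f.endswith(".raw"))
    with open(list_path, "w") as fh:
        fh.writelines(os.path.join(raw_dir, name) + "\n" for name in raws)
```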
Postprocess the result for prediction
cd $SNPE_ROOT/models/VGG/
python scripts/show_vgg_classifications.py -i data/raw_list.txt -o output/ -l data/synset.txt
You will see the following output, which means the example ran successfully:
Classification results
probability=0.351833 ; class=n02123045 tabby, tabby cat
probability=0.315166 ; class=n02123159 tiger cat
probability=0.313086 ; class=n02124075 Egyptian cat
probability=0.012995 ; class=n02127052 lynx, catamount
probability=0.003528 ; class=n02129604 tiger, Panthera tigris
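The show_vgg_classifications.py script ships with the SDK. Conceptually, its postprocessing reduces to reading the 1000-element float32 output tensor that snpe-net-run writes and pairing the largest probabilities with the corresponding lines of synset.txt. The sketch below illustrates that idea; the function name and the exact output file layout are assumptions, not the SDK's implementation.

```python
import numpy as np


def top_k(raw_path: str, synset_path: str, k: int = 5):
    """Pair the k largest output probabilities with their synset labels."""
    probs = np.fromfile(raw_path, dtype=np.float32)  # one score per class
    with open(synset_path) as fh:
        labels = [line.strip() for line in fh]
    order = np.argsort(probs)[::-1][:k]  # indices of the k largest scores
    return [(float(probs[i]), labels[i]) for i in order]
```

For the kitten image above, the first returned pair would correspond to the "tabby, tabby cat" line of synset.txt.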