Snapdragon Neural Processing Engine SDK
Reference Guide
Android Tutorial

Prerequisites

Introduction

This tutorial walks through the process of integrating the SNPE and snpe-platform-validator Java APIs within an Android application.

The SNPE and Platform Validator Java APIs are made available as an Android Archive (AAR) file which application developers include as a dependency of their applications.

Gradle project dependency

allprojects {
    repositories {
        ...
        flatDir {
            // Marks the directory as a repository for
            // dependencies. Place the snpe-release.aar
            // in the directory below.
            dirs 'libs'
        }
    }
}
...
dependencies {
    ...
    // This adds the SNPE SDK as a project dependency
    compile(name: 'snpe-release', ext: 'aar')
    // This adds the Platform Validator tool (optional) as a project dependency
    compile(name: 'platformvalidator-release', ext: 'aar')
}

If both archives are required in the same project, "pickFirst" needs to be used in Gradle to avoid library conflicts.

android {
    ...
    packagingOptions {
        pickFirst 'lib/arm64-v8a/libc++_shared.so'
        pickFirst 'lib/armeabi-v7a/libsnpe_adsp.so'
        pickFirst 'lib/arm64-v8a/libsnpe_dsp_domains_skel.so'
        pickFirst 'lib/armeabi-v7a/libsnpe_dsp_skel.so'
        pickFirst 'lib/armeabi-v7a/libSNPE.so'
        pickFirst 'lib/arm64-v8a/libsnpe_adsp.so'
        pickFirst 'lib/arm64-v8a/libsnpe_dsp_domains.so'
        pickFirst 'lib/armeabi-v7a/libc++_shared.so'
        pickFirst 'lib/arm64-v8a/libsnpe_dsp_skel.so'
        pickFirst 'lib/armeabi-v7a/libsnpe_dsp_domains.so'
        pickFirst 'lib/arm64-v8a/libSNPE.so'
        pickFirst 'lib/armeabi-v7a/libsnpe_dsp_domains_skel.so'
        pickFirst 'lib/armeabi-v7a/libsymphony-cpu.so'
        pickFirst 'lib/arm64-v8a/libsymphony-cpu.so'
    }
}

Platform Validator Java API Overview

Once the optional Platform Validator dependency is added, the Platform Validator classes under the com.qualcomm.qti.platformvalidator package will be available on the application classpath.

Applications first create a PlatformValidator object for the required runtime, then use that object to call the validation APIs described below.

Using Platform Validator

//Platform validator class for object creation
//available runtimes are defined in this class
...
//This creates a platform validator object for the GPU runtime
PlatformValidator pv = new PlatformValidator(PlatformValidatorUtil.Runtime.GPU);
// To check in general whether the runtime is working, use isRuntimeAvailable
boolean available = pv.isRuntimeAvailable(getApplication());
// To check whether the SNPE runtime is working, use runtimeCheck
boolean working = pv.runtimeCheck(getApplication());
//To get the library version use the libVersion api
String libVersion = pv.libVersion(getApplication());
//To get the core version use the coreVersion api
String coreVersion = pv.coreVersion(getApplication());
//List of available runtimes
PlatformValidatorUtil.Runtime.CPU
PlatformValidatorUtil.Runtime.GPU
PlatformValidatorUtil.Runtime.DSP
PlatformValidatorUtil.Runtime.GPU_FLOAT16
PlatformValidatorUtil.Runtime.AIP

SNPE Java API Overview

Once the dependency is added, the SNPE classes under the com.qualcomm.qti.snpe package will be available in the application classpath.

Most applications follow this pattern when using a neural network:

  1. Select the neural network model and runtime target
  2. Create one or more input tensor(s)
  3. Populate one or more input tensor(s) with the network input(s)
  4. Forward propagate the input tensor(s) through the network
  5. Process the network output tensor(s)

The sections below describe how to implement each step described above.

Configuring a Neural Network

The code excerpt below illustrates how to configure and build a neural network using the Java APIs.

final SNPE.NeuralNetworkBuilder builder = new SNPE.NeuralNetworkBuilder(application)
    // Allows selecting a runtime order for the network.
    // In the example below, use DSP and fall back, in order, to GPU then CPU
    // depending on whether any of the runtimes is available.
    .setRuntimeOrder(DSP, GPU, CPU)
    // Loads a model from a DLC file
    .setModel(new File("<model-path>"));
final NeuralNetwork network = builder.build();
...
// Calling release() once the application no longer needs the network instance
// is highly recommended as it releases native resources. Alternatively, the
// resources will be released when the instance is garbage collected.
network.release();

Multiple ways to load a model

The SDK currently supports loading a model from a java.io.File on the Android device or from a java.io.FileInputStream.

Creating an Input Tensor

The code excerpt below illustrates how to create an input tensor and fill it with the input data.

final FloatTensor tensor = network.createFloatTensor(height, width, depth);
float[] input = ...; // input data from the application
// Fills the tensor fully
tensor.write(input, 0, input.length);
// Writes a single value at a specific position
tensor.write(input[0], y, x, z);
// Fills the input tensor map which will be passed to execute()
final Map<String, FloatTensor> inputsMap = new HashMap<>();
inputsMap.put(/*network input name*/, tensor);
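Populating the input array itself is up to the application. As an illustration, the sketch below converts packed ARGB pixel values (the format returned by Android's Bitmap.getPixels) into a float array in height-width-channel order with per-channel mean subtraction, producing a buffer suitable for a single tensor.write call. The class and method names are hypothetical, and the mean values are placeholders, not values prescribed by the SDK.

```java
// Hypothetical preprocessing helper: converts packed ARGB pixels into a
// float array in HWC (height, width, channel) order, subtracting a
// per-channel mean. Plain Java; independent of the SNPE classes.
public final class InputPreprocessor {

    public static float[] pixelsToFloat(int[] pixels, int height, int width,
                                        float meanR, float meanG, float meanB) {
        final float[] out = new float[height * width * 3];
        int i = 0;
        for (int p : pixels) {
            out[i++] = ((p >> 16) & 0xFF) - meanR; // red channel
            out[i++] = ((p >> 8) & 0xFF) - meanG;  // green channel
            out[i++] = (p & 0xFF) - meanB;         // blue channel
        }
        return out;
    }
}
```

The resulting array can be written to the tensor in one call, which, as noted below, is cheaper than writing element by element. The actual channel order and normalization expected depend on how the model was trained and converted.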

Notes about tensors

  • Reuse of input tensors
    Developers are encouraged to re-use the same input tensor instance across multiple calls to NeuralNetwork.execute(..). Tensors are memory-bound types, and creating new instances for every execute call may impact application responsiveness.
  • Batch write to tensor
    Tensors are backed by native memory and writing multiple values at once, if possible, will reduce the overhead of crossing the Java and Native boundaries.
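The per-position write shown earlier maps a (y, x, z) coordinate to a flat offset in the backing buffer. Assuming a row-major (height, width, depth) layout, the index arithmetic looks like the sketch below; this is an illustration of why one batched write is cheaper than many single-element writes, not the SDK's internal implementation.

```java
// Sketch of row-major (height, width, depth) index arithmetic. Each
// single-element write crosses the Java/native boundary once, while a
// batched write crosses it once for the whole array.
public final class TensorLayout {

    // Flat offset of element (y, x, z) in a tensor of shape (height, width, depth).
    public static int flatIndex(int y, int x, int z, int width, int depth) {
        return (y * width + x) * depth + z;
    }
}
```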

Propagate Input Tensors Through the Network

The excerpt of code below shows how to propagate input tensors through the neural network.

final Map<String, FloatTensor> outputsMap = network.execute(inputsMap);
for (Map.Entry<String, FloatTensor> output : outputsMap.entrySet()) {
    // An output tensor for each output layer will be returned.
}

Process the Neural Network Output

The excerpt of code below shows how to read the output tensor of an output layer.

final Map<String, FloatTensor> outputsMap = network.execute(inputsMap);
for (Map.Entry<String, FloatTensor> output : outputsMap.entrySet()) {
    final FloatTensor tensor = output.getValue();
    final float[] values = new float[tensor.getSize()];
    tensor.read(values, 0, values.length);
    // Process the output ...
}
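For a classification model such as the ones in the sample application, "processing the output" typically means finding the highest-scoring classes in the values array read from the output tensor. A minimal, self-contained sketch in plain Java (the class name is hypothetical and independent of the SNPE classes):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Returns the indices of the k largest values in `scores`, highest first.
// For an image classifier, these indices map to entries in the label list.
public final class OutputPostprocessor {

    public static List<Integer> topK(float[] scores, int k) {
        final List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < scores.length; i++) {
            indices.add(i);
        }
        // Sort indices by descending score.
        indices.sort(Comparator.comparingDouble((Integer i) -> scores[i]).reversed());
        return indices.subList(0, Math.min(k, indices.size()));
    }
}
```

The returned indices would then be looked up in whatever label file accompanies the model.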

Release Input and Output Tensors

Re-using tensors is encouraged to reduce application overhead. However, once the application no longer needs the input and/or output tensors, it is highly recommended to call release() on them to free native resources. This is particularly important for multi-threaded applications.

// Release input tensors
for (FloatTensor tensor : inputsMap.values()) {
    tensor.release();
}
// Release output tensors
for (FloatTensor tensor : outputsMap.values()) {
    tensor.release();
}

Android Sample Application

The SNPE Android SDK includes a sample application that showcases the SDK features. The application source code is in:

  • $SNPE_ROOT/examples/android/image-classifiers

Here is a screenshot of the sample:

[Screenshot of the sample application]


Note that SNPE provides the following AAR file, which includes the necessary binaries:

  • snpe-release.aar: Native binaries compiled with clang using libc++ STL

Please set the environment variable SNPE_AAR to this AAR file.

To build this sample, include the SNPE SDK AAR as described above and build with the following commands.

export SNPE_AAR=snpe-release.aar
cd $SNPE_ROOT/examples/android/image-classifiers
bash ./setup_alexnet.sh
bash ./setup_inceptionv3.sh
cp ../../../android/$SNPE_AAR app/libs/snpe-release.aar
./gradlew assembleDebug

Note:

  • To build the sample, import the network models and sample images by invoking the setup scripts (setup_alexnet.sh and setup_inceptionv3.sh) as described above.
  • If the build fails with the error "SDK location not found", set the environment variable ANDROID_HOME to point to your SDK location.
  • Building the sample code with Gradle requires Java 8.

After the build successfully completes, the output APK can be found in the application build folder:

  • $SNPE_ROOT/examples/android/image-classifiers/app/build/outputs/apk