This guide demonstrates how to get started with the Qualcomm® Neural Processing SDK. Starting from a clean Ubuntu installation, it walks you through installing the dependencies, setting up the SDK tools, downloading and preparing example neural network models, and finally building the example Android app that you can use as a basis for your own artificial intelligence (AI) solutions.
Longer-form documentation on this process is available in the SDK's /doc/html folder; open index.html to begin learning.
We recommend performing the following operations on a dedicated machine, to better understand the SDK dependencies:
- Install Ubuntu 14.04 (recommended), for example on a virtual machine.
- Install the latest Android Studio.
- Install the latest Android SDK, from Android Studio or stand-alone.
- Install the latest Android NDK, from the Android Studio SDK Manager or stand-alone.
- Install Caffe (installation instructions, git revision d8f79537 recommended with this SDK).
- Optional: install TensorFlow (installation instructions, version 1.0 recommended).
# this will build Caffe (and the pycaffe bindings) from source - see the official instructions for more information
sudo apt-get install cmake git libatlas-base-dev libboost-all-dev libgflags-dev libgoogle-glog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev libopencv-dev libprotobuf-dev libsnappy-dev protobuf-compiler python-dev python-numpy
git clone https://github.com/BVLC/caffe.git ~/caffe; cd ~/caffe; git reset --hard d8f79537
mkdir build; cd build; cmake ..; make all -j4; make install
# this will download and install TensorFlow in a virtual environment - see the official instructions for more information
sudo apt-get install python-pip python-dev python-virtualenv
mkdir ~/tensorflow; virtualenv --system-site-packages ~/tensorflow; source ~/tensorflow/bin/activate
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0-cp27-none-linux_x86_64.whl
Set up the SDK
This step allows the Qualcomm Neural Processing SDK to communicate with the Caffe and TensorFlow frameworks via their Python APIs. To set up the SDK on Ubuntu 14.04, proceed as follows:
- Make sure you have installed the Android NDK, Caffe (here assumed in ~/caffe) and optionally TensorFlow (here assumed in ~/tensorflow) before proceeding.
- Download the latest Qualcomm Neural Processing SDK.
- Unpack the .zip file to an appropriate location (here assumed in the ~/snpe-sdk folder).
- Install missing system packages:
- Initialize the Qualcomm Neural Processing SDK environment in the current console; in the future, repeat this for every new console:
# install a few more SDK dependencies, then perform a comprehensive check
sudo apt-get install python-dev python-matplotlib python-numpy python-protobuf python-scipy python-skimage python-sphinx wget zip
source ~/snpe-sdk/bin/dependencies.sh # verifies that all dependencies are installed
source ~/snpe-sdk/bin/check_python_depends.sh # verifies that the python dependencies are installed
# initialize the environment on the current console
export ANDROID_NDK_ROOT=~/Android/Sdk/ndk-bundle # default location for Android Studio, replace with yours
source ./bin/envsetup.sh -c ~/caffe
source ./bin/envsetup.sh -t ~/tensorflow # optional for this guide
The initialization sets or updates $SNPE_ROOT, $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, $CAFFE_HOME, and $TENSORFLOW_HOME; in addition, it copies the Android NDK libgnustl_shared.so library locally and updates the Android AAR archive.
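As a quick sanity check, you can list the variables the setup scripts are expected to export in the same console. This is only a sketch: the exact values depend on where you unpacked the SDK and the frameworks, and TENSORFLOW_HOME will be empty if you skipped the optional TensorFlow setup.

```shell
# print each variable the setup scripts should have exported;
# an empty value means the corresponding setup step was skipped or failed
for var in SNPE_ROOT CAFFE_HOME TENSORFLOW_HOME ANDROID_NDK_ROOT; do
  printf '%s=%s\n' "$var" "$(printenv "$var")"
done
```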
Download the ML Models and convert them to .DLC
The Qualcomm Neural Processing SDK does not bundle the model files, which are publicly available; instead, it contains scripts to download some popular models and convert them to the Deep Learning Container ("DLC") format. The scripts are located in the /models folder, which will also contain the converted DLC models.
- Download and convert a pre-trained AlexNet example in Caffe format:
- Optional: download and convert a pre-trained "inception_v3" example in TensorFlow format:
python ./models/alexnet/scripts/setup_alexnet.py -a ./temp-assets-cache -d
Tip: take a look at the setup_alexnet.py script, which performs the conversion to DLC. You will likely perform the same operations when converting your own Caffe models.
python ./models/inception_v3/scripts/setup_inceptionv3.py -a ./temp-assets-cache -d
Tip: take a look at the setup_inceptionv3.py script, which also performs quantization on the model, for a 75% size reduction (91MB → 23MB).
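The 75% figure follows from the storage format alone: the quantized DLC stores each 32-bit floating-point weight as an 8-bit fixed-point value, so the weight data shrinks to roughly a quarter of its size. A quick back-of-the-envelope check:

```shell
# 8-bit weights take 1 byte each instead of 4, i.e. roughly a 4x reduction
awk 'BEGIN { printf "91MB -> ~%.0fMB (%.0f%% smaller)\n", 91 / 4, 100 * (1 - 1.0 / 4) }'
# prints: 91MB -> ~23MB (75% smaller)
```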
Build the Example Android App
The Android app combines the Snapdragon NPE runtime (provided by the /android/snpe-release.aar Android library) and the DLC model generated by the Caffe AlexNet example described above.
- Prepare the app by copying the runtime and the model:
- Option A: build the Android APK from Android Studio:
- Launch Android Studio.
- Open the project in the ~/snpe-sdk/examples/android/image-classifiers folder.
- Accept the Android Studio suggestions to upgrade build system components, if offered.
- Press the "Run app" button to build and run the APK.
- Option B: build the Android APK from the command line:
cp ../../../android/snpe-release.aar ./app/libs # copies the NPE runtime library
bash ./setup_models.sh # packages the AlexNet example (DLC, labels, inputs) as an Android resource file
sudo apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386 lib32z1 libbz2-1.0:i386 # Android SDK build dependencies on Ubuntu
./gradlew assembleDebug # build the APK
The command above will likely require ANDROID_HOME and JAVA_HOME to be set to the locations of the Android SDK and the JDK on your system.
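For example, before invoking Gradle you might export something like the following. Both paths are assumptions for a default Android Studio install with OpenJDK 8; substitute the locations on your machine:

```shell
# hypothetical locations - adjust both paths to match your installation
export ANDROID_HOME=~/Android/Sdk
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```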
Congratulations, you have just built your first app using the Qualcomm Neural Processing SDK. It's time to get started creating your own AI solutions!
The source code of the example Android app demonstrates how to use the SDK correctly. ClassifyImageTask.java is a good starting point. API documentation, tutorials, and architectural details are available in the documentation bundled with the SDK; point your browser to /doc/html/index.html. Answers to frequent questions can be found in the forums, where you can also talk "APIs" with our experts.