Qualcomm Neural Processing SDK for AI

A Product of Qualcomm Technologies, Inc.

The Qualcomm® Neural Processing SDK for artificial intelligence (AI) is engineered to help developers save time and effort in optimizing performance of trained neural networks on devices with Snapdragon.

Premium tier Snapdragon® mobile platforms have extensive heterogeneous computing capabilities that are engineered to allow trained neural networks to run on device without a connection to the cloud. The Qualcomm Neural Processing SDK is designed to help developers run one or more neural network models trained in Caffe/Caffe2, ONNX, or TensorFlow on Snapdragon mobile platforms, whether on the CPU, GPU, or DSP.

The Qualcomm Neural Processing SDK provides tools for model conversion and execution as well as APIs for targeting the core with the power and performance profile to match the desired user experience. The Qualcomm Neural Processing SDK supports convolutional neural networks and custom layers.
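As a sketch of what targeting a specific core can look like, the snippet below uses the SDK's Java API (`com.qualcomm.qti.snpe`) to build a network with a preferred runtime order, falling back from DSP to GPU to CPU. The model path is a placeholder, and exact class and method names may vary between SDK versions.

```java
import android.app.Application;
import com.qualcomm.qti.snpe.NeuralNetwork;
import com.qualcomm.qti.snpe.SNPE;

import java.io.File;
import java.io.IOException;

final class RuntimeSelectionSketch {
    // Builds a network that prefers the DSP, then the GPU, then the CPU.
    // "/path/to/model.dlc" is a placeholder for your converted model file.
    static NeuralNetwork loadNetwork(Application application) throws IOException {
        return new SNPE.NeuralNetworkBuilder(application)
                .setRuntimeOrder(NeuralNetwork.Runtime.DSP,   // fastest, lowest power if supported
                                 NeuralNetwork.Runtime.GPU,   // fallback
                                 NeuralNetwork.Runtime.CPU)   // always available
                .setModel(new File("/path/to/model.dlc"))
                .build();
    }
}
```

Listing several runtimes lets the SDK fall back gracefully on devices where a given accelerator is unavailable, matching the power and performance profile to the user experience.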

The Qualcomm Neural Processing SDK does a lot of the heavy lifting needed to run neural networks on Snapdragon mobile platforms, which can help provide developers with more time and resources to focus on building new and innovative user experiences.

What's in the SDK?

  • Android and Linux runtimes for neural network model execution
  • Acceleration support for Qualcomm® Hexagon™ DSPs, Qualcomm® Adreno™ GPUs, and Qualcomm® Kryo™ CPUs1
  • Support for models in Caffe, Caffe2, ONNX, and TensorFlow formats2
  • APIs for controlling loading, execution and scheduling on the runtimes
  • Desktop tools for model conversion
  • Performance benchmark for bottleneck identification
  • Sample code and tutorials
  • HTML documentation

To make the AI developer's life easier, the Qualcomm Neural Processing SDK does not define yet another library of network layers; instead it gives developers the freedom to design and train their networks using familiar frameworks, with Caffe/Caffe2, ONNX, and TensorFlow supported at launch. The development workflow is as follows:

NPE SDK workflow

After designing and training, the model file needs to be converted into a ".dlc" (Deep Learning Container) file to be used by the Snapdragon NPE runtime. The conversion tool will output conversion statistics, including information about unsupported or non-accelerated layers, that the developer can use to adjust the design of the initial model.
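For example, converting a trained TensorFlow model might look like the command below. This is a sketch based on the SDK's converter tools; the exact tool name, flags, input tensor name, dimensions, and output node shown here are placeholders that depend on your model and SDK version.

```shell
# Convert a TensorFlow frozen graph into a .dlc container.
# Graph file, input/output names, and dimensions are illustrative.
snpe-tensorflow-to-dlc \
    --graph my_model.pb \
    --input_dim input "1,299,299,3" \
    --out_node "output" \
    --dlc my_model.dlc
```

The converter prints statistics during this step, which is where warnings about unsupported or non-accelerated layers appear.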

Is the Qualcomm Neural Processing SDK Right for You?

Developing for artificial intelligence with the Qualcomm Neural Processing SDK requires a few prerequisites before you can start creating solutions.

  • You need to run a convolutional model in one or more verticals, such as mobile, automotive, IoT, AR, drones, and robotics
  • You know how to design and train a model or already have a pre-trained model file
  • Your framework of choice is Caffe/Caffe2, ONNX, or TensorFlow
  • You build Java apps for Android, or native applications for Android or Linux
  • You have an Ubuntu 14.04 development environment
  • You have a supported device to test your application on

For other use cases or needs, please reach out to us on the support forum.

Forums and Feedback

We welcome your feedback and questions on the Qualcomm Neural Processing SDK. Check out the Qualcomm Neural Processing SDK Forum to read answers to frequently asked questions, register to post a new topic, keep track of updated threads, and answer questions from fellow machine learning enthusiasts.

  1. The Qualcomm Neural Processing SDK supports numerous Snapdragon CPUs
  2. Due to the fast evolution of networks and layers, acceleration support is partial and will be expanded over time.