Qualcomm Neural Processing SDK
The Qualcomm® Neural Processing SDK is engineered to help developers save time and effort in optimizing the performance of trained neural networks on devices with Qualcomm® AI products. As part of the Qualcomm® AI Stack, it helps developers deploy AI models quickly and run them entirely on-device on Qualcomm® AI products.
Our products have extensive AI processing capabilities that are engineered to allow trained neural networks to run on device without requiring a connection to the cloud. The Qualcomm Neural Processing SDK is designed to help developers run one or more neural network models trained in TensorFlow, PyTorch, Keras or ONNX on Qualcomm® platforms, whether that is the CPU, GPU or Qualcomm® Hexagon™ Processor.
The Qualcomm Neural Processing SDK provides tools for model conversion and execution as well as APIs for targeting the core with the power and performance profile to match the desired user experience. The Qualcomm Neural Processing SDK supports convolutional neural networks, custom layers and more.
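To illustrate what targeting a core with a matching power and performance profile means in practice, here is a minimal, purely conceptual sketch. This is not the actual SDK API (the SDK exposes its own C++ and Java APIs); the function name and preference order below are hypothetical, assumed for illustration only:

```python
# Hypothetical sketch of runtime selection with fallback; the real SDK
# exposes its own C++/Java APIs for this. The preference order below is
# an assumption for illustration, not the SDK's actual behavior.
PREFERRED_RUNTIMES = ["HEXAGON", "GPU", "CPU"]

def select_runtime(available):
    """Return the first preferred runtime reported as available on the device."""
    for runtime in PREFERRED_RUNTIMES:
        if runtime in available:
            return runtime
    raise RuntimeError("No supported runtime available on this device")
```

In the real SDK, the application would query the device's capabilities and pass the chosen runtime when loading the network; this sketch only conveys the fallback idea behind matching workloads to cores.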
The Qualcomm Neural Processing SDK does a lot of the heavy lifting needed to run neural networks on Qualcomm® platforms, which can help provide developers with more time and resources to focus on building new and innovative user experiences.
What's in the SDK?
- Android and Linux runtimes for neural network model execution
- Acceleration support for the Qualcomm® Hexagon™ Processor, Qualcomm® Adreno™ GPUs and Qualcomm® Kryo™ CPUs [1]
- Support for models in TensorFlow, PyTorch, Keras and ONNX formats [2]
- APIs for controlling loading, execution and scheduling on the runtimes
- Desktop tools for model conversion
- Performance benchmark for bottleneck identification
- Sample code and tutorials
- HTML documentation
To make the artificial intelligence developer's life easier, the Qualcomm Neural Processing SDK does not define yet another library of network layers; instead it gives developers the freedom to design and train their networks using familiar frameworks, with TensorFlow, PyTorch, Keras and ONNX being supported at launch. The development workflow is the following:
After designing and training, the model file must be converted into a ".dlc" (Deep Learning Container) file for use by the Snapdragon® Neural Processing Engine (NPE) runtime. The conversion tool outputs conversion statistics, including information about unsupported or non-accelerated layers, which the developer can use to adjust the design of the initial model.
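As a hedged illustration of the conversion step, converting an ONNX model from the command line might look like the following. The tool name and flags reflect typical SDK releases and may differ in your version, and `model.onnx` and the output path are placeholders; consult the documentation shipped with your SDK:

```shell
# Convert a trained ONNX model into a Deep Learning Container (.dlc)
# for the Snapdragon NPE runtime. The conversion statistics printed by
# the tool flag any unsupported or non-accelerated layers.
snpe-onnx-to-dlc --input_network model.onnx --output_path model.dlc
```

Equivalent converters exist for the other supported frameworks; the resulting .dlc file is what the application loads at runtime.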
Is the Qualcomm Neural Processing SDK Right for You?
Developing for artificial intelligence using the Qualcomm Neural Processing SDK does require a few prerequisites before you can get started creating solutions.
- You need to run a convolutional model in one or more verticals, including mobile, compute, automotive, IoT, AR, drones, and robotics
- You know how to design and train a model or already have a pre-trained model file
- Your framework of choice is TensorFlow, PyTorch, Keras or ONNX
- You make Java apps for Android or native applications for Android, Windows, or Linux
- You have an Ubuntu 20.04 development environment, either native or via WSL2 on Windows
- You have a supported device to test your application on
For other use cases or needs, please reach out to us on the support forum.
Forums and Feedback
We welcome your feedback and questions on the Qualcomm Neural Processing SDK. Check out the Qualcomm Neural Processing SDK Forum to read answers to frequently asked questions, register to post a new topic, keep track of updated threads, and answer fellow machine learning enthusiasts.
- [1] The Qualcomm Neural Processing SDK supports numerous Snapdragon CPUs.
- [2] Because networks and layers evolve quickly, acceleration support is partial and will be expanded over time.
Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.