Run your ONNX AI Models Faster on Snapdragon

Friday 4/20/18 08:10am | Posted By Enrico Ros

Snapdragon and Qualcomm branded products are products of
Qualcomm Technologies, Inc. and/or its subsidiaries.

Are you still running your machine learning inference workloads in the cloud? Why put up with latency, privacy risks and slow model processing when you can run those neural networks right on the mobile device?

Since last summer we’ve been posting about on-device AI and how you can use the Snapdragon® Neural Processing Engine (NPE) SDK to run your Caffe/Caffe2 and TensorFlow models directly on Snapdragon devices.

And starting now (not tomorrow, not next Tuesday) you can also use the NPE SDK to run ONNX models on that same Snapdragon mobile platform. We have just announced a major contribution to the ONNX mission: mobile application and IoT developers can now run accelerated ONNX 1.0.1 models on Snapdragon® mobile platforms, starting with Snapdragon 450.

If you’re new to developing applications centered on machine learning, you’ve got plenty of reasons to look at ONNX, the Open Neural Network Exchange format. ONNX simplifies AI choices by defining a standard that is here to stay and is usable anywhere from small mobile devices to large server farms, across chipsets and vendors, with extensive runtime and tooling support. ONNX reduces the friction of moving trained AI models among your favorite tools, frameworks, and platforms.

First-mover advantage

Since the introduction of ONNX in late 2017, we have been committed to supporting the format. So have several hardware, software and cloud companies, including Amazon, AMD, ARM, Facebook, Huawei, IBM, Intel, Microsoft, MediaTek and NVIDIA. Unlike other model formats that are tied to a specific AI framework, ONNX is the first open format backed by the major companies building their businesses around AI. The format has quickly attracted big fans and established working groups.

ONNX is young but growing quickly.

Our first product with ONNX support is the Snapdragon NPE SDK version 1.14.0, our mobile inferencing SDK released on March 31, 2018. The developer workflow for running accelerated ONNX models on device with the NPE SDK is straightforward: convert your trained ONNX model into the SDK’s DLC format, then load and execute it from your application on the Snapdragon device.

What can you do with ONNX?

You can import and export ONNX AI models among deep learning tools and frameworks like Caffe2, Chainer, Cognitive Toolkit, MXNet and PyTorch. Converters are also available for models created with Core ML and TensorFlow.
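For instance, here is a minimal sketch of exporting a pretrained torchvision model to ONNX with PyTorch’s built-in exporter (the model choice and file names are illustrative):

```python
# Minimal sketch: export a pretrained PyTorch model to ONNX.
# Assumes torch and torchvision are installed.
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True)
model.eval()  # switch to inference mode before export

# The exporter traces the model with a dummy input of the right shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "alexnet.onnx")
```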

You can make your own AI application using ONNX; for example, from the ONNX Models repository on GitHub you can download models such as AlexNet, Inception, ResNet and VGG to implement techniques like object recognition, image classification and feature extraction for face identification. With those techniques you can build applications to understand and augment the world as seen through the eye of the camera. Or you can invent new applications and use cases by remixing ONNX models.
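Before feeding a downloaded model to a converter or runtime, it is worth validating it. Here is a minimal sketch using the onnx Python package (the file name is illustrative):

```python
# Minimal sketch: load and validate a model downloaded from the
# ONNX Models repository. Assumes the onnx package is installed.
import onnx

model = onnx.load("resnet50.onnx")
onnx.checker.check_model(model)  # verifies the graph against the ONNX spec

# Listing the operators the model uses helps confirm they are
# supported (and accelerated) by your target runtime.
ops = sorted({node.op_type for node in model.graph.node})
print("IR version:", model.ir_version)
print("Operators:", ops)
```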

The ONNX community is expanding beyond techniques for vision, to include models for applications like language modeling.

How can you use the NPE SDK to run ONNX models on Snapdragon right now?

ONNX version 1.0, focused on image applications, was released in December 2017, and version 1.0.1 was released in March 2018. The Snapdragon NPE SDK supports both and, better yet, now accelerates selected ONNX operators on our mobile platforms. The quick start is a two-step flow: convert your ONNX model into the SDK’s DLC format with the bundled converter, then deploy and run the DLC on device.
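As a sketch of the first step, assuming the NPE SDK’s snpe-onnx-to-dlc converter is on your PATH (the exact flag names can vary by SDK version, so check snpe-onnx-to-dlc --help before relying on them):

```python
# Minimal sketch: convert an ONNX model to the NPE SDK's DLC format
# by shelling out to the SDK's converter tool. Flag names are
# illustrative and may differ across SDK versions.
import subprocess

subprocess.run(
    [
        "snpe-onnx-to-dlc",
        "--input_network", "alexnet.onnx",  # ONNX model to convert
        "--output_path", "alexnet.dlc",     # DLC file for the NPE runtime
    ],
    check=True,  # raise CalledProcessError if the conversion fails
)
```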

Not only is the NPE SDK designed so you can run ONNX models on Snapdragon platforms, but it also gives you the freedom to choose the acceleration core (DSP, GPU, or even CPU) based on the needs of your specific use case.
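Here is a hedged sketch of trying the same DLC on each core with the SDK’s snpe-net-run tool (again, flags are illustrative and may vary by SDK version):

```python
# Minimal sketch: run the converted DLC on CPU, GPU, and DSP in turn
# using the SDK's snpe-net-run tool. Omitting a runtime flag falls
# back to the CPU. Check snpe-net-run --help for your SDK version.
import subprocess

for runtime_flags in ([], ["--use_gpu"], ["--use_dsp"]):
    subprocess.run(
        [
            "snpe-net-run",
            "--container", "alexnet.dlc",  # converted model
            "--input_list", "inputs.txt",  # text file listing raw input tensors
        ]
        + runtime_flags,
        check=True,
    )
```

Profiling the same model on each core this way is a quick path to the latency and power tradeoff that fits your use case.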

Next Steps

Don’t let any grass grow under your feet. Be the first to launch a new application that delights your users and is powered by accelerated ONNX AI, with high-fps experiences that are low-latency, private, and reliable.

With the resources in this blog post, you can have your own models, and those in the ONNX model repository, up and running with acceleration in no time. A few weeks ago that wasn’t possible, but ONNX is opening the door to a whole new community of machine learning developers, and the NPE SDK is rolling out a red carpet for workloads that run directly on the device.

That can mean a big competitive advantage for you. Download the NPE SDK and get started.
