QTI Announces Support for ONNX, Simplifying AI Choices for Developers

Tuesday 10/10/17 09:49am | Posted By Gary Brotman

So you’ve started working with neural networks and artificial intelligence (AI), but have you found it hard to choose one machine learning framework over another, such as Caffe2, Caffe, TensorFlow, Microsoft Cognitive Toolkit or PyTorch? Whether you’re training your own models or using freely available ones, you’ll want to choose a framework you can stick with all the way through to production.


Qualcomm Technologies, Inc. (QTI) Joins ONNX

In September, Facebook and Microsoft introduced the Open Neural Network Exchange (ONNX) format. ONNX is an interchange format designed to make it possible to transfer deep learning models from the framework used to create them to other frameworks.

With ONNX, Facebook can take a trained model created elsewhere with PyTorch, for example, and use it with Caffe2, Facebook’s preferred framework, for the inference stage of machine learning. Microsoft, in turn, has announced that its own Cognitive Toolkit will support ONNX, initially for inference.
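
To make that handoff concrete, here is a rough sketch of the round trip: a model exported from PyTorch into the ONNX format and reloaded through Caffe2’s ONNX backend for inference. The model, input shape, file name and backend import path are illustrative assumptions (the Caffe2 ONNX backend has lived at different module paths across releases), not the exact pipeline either company uses.

    import numpy as np
    import onnx
    import torch
    import torchvision.models as models
    import caffe2.python.onnx.backend as caffe2_backend  # import path may vary by release

    # Train or load a model in PyTorch (a pretrained SqueezeNet here, as a placeholder).
    model = models.squeezenet1_1(pretrained=True)
    model.eval()

    # Export it to the ONNX interchange format; the dummy input fixes
    # the input shape recorded in the exported graph.
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "squeezenet.onnx")

    # Reload the exported model and run inference with Caffe2.
    onnx_model = onnx.load("squeezenet.onnx")
    rep = caffe2_backend.prepare(onnx_model)
    outputs = rep.run([np.random.randn(1, 3, 224, 224).astype(np.float32)])
    print(outputs[0].shape)

The same exported file could just as well be handed to any other runtime that understands the ONNX format.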

This initiative aligns with our goal of supporting neural network execution on Qualcomm® Snapdragon™ mobile platforms, where it can be accelerated by the Snapdragon Neural Processing Engine (NPE) SDK. As such, QTI welcomes the opportunity to participate in ONNX, which makes it easier for developers to work with multiple frameworks.

What ONNX Means for OEMs, ODMs and Developers

Openness generally works in favor of the developer, so there are several ways that your AI applications can benefit from ONNX:

  • If you’ve started using the NPE SDK to run applications like object tracking, natural language processing and speaker recognition on mobile devices at the network edge, ONNX is designed to give you more latitude in the models you can deploy.
  • If you’ve optimized the performance of your neural networks and want to share them, ONNX helps widen the field of AI developers who can use them. It also helps widen the field of models you can borrow from other developers.
  • Let’s face it: every time a new technology opens up, developers have to make platform choices. Programming language, architecture, development environment, machine learning framework... you’re always making a choice that you and your team have to live with for a long time. ONNX helps you reduce the risk of painting yourself and your app into a corner because of the machine learning framework you chose.

ONNX holds the promise of smoothing the path between research and production.

Next Steps

Curious? You’ll find ONNX source code, documentation, binaries, Docker images and tutorials available right now on GitHub. Test-drive ONNX. We’ll keep you posted on the progress of ONNX integration into the Snapdragon portfolio.
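
If you want a quick first experiment before wiring ONNX into a full workflow, the onnx Python package from that repository can load an exported model, validate it against the spec and print its graph. The file name below is a placeholder for whatever model you export.

    import onnx

    # Load an exported model ("model.onnx" is a placeholder path).
    model = onnx.load("model.onnx")

    # Validate the model against the ONNX spec...
    onnx.checker.check_model(model)

    # ...and print a human-readable summary of its graph.
    print(onnx.helper.printable_graph(model.graph))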