Friday 4/20/18 08:10am
Posted By Enrico Ros

Run your ONNX AI Models Faster on Snapdragon

Are you still running your machine learning inference workloads in the cloud? Why put up with latency, privacy risks, and slow model processing when you can run those neural networks right on the mobile device?

Since last summer we’ve been posting about on-device AI and how you can use the Qualcomm® Snapdragon™ Neural Processing Engine (NPE) SDK to run your Caffe/Caffe2 and TensorFlow models directly on Snapdragon devices.

