What would you do with 7 teraOPS of compute available for running your machine learning models on a mobile device?
That would really open up your options in robotics, edge computing, connected cameras and always-on applications, wouldn't it? Hardware acceleration can speed up neural network execution by orders of magnitude in use cases like image classification, object detection, face detection and speech recognition. It can pave the way for new, real-time use cases that otherwise wouldn't be possible, like live video enhancements and fast body tracking for games.
And if you got all that compute with extremely low power consumption and without bogging down your CPU and GPU, then so much the better.
With our latest processor, the Qualcomm® Snapdragon™ 855 mobile platform, you can run inference workloads directly on the Qualcomm® Artificial Intelligence (AI) Engine — dedicated hardware and software designed to accelerate on-device AI. The Qualcomm® Neural Processing SDK for AI, our software-accelerated runtime for executing deep neural networks, lets you program the Qualcomm AI Engine. Together, the engine and the SDK let you squeeze up to 7 teraOPS (7 trillion operations per second) of AI processing out of the Snapdragon 855, with massive acceleration for your on-device AI applications.
How much acceleration?
In this video, we show a reference benchmarking application we created for internal development purposes, to illustrate just how much acceleration you could see on the Snapdragon 855 mobile platform.
Inference in the cloud or on the device?
A lot of developers are writing mobile and IoT apps around functions like image classification, object detection and face detection. A few years ago, they had to perform both training and inference in the cloud. But as mobile processors have increased in power, developers have started separating training from inference. After training machine learning models in the cloud, they're moving the inference workloads down to the mobile device.
Why should you run your machine learning models on the device rather than in the cloud? For one thing, you avoid the latency of a round trip to the cloud and back. You can also keep user data on the device, an advantage for privacy. And your app isn't at the mercy of the network connection. In short, on-device inference lets you take machine learning into new industries and give users richer mobile experiences.
But on-device processing demands high compute. Otherwise, the device becomes the bottleneck. As the video above suggests, if the device takes, say, one-half or one-third of a second to run the image from your camera through the AI model, that's only two or three frames per second, far too slow for a real-time application.
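The frame-rate math behind that claim is worth making explicit. Here is a minimal sketch (the helper function is ours, for illustration only, not part of any SDK):

```python
def max_frame_rate(inference_seconds: float) -> float:
    """Upper bound on frames per second when every camera frame
    must pass through the model before the next one is processed."""
    return 1.0 / inference_seconds

# A model that needs a third of a second per frame caps you at ~3 fps,
# far below the ~30 fps a smooth camera preview needs.
print(round(max_frame_rate(1 / 3)))   # 3 fps
print(round(max_frame_rate(0.007)))   # 143 fps at ~7 ms per inference
```

The second call shows why per-inference latency in the single-digit milliseconds matters: it leaves headroom for the rest of the camera pipeline while still sustaining real-time frame rates.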
AI processing power on the device
That's why we've designed the Qualcomm AI Engine in the Snapdragon 855 with the capacity for mind-blowing performance on machine learning models. The engine provides high capacity for matrix multiplication on both the Qualcomm® Hexagon™ Vector eXtensions (HVX) and the Hexagon Tensor Accelerator (HTA). With enough on-device processing power to run more than 140 inferences per second on the Inception-v3 neural network, your app could classify or detect dozens of objects in just a few milliseconds and with high confidence.
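A benchmarking application like the ones shown in these videos boils down to a timing loop around the model's execute call. Below is a minimal, generic sketch in Python; the function name is ours, and the `time.sleep` stands in for a real inference call (roughly the 7 ms latency implied by 140 inferences per second):

```python
import time

def measure_inferences_per_second(run_inference, warmup=10, iters=100):
    """Generic micro-benchmark: time `iters` calls to `run_inference`
    after a short warm-up, returning (inferences/sec, mean ms/inference)."""
    for _ in range(warmup):          # warm caches, JIT, runtime init
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return iters / elapsed, (elapsed / iters) * 1000.0

# Stand-in for a real model execution (e.g. an SDK's execute step);
# sleeping ~7 ms mimics a 140-inferences-per-second workload.
ips, ms = measure_inferences_per_second(lambda: time.sleep(0.007))
print(f"{ips:.0f} inferences/sec, {ms:.1f} ms per inference")
```

The warm-up phase matters in practice: the first few inferences often pay one-time costs (model loading, runtime initialization) that would otherwise skew the average.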
In the next video you'll see a reference benchmarking application we created for internal development that shows the Qualcomm AI Engine performing inference to classify multiple objects at once. Start thinking about how you could use this kind of AI to transform industries and enrich lives in areas like robotics, IoT, VR/AR and connected car applications.
The combination of the Qualcomm Neural Processing SDK and the Qualcomm AI Engine is the latest step on our path to giving you the tools to offer new user experiences through mobile AI.
Want the competitive advantage of running your machine learning models on dedicated hardware for high performance and low power consumption? Download the Qualcomm Neural Processing SDK for AI now and start testing on a commercial device running the Snapdragon 855.
Qualcomm Snapdragon, Qualcomm AI Engine, Qualcomm Neural Processing SDK and Qualcomm Hexagon are products of Qualcomm Technologies, Inc. and/or its subsidiaries.