Artificial Intelligence

AI is changing everything. Combined with powerful, energy-efficient processors and ubiquitous connectivity to the wireless edge, intelligence is moving to more devices, transforming industries, and creating new experiences.

On-device AI allows for real-time responsiveness, improved privacy, enhanced reliability, and better overall performance, with or without a network connection. Our Qualcomm Artificial Intelligence (AI) Engine, along with the AI software and hardware tools outlined below, is designed to accelerate your on-device AI-enabled applications and experiences.


The Qualcomm Artificial Intelligence (AI) Engine is available on supported Qualcomm® Snapdragon™ mobile platforms, including the 855, 845, 835, 821, 820, and 660, with the most advanced on-device AI processing found in the Snapdragon 855.

Snapdragon core hardware architectures – Qualcomm® Hexagon™ Vector eXtensions (HVX), Qualcomm® Adreno™ GPU and Qualcomm® Kryo™ CPU – are supported within the AI Engine, so your AI applications can run quickly and efficiently on smartphones and other edge devices. This heterogeneous computing approach makes it easy for you to choose the optimal Snapdragon core for your target performance, thermal, and power efficiency requirements. 


We have a number of resources available to help you create and optimize your AI and Machine Learning applications and solutions for rich, on-device experiences:

  • Neural Network Optimization: Qualcomm Neural Processing SDK is designed to help you save time and effort in optimizing the performance of trained neural networks on devices powered by Snapdragon mobile platforms. Deep Learning algorithms are computationally intensive, so having this dedicated tool helps you determine how best to run your applications on device, without a connection to the cloud.
    The Neural Processing SDK supports popular deep learning frameworks, including TensorFlow, Caffe, Caffe2, and ONNX.
  • App Performance Optimization: Snapdragon Developer Tools help you optimize applications running on Snapdragon mobile platforms. They include the Snapdragon Profiler, the Snapdragon Power Optimization SDK, and the Snapdragon Heterogeneous Compute SDK.
  • Specialized Core Optimization: SDKs for specific processor cores are also available:
    • Adreno GPU SDK - targets the Adreno GPU, which excels at performing similar computations on large quantities of data, as in graphics processing and machine learning.
    • Hexagon DSP SDK - targets the Hexagon DSP, which is suited to processing digital signals from the outside world in real time, such as those generated by a smartphone camera or microphone.
  • Smart Camera Solutions: From social media apps to robotics solutions, the following SDKs help you utilize the full capabilities of QTI processors for Smart Camera and other vision-based solutions you create:
    • Machine Vision SDK - engineered to supply cutting-edge computer vision algorithms for localization, feature recognition, and obstacle detection on Qualcomm processors.
    • FastCV SDK - offers a mobile-optimized computer vision (CV) library that includes the most frequently used vision processing functions and helps you add new user experiences to your camera-based apps, such as gesture recognition, face detection, tracking, text recognition, and augmented reality (AR).
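The Neural Processing SDK workflow described above typically converts a trained network into the SDK's DLC container format and then executes it on a chosen Snapdragon core. A minimal Python sketch of that two-step flow, building (but not executing) the command lines: the tool names come from the SDK's documented command-line interface, but exact flags vary by SDK version, and all file paths here are placeholders.

```python
# Sketch of a typical Neural Processing SDK workflow: first convert a
# trained model to the SDK's DLC container format, then run it on a
# chosen Snapdragon core. Tool names (snpe-tensorflow-to-dlc,
# snpe-net-run) are from the SDK's CLI; flags vary by SDK version,
# and file paths are placeholders.

def conversion_command(model_pb, output_dlc):
    """Build the (assumed) model-conversion invocation."""
    return [
        "snpe-tensorflow-to-dlc",
        "--input_network", model_pb,
        "--output_path", output_dlc,
    ]

def run_command(dlc, input_list, runtime_flag="--use_dsp"):
    """Build the (assumed) on-device run invocation; the runtime flag
    selects the core (CPU by default, GPU or DSP when requested)."""
    return [
        "snpe-net-run",
        "--container", dlc,
        "--input_list", input_list,
        runtime_flag,
    ]

convert = conversion_command("model.pb", "model.dlc")
run = run_command("model.dlc", "inputs.txt")
```

In a real pipeline these argument lists would be passed to the SDK's tools on a development host; the point of the sketch is the heterogeneous-compute choice of runtime core at execution time.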


We also have resources to help you create AI and Machine Learning applications for different types of devices.


Qualcomm AI Research works to advance AI and make its core capabilities – perception, reasoning, and action – ubiquitous across devices. The goal is to make breakthroughs in fundamental AI research and scale them across industries. By bringing together some of the best minds in the field, we’re pushing the boundaries of what’s possible and shaping the future of AI.

If you need the QAST dataset used to support the experiments in our ICLR 2019 workshop paper, Simulating Execution Time of Tensor Programs Using Graph Neural Networks, check out our QAST Project Page. We hope this new dataset will benefit the graph research community and raise interest in optimizing compiler research.

Open sourcing the AI Model Efficiency Toolkit
This OnQ blog highlights how the AI Model Efficiency Toolkit has been open-sourced on GitHub to enable collaboration with other leading AI researchers and to provide a simple library plugin that AI developers can use for state-of-the-art model efficiency.
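Model-efficiency toolkits like the one described above rely heavily on techniques such as post-training quantization, which maps 32-bit floating-point weights to 8-bit integers. A minimal pure-Python sketch of affine 8-bit quantization follows; it illustrates the general technique, not the toolkit's actual API.

```python
def quantize_uint8(values):
    """Affine (asymmetric) quantization: map floats onto [0, 255] ints."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    zero_point = round(-lo / scale)   # integer that represents 0.0
    q = [min(255, max(0, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(x - zero_point) * scale for x in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_uint8(weights)
recovered = dequantize(q, scale, zp)
# Round-trip error is bounded by roughly one quantization step (the scale),
# which is why 8-bit inference can closely track full-precision accuracy.
```

Production toolkits build on this idea with more sophisticated methods, such as per-channel scaling and quantization-aware fine-tuning, to preserve accuracy at low bit widths.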


Explore these links about the latest trends and key features of AI and Machine Learning for more ideas and insight into your own projects:




Here you will find a number of engineering-sourced resources to help you with your artificial intelligence development using the Qualcomm Neural Processing SDK.


If you are new to AI, check out our eBook, A Developer’s Guide to Artificial Intelligence (AI), for a primer that covers its Machine Learning (ML) and Deep Learning (DL) subsets.


If you are looking for a deeper dive into some key areas of artificial intelligence, we collaborated with MoorInsights on a new white paper, Qualcomm: Ubiquitous AI For 5G. Topics include distributed intelligence, the importance of on-device AI, the AI Engine, IoT and automotive, Cloud AI 100, and the importance of bridging device to cloud with 5G.


If you are looking for inspiration on how other developers are implementing AI within their solutions, we encourage you to check out our AI Projects to see how the community is working with AI.