The Expanding World of AI: New Areas for Developers

Tuesday 7/24/18 09:00am
Posted By Enrico Ros

Snapdragon and Qualcomm branded products are products of
Qualcomm Technologies, Inc. and/or its subsidiaries.

Artificial Intelligence (AI) is revolutionizing mobile and IoT experiences like never before. With the growth and expansion of AI into new areas of development, chances are you have either started integrating AI into your work or are considering how best to do so. As a developer, it’s important to be familiar with how AI can impact different areas of development, as you may discover areas being reinvented by AI that you hadn’t initially considered.

Qualcomm Technologies, Inc. (QTI) has been working with AI for close to a decade, and we’ve seen the ecosystem for AI grow and expand. This ecosystem consists of AI technologies and companies, including hardware vendors, cloud services, algorithm vendors, data vendors, and open and closed frameworks that help developers put AI to work.

In this blog we’ll take a quick look at some of these new development areas utilizing parts of the AI ecosystem, and share a few examples of how developers are expanding this space. And, if you need a quick primer on AI, check out our eBook.

Face and Body Identification

Object identification, particularly of the human face and body, is an application of AI that includes functionality such as detection, feature recognition, comparison, searching, and matching. It powers use cases such as face unlock, posture detection for gaming, and face beautification. These use cases are built on pre-trained “neural networks,” which are collections of nodes that have learned, through examples, what certain features (e.g., distance between the eyes, shape of the nose, presence of freckles) look like.

The process of “training” such networks often involves “deep learning” to populate them with abstract representations of the data. Training typically consists of multiple phases, each of which further abstracts the data so that distinct features, such as edges and facial features, can be identified. Once the network has been trained, a process called “inference” applies the pre-trained network to any face to generate the quantitative data that ultimately drives the use case.
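To make the training/inference split concrete, here is a toy sketch in plain Python. This is purely illustrative, not QTI’s or any vendor’s actual pipeline: the “pre-trained” weights are hand-picked numbers standing in for a trained model, and a face is reduced to three made-up feature values. Inference maps those features to an embedding, and matching finds the closest embedding in a small database.

```python
import math

# Toy "pre-trained" weights. In a real system these would come from deep
# learning on large labeled datasets; here they are fixed, illustrative
# numbers standing in for a trained model.
WEIGHTS = [
    [0.9, -0.2, 0.1],
    [0.3, 0.8, -0.5],
]

def embed(features):
    """Inference: map raw face features (e.g. eye distance, nose shape,
    freckle density) to an embedding via a single linear layer."""
    return [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]

def best_match(query_features, database):
    """Return the database entry whose embedding lies closest to the
    query's embedding -- the 'searching and matching' step."""
    q = embed(query_features)
    return min(database, key=lambda item: math.dist(q, embed(item[1])))

db = [("alice", [0.62, 0.30, 0.05]), ("bob", [0.40, 0.75, 0.90])]
name, _ = best_match([0.60, 0.31, 0.04], db)  # a new photo resembling Alice
```

Production systems work the same way in spirit, but with deep convolutional networks producing high-dimensional embeddings from raw pixels rather than a single linear layer over hand-picked features.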

What would an implementation of this process look like? Well, SenseTime is using deep learning to provide an API for advanced vision processing. Their SenseTotem solution performs facial recognition and comparisons to find a face in a database of images. QTI has been collaborating with SenseTime on a “chip + algorithm” initiative to utilize our hardware strengths and their machine learning models and algorithms.

Another example to look at for inspiration is Face++, a company that also provides APIs for facial and body detection. Their Body Detection solution identifies the outlines of people in images for a variety of uses, including surveillance. Face++ has optimized their solutions for our Snapdragon® mobile platforms, which are powered by our AI Engine.

User Input

You might not think of user input as an AI-driven task, but it’s being used to drive new and creative ways for user interaction with devices.

Touch-free, gesture-based control, such as that being developed by Elliptic Labs, allows users to control their devices with motions such as hand waves, without physical contact with the device. The solution uses the machine learning and optimization tools of our Qualcomm® Neural Processing SDK to create these virtual sensors.

Using the speakers and microphone found in mobile devices, the solution emits ultrasound waves through the speaker, which bounce off users. The reflected ultrasound waves are recorded by the microphone and interpreted by a mix of pre-trained neural networks and signal-processing algorithms running on the device, as described here.
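As a rough illustration of the signal-processing half of this idea (not Elliptic Labs’ actual algorithm), the sketch below simulates a short near-ultrasonic pulse, a delayed echo from a hand, and recovers the delay by cross-correlation. A pre-trained network would then turn sequences of such distance estimates into recognized gestures. All numbers here are assumptions chosen for the demo.

```python
import math

RATE = 48_000   # sample rate (Hz) typical of phone audio hardware
FREQ = 20_000   # near-ultrasonic tone a phone speaker can emit

def pulse(n):
    """A short 20 kHz tone burst, as emitted by the speaker."""
    return [math.sin(2 * math.pi * FREQ * i / RATE) for i in range(n)]

def simulate_echo(tx, delay, total):
    """Microphone signal: silence, then an attenuated copy of the pulse."""
    rx = [0.0] * total
    for i, s in enumerate(tx):
        rx[delay + i] = 0.5 * s
    return rx

def find_delay(tx, rx):
    """Cross-correlate the recording with the pulse; the lag with the
    strongest correlation is the echo's round-trip delay in samples."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(rx) - len(tx) + 1):
        score = sum(t * rx[lag + i] for i, t in enumerate(tx))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

tx = pulse(96)                        # 2 ms burst
rx = simulate_echo(tx, delay=140, total=400)
lag = find_delay(tx, rx)              # recovers the 140-sample delay
distance_m = (lag / RATE) * 343 / 2   # speed of sound, halved for round trip
```

A real implementation runs continuously on streaming audio and must cope with multiple reflections and noise, which is where the learned models earn their keep.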

AI at the Edge

Devices, such as those powered by our Snapdragon platform, are more capable than ever of performing AI tasks, such as inference, that were once relegated to powerful cloud servers. One area where we’ve applied “on-device” AI is virtual assistants, making them more ubiquitous, personalized, and human-like.

Our Voice Activation technology, found in many of our mobile platforms, supports Amazon Alexa, Baidu DUEROS, Microsoft Cortana, and Google Assistant, and uses continuous, on-device learning to provide an intelligent and personalized experience. Machine learning classifies and identifies acoustic signals, and our solutions have evolved to use data from other sources, such as sensors, for further contextual information.
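Always-on voice activation typically pairs a very cheap first stage, which watches the audio stream for anything worth a closer look, with a heavier neural classifier that runs only when that gate opens. The toy sketch below shows such a first stage (illustrative only, not QTI’s actual Voice Activation implementation): frame the audio into 10 ms chunks and flag frames whose energy stands out from an estimated noise floor.

```python
import math
import random

FRAME = 160  # 10 ms frames at a 16 kHz sample rate

def frame_energies(samples):
    """Mean energy of each 10 ms frame."""
    return [
        sum(s * s for s in samples[i:i + FRAME]) / FRAME
        for i in range(0, len(samples) - FRAME + 1, FRAME)
    ]

def active_frames(energies, factor=4.0):
    """Flag frames whose energy exceeds a multiple of the noise floor
    (estimated from the quietest decile of frames). Only flagged frames
    would be handed to the heavier keyword-spotting network."""
    floor = sorted(energies)[len(energies) // 10] or 1e-12
    return [e > factor * floor for e in energies]

# Simulated input: 100 ms of hiss, 100 ms of "speech", 100 ms of hiss.
random.seed(0)
noise = [random.gauss(0, 0.01) for _ in range(1600)]
speech = [0.3 * math.sin(0.05 * i) + random.gauss(0, 0.01) for i in range(1600)]
flags = active_frames(frame_energies(noise + speech + noise))
```

Gating the expensive model this way is one reason on-device assistants can listen continuously without draining the battery.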

On-device AI is an area in which we’re excited to work with ecosystem players. One example is our collaboration with Baidu, in which our Qualcomm® Artificial Intelligence (AI) Engine drives the conversion and application of models from Baidu’s PaddlePaddle open-source deep learning framework on our Snapdragon platforms, via the Open Neural Network Exchange (ONNX) interchange format.


As the AI ecosystem continues to grow, you will likely use AI in your development. Now it’s up to you to look for innovative and unique ways to incorporate it. If your ideas require on-device AI, be sure to check out our Qualcomm Neural Processing SDK, which supports accelerated inference on pre-trained neural networks, including many popular model architectures and formats. If you need any help or want to bounce some ideas off our community, please feel free to visit our AI forum.