Snapdragon and Qualcomm branded products are products of
Qualcomm Technologies, Inc. and/or its subsidiaries.
When you look at your robotics application, do you think of it as an “intelligent-edge use case”? Probably not, but that’s where it belongs.
The intelligent edge of the network offers your devices great advantages in areas like latency, speed, bandwidth, scale, reliability, cost, privacy, and security. That’s why opportunities for robotics in retail, manufacturing, healthcare, smart city, warehouse, and logistics are growing so fast.
I recently conducted a webinar called “Revolutionizing Robotics with Computer Vision, AI, and Heterogeneous Computing.” It describes the role of those three technologies from the developer’s perspective, then walks through an intelligent-edge use case in a hotel waiting lounge.
I’ll highlight some of the main topics to give you a taste of what I covered in my webinar, but if you have 30 minutes, I encourage you to click the link above and see for yourself.
As devices become increasingly connected and intelligent, the limits of continually shuttling data to and from the cloud are becoming painfully obvious. That model doesn’t scale well with the growth in data volume and the number of connected devices.
The way forward is from cloud-centric intelligence to distributed intelligence, with multiple devices connected to the edge cloud, each capable of on-device learning.
An example is autonomous mobile robots (AMRs), which must perceive, reason, and act. Their visual perception features must collect, transfer, and annotate high volumes of data. Their artificial intelligence (AI) workloads are compute-intensive, always on, and often run concurrently. They operate with tight constraints on design, battery, memory, and storage, yet must remain power-efficient and avoid overheating.
In hardware, our approach to the intelligent edge includes heterogeneous computing with specialization, as shown below.
Each of those processing units — CPU, GPU, DSP, NPU, and CV-ISP — is dedicated to a specific computing task in an edge device such as a robot. Developers and manufacturers use tools designed by Qualcomm Technologies to assign each task to the optimal processing unit.
In software, the Qualcomm AI Stack matches developer tools and SDKs to those processing units to provide support to all popular AI frameworks.
Example: The robot in the hotel lounge
Consider a typical hotel waiting lounge.
A concierge or bellboy can walk into this environment and grasp the entire context at a glance. Similarly, the goal in robotics development is to give the robot that same grasp through high-fidelity sensing, computer vision, machine learning (ML), real-time control, and navigation. You achieve that goal by enabling robots and drones to answer several important questions:
- Where am I?
- Where are objects in my surroundings?
- How do I understand my surroundings?
- How do I safely move around?
- How can I help people?
Here are the steps toward achieving that goal:
- Determining position and location
First, the robot needs to determine its own position and location within the environment. It does that with different sets of sensors and computer vision technologies, including location and motion estimation using vision-based simultaneous localization and mapping (SLAM) algorithms and optical flow. Also, multiple sensor inputs such as wheel odometry, height sensors, inertial measurement unit (IMU), GPS, and Wi-Fi are fused with that visual data for greater accuracy.
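The fusion idea described above can be sketched with a simple complementary filter that blends a drift-prone but responsive gyro heading with slower, drift-free wheel odometry. This is a minimal illustration with made-up readings; a real AMR would fuse many more inputs (vision-based SLAM poses, GPS, Wi-Fi) with something like an extended Kalman filter.

```python
# Minimal sketch: complementary filter fusing an IMU gyro rate
# (fast but drifts over time) with a wheel-odometry heading
# (noisy but drift-free). All values are hypothetical.

def fuse_heading(odom_heading, gyro_rate, prev_estimate, dt, alpha=0.98):
    """Blend a gyro-integrated heading with odometry using weight alpha."""
    gyro_heading = prev_estimate + gyro_rate * dt  # integrate gyro rate
    return alpha * gyro_heading + (1 - alpha) * odom_heading

# Made-up sample readings: (odometry heading in rad, gyro rate in rad/s)
estimate = 0.0
for odom, gyro in [(0.02, 0.5), (0.05, 0.5), (0.09, 0.5)]:
    estimate = fuse_heading(odom, gyro, estimate, dt=0.1)
```

The weight `alpha` is the design choice: high values trust the gyro over short intervals, while the small odometry term continuously corrects long-term drift.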
- Seeing surroundings in 3D
Once the robot figures out its location, it starts to explore the environment, building a 3D representation of its surroundings. Options for depth sensors include structured light cameras, time-of-flight cameras, stereo cameras, LIDAR, and SONAR.
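For the stereo-camera option above, depth recovery reduces to triangulation once matching pixels are found: depth = focal length × baseline / disparity. The numbers below are hypothetical; a real pipeline rectifies the image pair and computes disparity per pixel first.

```python
# Minimal sketch: depth from a stereo camera pair via triangulation.
# Z = focal_length (px) * baseline (m) / disparity (px).
# Parameter values are illustrative assumptions.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_length_px * baseline_m / disparity_px

# A 40-pixel disparity with a 700 px focal length and a 12 cm baseline:
z = depth_from_disparity(40, focal_length_px=700, baseline_m=0.12)
# 700 * 0.12 / 40 = 2.1 meters
```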
- Creating a 3D map of surroundings
Having collected all the 3D information about the environment, the robot applies additional computation to convert that raw data into a scene representation it can use.
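One common usable representation is a 2D occupancy grid projected from the 3D points, which downstream planners can consume directly. This is a bare-bones sketch; the grid resolution, size, and sample points are assumptions.

```python
# Minimal sketch: projecting 3D obstacle points into a 2D occupancy
# grid (1 = occupied, 0 = free). Resolution and extent are assumptions.

def build_occupancy_grid(points_xyz, resolution=0.5, size=10):
    """Mark the cell under each (x, y, z) point as occupied; height z is ignored."""
    grid = [[0] * size for _ in range(size)]
    for x, y, _z in points_xyz:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid

# Three made-up obstacle points from the depth sensors:
grid = build_occupancy_grid([(1.2, 0.7, 0.3), (1.3, 0.8, 1.1), (4.9, 4.9, 0.0)])
```

Real systems typically use probabilistic occupancy (log-odds updates per sensor reading) rather than a binary flag, so that noisy returns don't permanently mark free space as blocked.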
- Recognizing objects and people
The robot uses on-device ML to recognize objects and people in its surroundings. Its perceptual tasks extend to identifying sounds, such as an elevator notification, and it also tries to identify people in the scene and recognize speech.
- Understanding scenes, activities, and context
With its on-device ML engine, the robot continues trying to understand the scene and activities around it for overall context. Here, it understands that it’s in a hotel lounge with people around, some of whom are ready to leave while others wait for coffee.
- Navigating safely to achieve goals
The robot sets a high-level goal and executes on it. It breaks down the goal into the tasks of scene understanding, path planning, real-time control, and motion estimation, all of which require powerful, on-device AI performance. With perception, sensory AI, and computer vision, the robot can navigate around the room and achieve the goal of offering people any help they may need.
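The path-planning task in the steps above can be sketched as a graph search over an occupancy grid. Breadth-first search is used here for brevity and is an assumption on my part; a production planner would typically run A* with motion costs and account for the robot's footprint.

```python
# Minimal sketch: breadth-first path planning on an occupancy grid.
# 0 = free cell, 1 = obstacle. Grid, start, and goal are illustrative.

from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # visited set + backpointers
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk backpointers to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A toy lounge map: the row of 1s is an obstacle (say, a coffee table).
lounge = [[0, 0, 0],
          [1, 1, 0],
          [0, 0, 0]]
route = plan_path(lounge, (0, 0), (2, 0))
```

BFS guarantees the shortest path in steps on a uniform-cost grid; swapping in A* with a distance heuristic preserves that guarantee while searching far fewer cells on larger maps.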
Watch the “Revolutionizing Robotics with Computer Vision, AI, and Heterogeneous Computing” webinar for more ideas about developing robotics applications at the intelligent edge of the network.
The Qualcomm Robotics RB5 development kit is hardware equipped with computer vision, AI, and heterogeneous computing that you can use in your own intelligent-edge use cases. It supports the software we make available in the Qualcomm AI Stack for developing applications with the most commonly used ML frameworks. Plus, you can extend the kit through mezzanine cards for vision, motor control, sensors, communication, and industrial protocols.
With Qualcomm Robotics RB5 and many other robotics development platforms, you can start building computer vision, AI, and heterogeneous computing into your own innovations. If you are interested in viewing additional Industrial IoT webinars, we have several more you can watch at your convenience.
Qualcomm Robotics RB5 and Qualcomm AI Stack are products of Qualcomm Technologies, Inc. and/or its subsidiaries.