Are you looking for ways to start your company’s robotics development? Or are you already developing robots, and you’re looking for ways to make them smarter?
Either way, I have three words for you: Autonomous Mobile Robots.
Autonomous Mobile Robots (AMRs) are robots in that they augment human effort with machinery. They are autonomous in that they are designed to perform useful work without continuous human intervention. And they are mobile in that they can easily move themselves from one place to another to perform work.
We recently hosted a webinar called Design Considerations for Autonomous Mobile Robots, which takes a developer's perspective on the features common to AMRs. It covers the relationships among the sensors, intelligence and compute engines that go into designing AMRs in general, with a focus on the fast-growing application sector of ecommerce fulfillment.
I’ll highlight some of the main topics covered to give you a taste of what you can learn from watching the 30-minute webinar.
Ecommerce fulfillment AMRs
By 2025, there could be as many as 4 million commercial robots operating in 50,000 warehouses worldwide. They will perform work like receiving, picking, sorting, packaging and simply moving goods autonomously around a warehouse from point A to point B.
Performing that work depends on four primary functions:
- Sensing
- Thinking
- Acting
- Communicating
Of course, you can take those functions for granted with human warehouse workers. But with a robot, you have to build them in with features like sensors, algorithms, computer vision and artificial intelligence, plus the heterogeneous computing to run it all smoothly.
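To make that division of labor concrete, here's a minimal sketch of how the four functions typically map onto an AMR's main control loop. It's Python, and the `robot` and `telemetry_uplink` objects and their methods are hypothetical stand-ins for real driver and networking layers, not any particular SDK:

```python
import time

def amr_control_loop(robot, telemetry_uplink, period_s=0.05):
    """Run the classic sense-think-act-communicate cycle at a fixed rate.

    `robot` and `telemetry_uplink` are hypothetical objects standing in
    for real driver and networking layers.
    """
    while robot.is_running():
        # Sensing: gather raw data from cameras, LIDAR, IMU and encoders.
        readings = robot.read_sensors()

        # Thinking: update the world model and decide what to do next.
        world = robot.update_world_model(readings)
        command = robot.plan(world)

        # Acting: translate the decision into motor commands.
        robot.drive(command)

        # Communicating: report status to the fleet manager over the network.
        telemetry_uplink.send(robot.status())

        time.sleep(period_s)  # hold the control rate steady
```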
The AMR in the image below, commonly used in ecommerce fulfillment, is a case in point.
Breaking autonomy down into tasks
Now think about the tasks an AMR needs to execute and how it will use those features and heterogeneous computing to execute them.
See surroundings in 3D
The AMR uses sensors and cameras to not only perceive nearby objects but also to understand the physical relationships among them. That means handling feeds from components like these:
- Structured light camera — Decodes a projected pattern of pixels on the scene
- Time-of-flight camera — Measures the distance that light travels
- Stereo cameras — Capture multiple pictures from different cameras
- LIDAR — Illuminates a target with a laser and analyzes the reflected light
- SONAR — Emits pulses of sound and listens for echoes
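The distance math behind two of those sensors is compact enough to show inline. A time-of-flight camera recovers depth from the round-trip time of light, and a stereo pair recovers it from the disparity between matched pixels. This plain-Python sketch (illustrative values only) shows both relationships:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s: float) -> float:
    """Time of flight: light travels to the object and back, so halve it."""
    return C * round_trip_s / 2.0

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo: depth = f * B / d, where d is the pixel disparity between
    matching points in the left and right images."""
    return focal_px * baseline_m / disparity_px

# A pulse returning after ~13.3 ns corresponds to about 2 m:
print(tof_depth(13.3e-9))               # ~1.99 m
# A 700 px focal length, 10 cm baseline and 35 px disparity also give 2 m:
print(stereo_depth(700.0, 0.10, 35.0))  # 2.0 m
```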
Create 3D map of surroundings
From the camera feeds, the AMR uses simultaneous localization and mapping (SLAM) to construct a 3D map of its environment as it moves around.
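Full SLAM estimates the robot's pose and the map jointly, which is well beyond a blog snippet, but the mapping half is easy to sketch. Assuming the pose is already known, this illustrative Python marks the grid cells that each range reading reports as occupied:

```python
import math
import numpy as np

def update_occupancy_grid(grid: np.ndarray, pose, ranges, angles,
                          resolution_m: float = 0.05):
    """Mark grid cells hit by range readings (e.g., from LIDAR).

    grid: 2D array of hit counts; pose: (x, y, heading) in meters/radians;
    ranges/angles: beam distances and bearings relative to the robot.
    A real SLAM pipeline would also clear the free cells along each beam
    and refine `pose` at the same time.
    """
    x, y, heading = pose
    for r, a in zip(ranges, angles):
        hx = x + r * math.cos(heading + a)  # beam endpoint in the world frame
        hy = y + r * math.sin(heading + a)
        col = int(hx / resolution_m)
        row = int(hy / resolution_m)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] += 1             # more hits -> more likely occupied

grid = np.zeros((200, 200), dtype=int)      # 10 m x 10 m at 5 cm cells
update_occupancy_grid(grid, (5.0, 5.0, 0.0), [2.0, 2.5], [0.0, 0.3])
```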
Figure out where it is on the map
Next comes localization, in which the AMR determines where it is on the 3D map. It combines motion data from the camera feeds with inertial data from the IMU and odometry from the wheel encoder to better estimate motion and improve the accuracy of localization.
There are two approaches to SLAM:
- Visual SLAM — uses a camera paired with an inertial measurement unit (IMU)
- LIDAR SLAM — uses a laser sensor paired with an IMU; more accurate in one dimension but tends to be more expensive
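To see what the IMU contributes, here's a simplified dead-reckoning step in Python (illustrative only, with idealized kinematics): the wheel encoder supplies distance traveled, the gyro supplies the heading change, and the fused estimate is what SLAM then corrects against camera or LIDAR landmarks:

```python
import math

def dead_reckon(x, y, heading, wheel_distance_m, gyro_yaw_rate, dt):
    """One fused odometry step.

    wheel_distance_m: distance from the wheel encoder over the interval dt;
    gyro_yaw_rate: heading change rate (rad/s) from the IMU's gyro.
    Both drift over time, which is why SLAM corrects this estimate
    against visual or LIDAR landmarks.
    """
    heading += gyro_yaw_rate * dt              # IMU: how much we turned
    x += wheel_distance_m * math.cos(heading)  # encoder: how far we moved
    y += wheel_distance_m * math.sin(heading)
    return x, y, heading

# 0.1 m forward per step while turning gently left:
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(*pose, wheel_distance_m=0.1, gyro_yaw_rate=0.2, dt=0.1)
print(pose)
```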
Note that 5G plays a role in localization. Private 5G networks in warehouses and fulfillment centers can augment the on-board approaches to SLAM.
Navigation
Once the AMR has a map and knows where it is on the map, it can navigate in its environment. Navigation involves:
- Scene understanding — Using depth sensors and machine learning to build a spatial and semantic model of the environment
- Path planning — Finding the optimal path through the environment, satisfying high-level goals while avoiding obstacles (a minimal version is sketched after this list)
- Real-time control — Implementing a motion plan by translating desired speed and direction into motor commands
- Motion estimation — Estimating the change in position on the map. As its location and environment change, the AMR updates the planned path.
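Path planning is the most algorithm-shaped of those steps. As a concrete illustration, and not the planner any particular AMR uses, here's A* search over a small occupancy grid in Python, where 1-cells are obstacles:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means obstacle.
    Returns the path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                  # already expanded via a cheaper path
        came_from[cell] = parent
        if cell == goal:              # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cell))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall of 1s
```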
Navigation also includes adapting to changes in elements of the environment, like people, shelves and walls. AMRs rely on LIDAR to detect those changes, and they use machine learning to refine navigation goals. They can also take advantage of Indoor Precise Positioning, using 5G Transmission Points/Reception Points (TRPs) to plot a grid that gives them centimeter-level accuracy on the x-, y- and z-axes.
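As a rough illustration of how fixed radio anchors support that kind of positioning, trilateration converts measured ranges to known TRP locations into coordinates. This sketch (NumPy, 2D for brevity, made-up anchor coordinates) linearizes the range equations by subtracting the first from the rest, then solves by least squares:

```python
import numpy as np

def trilaterate_2d(anchors, dists):
    """Least-squares position from ranges to known anchor points.

    anchors: (N, 2) array of TRP coordinates; dists: N measured ranges.
    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in (x, y).
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[0] - anchors[1:])   # rows: [2(x0 - xi), 2(y0 - yi)]
    b = (d[1:]**2 - d[0]**2
         - anchors[1:, 0]**2 + x0**2
         - anchors[1:, 1]**2 + y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three hypothetical TRPs at known warehouse coordinates (meters):
trps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(trilaterate_2d(trps, [5.0, 65**0.5, 45**0.5]))  # -> approximately [3. 4.]
```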
Recognize objects and avoid obstacles
AMRs have to recognize and interact with objects and get around obstacles. That means relying heavily on computer vision and artificial intelligence, because the robots are constantly learning to recognize objects. High-performing AMRs execute those functions on the device instead of shuttling data to the cloud and back.
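If you want to experiment with on-device inference yourself, a minimal sketch along these lines runs a detection model locally. It assumes the TensorFlow Lite runtime and a hypothetical detect.tflite model file; it isn't specific to any particular AMR stack:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Hypothetical quantized detection model; substitute your own.
interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# A camera frame would go here; we fake one at the model's input shape.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # inference runs entirely on the device

for out in outs:      # typically boxes, classes, scores, count
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```

Next steps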
Watch our webinar Design Considerations for Autonomous Mobile Robots for more ideas you can innovate around. It’s a good, methodical view of the tasks your AMRs will have to accomplish.
And have a look at the Qualcomm Robotics RB5 AMR reference design. It’s suited to high-compute, low-cost terrestrial robots in applications like manufacturing, mapping, logistics, delivery, health care and retail, and it ships configured for industrial warehouse settings.
Qualcomm Robotics RB5 AMR Reference Design is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.