|Skill Level|Area of Focus|Operating System|Platform/Hardware|Cloud Services/Platform|
|---|---|---|---|---|
|Intermediate|Computer Vision, Healthcare, Robotics|Linux|RB3 Robotics Dev Kit|Amazon AWS IoT|
In this project, the human is represented by a toy figure, and the robot detects whether the person is fine (standing) or in need of assistance (fallen). The robot moves closer to the fallen person, and the project could be extended with Alexa integration so the robot can “speak” with the fallen person and act accordingly.
- The Robot Operating System (ROS) application, which encompasses an alwaysAI computer vision application running on the Qualcomm Robotics RB3 Development Kit, uses a local model trained in Amazon SageMaker to identify humans in video frames, develop situational awareness about the human, and offer assistance as needed.
- The person detection and pose estimation models are pre-trained in Amazon SageMaker and downloaded to the Qualcomm Robotics RB3 platform.
- The alwaysAI application runs object detection and pose estimation on frames received from the camera. The results are used to decide on any needed action and are displayed on a streamer service. The ROS application reads the decision and operates the wheels to move the robot accordingly.
- The robot detects human activity by analyzing the sequence of poses and classifies the human’s situation as safe or in need of help. In the near future, the application will be extended to send a message to an AWS IoT MQTT topic. The ROS application will integrate with the Amazon Kinesis Video Streams ROS extension to capture live video feeds. We plan to use AWS IoT Greengrass to deploy models to the robots, and to integrate Alexa to initiate a conversation with the human in distress, call 911, and so on.
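The detection-to-action flow described above can be sketched in a few lines. This is a minimal illustrative sketch, not the project's actual code: `classify_person` and `wheel_command` are hypothetical names, and the aspect-ratio heuristic stands in for whatever the real pose-based logic does.

```python
def classify_person(box_width, box_height):
    """Classify a detected person as 'standing' or 'fallen'.

    Heuristic assumption for illustration: a fallen person's bounding
    box from the object detector is wider than it is tall.
    """
    return "fallen" if box_width > box_height else "standing"


def wheel_command(decision):
    """Map the decision to a simple (left, right) wheel-speed pair:
    drive forward toward a fallen person, otherwise stay put."""
    return (0.2, 0.2) if decision == "fallen" else (0.0, 0.0)


# A wide detection box is treated as a fallen person.
decision = classify_person(box_width=120, box_height=40)
print(decision, wheel_command(decision))
```

In the real system the decision is produced by the alwaysAI application and consumed by the ROS node that drives the wheels; the point here is only the shape of that hand-off.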
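Classifying the situation from a *sequence* of poses, rather than a single frame, can be smoothed with a sliding window so that one misclassified frame does not trigger a false alarm. The sketch below is an assumption about how such smoothing might work, not the project's implementation; `SituationMonitor` and its parameters are hypothetical.

```python
from collections import Counter, deque


class SituationMonitor:
    """Smooth per-frame pose labels over a sliding window and report
    'needs_help' only when most recent frames agree the person fell."""

    def __init__(self, window=10, threshold=0.7):
        self.labels = deque(maxlen=window)  # most recent frame labels
        self.threshold = threshold          # fraction needed to alarm

    def update(self, label):
        """Add one frame's label and return the current assessment."""
        self.labels.append(label)
        counts = Counter(self.labels)
        if counts["fallen"] / len(self.labels) >= self.threshold:
            return "needs_help"
        return "safe"


mon = SituationMonitor(window=5, threshold=0.6)
for label in ["standing", "fallen", "fallen", "fallen", "fallen"]:
    state = mon.update(label)
print(state)  # "needs_help": 4 of the last 5 frames are "fallen"
```

A window-and-threshold scheme like this trades a little reaction latency for robustness against single-frame detector noise, which matters before escalating to actions such as publishing to an MQTT topic.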
Qualcomm Robotics RB3 is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.