|Skill Level|Area of Focus|Operating System|Cloud Service/Platform|
|---|---|---|---|
|Beginner|Alexa Voice Service, Computer Vision, Education|Linux|Amazon AWS IoT|
AlexaGo mounts Amazon Alexa voice-assistive technology on a mobile robot exoskeleton, allowing it to respond to your voice commands with physical actions: for example, fetching a cup of water or lighting your way down a dark hallway. The backend runs on the DragonBoard™ 410c development platform from Arrow Electronics, which acts as the gateway for the IoT endpoints.
Voice-assistive technologies are changing the way we interact with our world. They have the potential to become our companions and loyal friends in times of need. But because these devices can currently act only through voice, they are fundamentally limited: unable to interact with us where it matters most, in the physical world. We aim to change that with AlexaGo. We designed AlexaGo with the elderly as a specific target group, and the device is configurable for many mobility-based applications.
Build / Assembly Instructions
Materials Required / Parts List / Tools
Source Code / Source Examples / Application Executable
- Project posted on Devpost
- Project website
- DragonBoard 410c Quick Start Guide
- How to Flash the DragonBoard 410c
- Tutorial Videos
Build / Assembly
To build AlexaGo, we first constructed a robotic exoskeleton that allowed for 2D motion and had an attached robot arm. We then programmed an Amazon Alexa to translate our voice commands into physical commands written in the Arduino programming language, which were executed on the robot through an Arduino board. To analyze the surrounding environment, we used Darknet, an open-source deep-learning image-recognition framework.