Deploying robotics applications — Using a Docker container for cross-compilation

Tuesday 8/4/20 09:04am | Posted By Ramya Kanthi Polisetti

Snapdragon and Qualcomm branded products are products of
Qualcomm Technologies, Inc. and/or its subsidiaries.

How are you trying to innovate with the Qualcomm® Robotics RB3 Development Kit? Create a fleet of robots? Deploy applications to multiple robots in the fleet? Keep track of application versions?

The development environment in AWS RoboMaker is designed for compiling and building applications for robots such as the TurtleBot3 Burger. AWS RoboMaker is the next step in your robotics development.

(Note: This article explores the Docker container used for cross-compilation in the learning resource Robotic Application (ROS) Deployments from AWS RoboMaker. Developers who have successfully followed those instructions to deploy the applications can find more details here.)

Overview and objective

Since the environment in AWS RoboMaker is x86, it is necessary to cross-compile to generate executables for ARM64, the architecture of the Qualcomm® Robotics RB3 Development Kit.

AWS RoboMaker uses Docker containers for cross-compilation. It is also possible to use a Docker container to install a board-specific cross-compilation tool chain in the environment of AWS RoboMaker. You can then use that tool chain to build applications inside the container.

The container, built from a Dockerfile on top of an Ubuntu 16.04 ARM64 Docker image, emulates the build environment of the physical robot. The main AWS RoboMaker blog post Deploy Robotic Applications Using AWS RoboMaker assumes an armhf environment (as on the Raspberry Pi 3). The following explanation uses that armhf Dockerfile as a reference for creating an ARM64 Dockerfile.

How the x86 machine runs ARM64 Docker containers

Our goal is to create a cross-compilation Docker container that emulates the environment of the physical robot, including OS, packages, build tools and processor. That way, the OS libraries and shared libraries available inside the Docker container determine the architecture of the built application.

Architecture differences

The following illustrates how the x86 machine runs multi-architecture (ARM64) Docker containers.

On the x86 desktop, you run the file command:

ubuntu:~/environment $ file /usr/bin/vim.basic

/usr/bin/vim.basic: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.32, BuildID[sha1]=1742b79934b3c8c4fe5cf2d5a46df429adec83ad, stripped

On the Qualcomm Robotics RB3, the file command shows this:

root@7950c1bcc37f:/# file /usr/bin/vim.basic
/usr/bin/vim.basic: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-, for GNU/Linux 3.7.0, BuildID[sha1]=a6e5f41d9e036fd390a458463f30c9d02316e655, stripped

Note the architecture field in each output: x86-64 in the first and ARM aarch64 in the second. Each executable is compiled for a different processor architecture using that architecture's instruction set, so it does not run natively on the other. However, that does not necessarily mean there is no way for compiled binaries and executables to run across architectures.
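The architecture tag that file reports comes straight from the ELF header: the two-byte e_machine field at offset 18 identifies the target processor (62 for x86-64, 183 for AArch64). The sketch below reads that field with nothing but od and awk; the forged 20-byte header written with printf is purely illustrative, not a real binary:

```shell
# Read the e_machine field (2 bytes, little-endian, at offset 18) of an ELF file.
# 62 (0x3E) = x86-64, 183 (0xB7) = AArch64.
elf_machine() {
    od -An -j18 -N2 -t u1 "$1" | awk '{ print $1 + 256 * $2 }'
}

# Illustrative only: forge the first 20 bytes of an AArch64 ELF header
# (16 ident bytes, e_type = 2, e_machine = 0xB7) using portable octal escapes.
printf '\177ELF\002\001\001\000\000\000\000\000\000\000\000\000\002\000\267\000' > fake_aarch64
elf_machine fake_aarch64    # prints 183
```

Running the same function against /usr/bin/vim.basic on each machine reproduces the distinction that file reports above.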

User space emulation

Most Linux-based environments allow configuring interpreters for other architectures. This so-called user space emulation means, for example, that an ARM64-on-x86 interpreter executes ARM64 instructions on an x86 host. Every time the interpreter encounters a system call in the guest executable, it translates the call and its parameters to the equivalent local (host) x86 system call.

QEMU is one such popular interpreter. qemu-arm-static is a statically linked binary that runs arm32 executables on x86; qemu-aarch64-static similarly runs ARM64 executables on x86.

binfmt_misc is the Linux kernel facility for registering such interpreters. Installing interpreters through apt usually handles the registration automatically. Executables built for an architecture different from the host's will then run on the host machine.

So, use the following commands to run vim (the executable copied above) on the x86 desktop:

ubuntu:~/environment $ sudo apt install qemu-user-static    # installs qemu-user-static and registers qemu-aarch64-static as the interpreter for ARM64 executables through binfmt_misc

ubuntu:~/environment $ ./vim    # the ARM64 vim copied into the local directory now runs, because the ARM64-on-x86 interpreter is installed and registered
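Under the hood, the apt install above writes a registration into the kernel's binfmt_misc table. A registration is a single colon-delimited line: name, match type, offset, magic bytes (here, the start of an AArch64 ELF header, including e_machine 0xB7), a mask, and the interpreter path. The entry below shows the shape of the registration that the qemu-user-static package typically creates; treat the exact mask bytes as illustrative:

```text
:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:
```

Once registered, cat /proc/sys/fs/binfmt_misc/qemu-aarch64 on the host shows the entry and whether it is enabled.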

That background leads into the Dockerfile and its components.

The Dockerfile

Below is the listing of the ARM64 Dockerfile used in the QDN learning resource Robotic Application (ROS) Deployments from AWS RoboMaker, with explanatory comments added:

ARG ROS_VERSION=kinetic
ARG UBUNTU_VERSION=xenial
# We start at ubuntu-16.04 (xenial) with ROS Kinetic preinstalled as our base image
FROM arm64v8/ros:kinetic-ros-base-xenial

ENV PATH="/root/.local/bin:${PATH}"

# Copy qemu-aarch64-static from the host machine. It is the interpreter for
# everything that runs inside this container. Because the host is x86, an
# ARM64-on-x86 interpreter is needed.
COPY qemu-aarch64-static /usr/bin

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

# Package manager automatically installs xenial ARM64 packages (base image effect)
# Installed for convenience
RUN apt-get update && apt-get install -y vim

# Add raspicam_node sources to apt
RUN apt-get update && apt-get install -y apt-transport-https \
&& echo "deb https://packages.ubiquityrobotics.com/ubuntu/ubiquity xenial main" > /etc/apt/sources.list.d/ubiquity-latest.list \
&& apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key C3032ED8


# Install Python and colcon
RUN apt-get update && apt-get install -y \
python \
python3-apt \
curl \
&& curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& python2 get-pip.py \
&& python3 -m pip install -U colcon-ros-bundle

# Add custom rosdep rules
COPY custom-rosdep-rules/raspicam-node.yaml /etc/ros/rosdep/custom-rules/raspicam-node.yaml
#RUN echo "yaml file:/etc/ros/rosdep/custom-rules/raspicam-node.yaml" > /etc/ros/rosdep/sources.list.d/22-raspicam-node.list \
RUN echo "yaml https://s3-us-west-2.amazonaws.com/rosdep/python.yaml" > /etc/ros/rosdep/sources.list.d/18-aws-python.list \
&& rosdep update

# Add custom pip rules
COPY custom-pip-rules.conf /etc/pip.conf

# Add custom apt sources for bundling
COPY xenial-sources-arm64.yaml /opt/cross/apt-sources.yaml

Based on that Dockerfile, the following command builds the ARM64 Docker image with all necessary packages to build ROS applications:

sudo docker build -t ros-cross-compile:arm64 .

The name of the resulting Docker image is ros-cross-compile and its tag is arm64. Run the command from the folder that contains the Dockerfile (the trailing dot specifies the build context).

It is now possible to launch the container and build for ARM64:

sudo docker run -v $(pwd):/ws -it ros-cross-compile:arm64

Colcon bundling

AWS RoboMaker uses colcon to bundle ROS applications before deployment. The colcon bundle contains everything the bundled application needs to run: dependent packages, environments such as Python, the ROS installation, and other dependencies and packages specific to ROS applications.

Those packages and dependencies are specified through the xenial-sources-arm64.yaml sources file used in the QDN learning resource Robotic Application (ROS) Deployments from AWS RoboMaker. The robot application is intended for the ARM64-based Qualcomm Robotics RB3 development kit, so bundling ARM64 packages is necessary. The corresponding colcon bundle command is:

colcon bundle --apt-sources-list /opt/cross/apt-sources.yaml

Note: xenial-sources-arm64.yaml is copied into the container as apt-sources.yaml. The command above is run inside the Docker container spawned for cross-compilation.

Listing of xenial-sources-arm64.yaml:

# ARM Support

deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ xenial main restricted universe multiverse
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main restricted universe multiverse
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ xenial-backports main restricted
deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ xenial-security main restricted universe multiverse

# ROS
deb [arch=arm64] http://packages.ros.org/ros/ubuntu xenial main

Deployment and validation

The colcon bundle structure is quite simple. All the dependencies, packages and application code, along with installation scripts, are compressed into an output.tar. Commands inside the scripts refer to utilities inside the bundle (i.e., relative, uncompressed folder structures). Once the bundle is transferred (deployed) to the target physical robot, the AWS Greengrass Lambda function unpacks the bundle and executes the installation scripts inside it. AWS Greengrass interprets successful execution of these scripts as a successful deployment.
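That pack-transfer-unpack-run flow can be mocked end to end with plain tar, no ROS required. The file and directory names below (bundle, setup.sh, robot) are illustrative stand-ins, not the actual colcon bundle layout:

```shell
# Mock of the deployment flow: pack an "application bundle" containing an
# install script, then unpack and run it as Greengrass does on the robot.
mkdir -p bundle
printf '#!/bin/sh\necho setup ok\n' > bundle/setup.sh
chmod +x bundle/setup.sh
tar -cf output.tar -C bundle .    # stand-in for the colcon bundle step

mkdir -p robot
tar -xf output.tar -C robot       # stand-in for Greengrass unpacking on the target
sh robot/setup.sh                 # runs the bundled install script; prints "setup ok"
```

The key property the mock illustrates is that everything the scripts reference travels inside the archive, so they run the same way on the robot as in the build container.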

Next steps

As mentioned above, this article has examined the Docker container used for cross-compilation in the learning resource Robotic Application (ROS) Deployments from AWS RoboMaker. You’re now more than ready to use AWS RoboMaker to deploy ROS applications to the Qualcomm Robotics RB3 and drive the TurtleBot3 Burger.

And if you’re still curious, read more about colcon bundles and user space emulation for running executables on different architectures.

Qualcomm Robotics RB3 is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.