Amazon Boto3 Integration on QCS610

Building Python-based apps for Amazon services on TurboX C610 board

Boto3 is the official Python SDK for Amazon Web Services (AWS). The Boto3 library is designed to help developers build Python-based applications that use AWS cloud services, and it includes service-specific features that make development easier. It supports all current AWS cloud services, many of which you can invoke from the TurboX C610, an open-kit target board from Thundercomm powered by the Qualcomm® QCS610.

For more information, including the Quick Start Guide and API Reference, refer to the Boto3 documentation.

Integrate Boto3 into the TurboX C610 target board
To install the Boto3 package on the TurboX C610, you need to build the following python3 packages on the Yocto host system and install them on the TurboX C610 target board:

  • python3-boto3
  • python3-botocore
  • python3-jmespath
  • python3-s3transfer

The bitbake files for these python3 packages are taken from the GitHub repository at https://github.com/intel-iot-devkit/meta-iot-cloud. Put the bitbake files (.inc and .bb files) in the poky/meta-openembedded/meta-python/recipes-devtools/python/ directory.

Follow these steps to build the packages:

  1. Change to the yocto working directory.
    $ cd <yocto working directory>
  2. Run source to set the environment.
    $ source poky/qti-conf/set_bb_env.sh
  3. A pop-up menu opens for available machines; select qcs610-odk, then OK. Another window pops up for distribution; select qti-distro-fullstack-perf, then OK.
  4. Run bitbake to build each package.
    $ bitbake <package-name>

    Once the build is complete, the shared library and include file will be available in ./tmp-glibc/sysroots-components/armv7ahf-neon/<package-name>/usr

  5. Connect the Yocto host system to the TurboX C610 board using a USB-C cable.
  6. For the C/C++ library, push the <package-name> shared library (.so files) to the target board.
    $ cd ./tmp-glibc/sysroots-components/armv7ahf-neon/<package-name>/usr/
    $ adb push lib/ /data/boto3/

    If <package-name> contains any include files, push them into the /usr/include/ directory.

    $ adb push include/* /usr/include/

    For python3 packages (their bitbake files start with python3-), push the files in the lib/python3.5/site-packages directory to /data/boto3/lib/.

    $ adb push lib/python3.5/site-packages/* /data/boto3/lib/

Notes:

  1. For more information, refer to the QCS610/QCS410 Linux Platform Development Kit Quick Start Guide.
  2. Some of the required built-in python3 packages, such as html, dateutils, multiprocessing and concurrent, are not available by default on the TurboX C610 board. You can find any missing python3 libraries in the ./tmp-glibc/sysroots-components/armv7ahf-neon/python3/usr/lib/python3.5/ directory of the Yocto build system. Push the libraries to the /usr/lib/python3.5/ directory of the TurboX C610 board.
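After pushing the packages, you can confirm the installation by importing boto3 on the target board. Here is a minimal check run with python3 on the board; the /data/boto3/lib path matches the push commands above.

import sys
# boto3 and its dependencies were pushed to /data/boto3/lib in the steps above
sys.path.append("/data/boto3/lib")
import boto3
print(boto3.__version__)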

Set the environment on the TurboX C610 to run Python scripts for Boto3 services

  1. Access the TurboX C610 board through adb.
    $ adb root
    $ adb remount
    $ adb shell mount -o remount
  2. Push the config file with the AWS credentials, including aws_access_key_id, aws_secret_access_key, instance information, service ID, and so on (a minimal config.py sketch appears after this procedure).
    $ adb push config.py /data/boto3/
    $ adb shell
  3. Enable Wi-Fi on the target board.
    $ wpa_supplicant -Dnl80211 -iwlan0 -c /etc/misc/wifi/wpa_supplicant.conf -ddddt & dhcpcd wlan0
  4. Export the shared library path to LD_LIBRARY_PATH.
    $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/boto3/lib/
    $ cd /data/boto3/

The TurboX C610 is now ready to execute the sample Python scripts for Boto3 services.
Note: Python version 3.5.6 is pre-installed on TurboX C610.
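The example scripts below import a config module for credentials and resource names, pushed to the board in step 2 above. Here is a minimal sketch of what config.py might contain; all values are placeholders, and the attribute names beyond the credentials and region simply mirror the examples that follow.

# config.py - placeholder values; replace with your own credentials and resources
aws_access_key_id = "YOUR_ACCESS_KEY_ID"
aws_secret_access_key = "YOUR_SECRET_ACCESS_KEY"
region = "us-west-2"

bucket_name = "your-s3-bucket"        # S3 and Rekognition examples
botName = "YourLexBot"                # LexRuntimeService example
botAlias = "YourBotAlias"
userId = "user-1234"
topicname = "your/iot/topic"          # IoT example
imageid = "ami-xxxxxxxxxxxxxxxxx"     # EC2 example
WavUrl = "https://your-bucket.s3.amazonaws.com/audio.wav"  # Transcribe example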

Boto3 services
With the Boto3 Python SDK integrated into the TurboX C610 board, you can invoke and run most Amazon cloud services, including these:

  • Identity and Access Management (IAM) Access Analyzer
  • IAM
  • Simple Storage Service (S3)
  • LexModelBuildingService
  • LexRuntimeService
  • Polly
  • Transcribe
  • Elastic Compute Cloud (EC2)
  • IoT
  • Rekognition

Identity and Access Management (IAM) Access Analyzer
AWS IAM Access Analyzer helps you identify resources in your AWS account, such as S3 buckets or IAM roles, that are shared outside the account. Access Analyzer generates a finding, including details of the access, whenever a resource is shared outside of your AWS account. You can review and monitor the findings to determine whether the access is intended and safe, or unintended and a security risk.

To start using Access Analyzer, you must first create an analyzer following the steps in https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html.

Here is an example:

import sys
import config
sys.path.append("./lib")
import boto3
client = boto3.client('accessanalyzer', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key, region_name = config.region)

You can then call client methods such as apply_archive_rule():

response = client.apply_archive_rule(
 analyzerArn='string',
 clientToken='string',
 ruleName='string'
)
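To review the findings described above, you can also call list_findings on the same client. This is a minimal sketch; the analyzer ARN is a placeholder for the ARN of the analyzer you created.

# List findings for an existing analyzer (the ARN is a placeholder)
response = client.list_findings(
  analyzerArn='arn:aws:access-analyzer:us-west-2:123456789012:analyzer/your-analyzer'
)
for finding in response['findings']:
  print(finding['id'], finding['status'])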

IAM
AWS IAM is a service for securely controlling access to AWS services. Using IAM, you can centrally manage users, permissions and security credentials like access keys.

Using the IAM service and the following methods of the IAM client class, you can create and delete policies and attach and detach role policies:

  • create_policy
  • get_policy
  • attach_role_policy
  • detach_role_policy

Here is an example showing the get_policy method:

import sys
import config
sys.path.append("./lib")
import boto3
iam = boto3.client('iam', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key, region_name = config.region)

# Get a policy
response = iam.get_policy(
 PolicyArn='arn:aws:iam::aws:policy/AWSLambdaExecute'
)
print(response['Policy'])
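The other methods listed above follow the same pattern. Here is a sketch of create_policy and attach_role_policy; the policy document, policy name and role name are placeholders.

import json

# Create a customer-managed policy from a placeholder policy document
policy_document = {
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}]
}
create_response = iam.create_policy(
  PolicyName='ExampleListBucketPolicy',
  PolicyDocument=json.dumps(policy_document)
)

# Attach the new policy to an existing role (the role name is a placeholder)
iam.attach_role_policy(
  RoleName='ExampleRole',
  PolicyArn=create_response['Policy']['Arn']
)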

Simple Storage Service (S3)
S3, the highly available and durable object storage service offered by AWS, is a common way to store videos, images and data. You can combine S3 with other services to build infinitely scalable applications.

To create an S3 bucket, follow the steps in https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html.

Here is an example of uploading a file to an S3 bucket:

import sys
import config
sys.path.append("./lib")
import boto3

s3_client = boto3.client('s3', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key, region_name = config.region)

s3_client.upload_file('image.jpg', config.bucket_name , 'image_upload.jpg')

The local image.jpg file is uploaded to the S3 bucket named in config.bucket_name and stored as image_upload.jpg.
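Downloading works the same way with download_file; the sketch below retrieves the object uploaded above into a local file.

# Download the uploaded object back to a local file
s3_client.download_file(config.bucket_name, 'image_upload.jpg', 'image_download.jpg')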


LexModelBuildingService
Use Amazon Lex to make interactive voice- and text-based conversational interfaces. The service allows you to easily update, create and delete conversational bots.

To create a lex bot, follow the steps in https://alexaworkshop.com/en/custom-skill/1.create-lex.html.

To start using LexModel, first create a lex-models instance, as in this example:

import sys
import config
sys.path.append("./lib")
import boto3
client = boto3.client('lex-models', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key, region_name = config.region)

Then, create a slot_type named enable that contains the slot value turn on:

response = client.put_slot_type(
  name='enable',
  description=' ',
  enumerationValues=[{
    "value": "turn on",
    "synonyms": ["turn on"]
  }],
  # checksum is required only when updating an existing slot type
  valueSelectionStrategy='ORIGINAL_VALUE')
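To confirm that the slot type was created, you can read it back with get_slot_type; the $LATEST version label refers to the most recent definition.

# Read back the slot type created above
response = client.get_slot_type(
  name='enable',
  version='$LATEST'
)
print(response['name'], response['enumerationValues'])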

LexRuntimeService
Amazon Lex provides both build and runtime endpoints. Each endpoint provides a set of APIs to carry out the required operations. The runtime API is used by the conversational bot to understand user utterances (user input text or voice).

To create a lex bot, follow the steps in https://alexaworkshop.com/en/custom-skill/1.create-lex.html.

To start using LexRuntime, first create a lex-runtime instance, as in this example:

import sys
import config
sys.path.append("./lib")
import boto3
lex_client = boto3.client('lex-runtime', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key , region_name= config.region)

def post_text(text, request_attributes=None, session_attributes=None):
  request_attributes = request_attributes or {}
  session_attributes = session_attributes or {}
  response = lex_client.post_text(
    botName=config.botName,
    botAlias=config.botAlias,
    userId=config.userId,
    sessionAttributes=session_attributes,
    requestAttributes=request_attributes,
    inputText=text
  )
  return response

response = post_text("capture the video")
if(response['message'] == "record video"):
  print("recording the video")

Polly
Amazon Polly provides the option to easily synthesize speech from text. It also provides APIs to produce high-quality speech from plain text and Speech Synthesis Markup Language (SSML). Polly also manages pronunciation lexicons and intonation that best approximate human speech.

To start using Polly, first create a polly instance, as in this example:

import sys
import config
sys.path.append("./lib")
import boto3
client = boto3.client('polly', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key, region_name = config.region)

response = client.synthesize_speech(
  LexiconNames = [],
  OutputFormat = "pcm",
  SampleRate = "8000",
  Text = "hello how are you",
  TextType = "text",
  VoiceId = "Joanna",
  LanguageCode = "en-US",
)

newfile = open('output.mp3', 'wb')
newfile.write(response['AudioStream'].read())
newfile.close()

The output will be stored in output.mp3.
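Polly also accepts Speech Synthesis Markup Language (SSML), as mentioned above. The sketch below wraps the text in SSML tags and sets TextType accordingly.

# Synthesize speech from SSML instead of plain text
response = client.synthesize_speech(
  OutputFormat="mp3",
  Text="<speak>Hello, <break time='300ms'/> how are you?</speak>",
  TextType="ssml",
  VoiceId="Joanna",
  LanguageCode="en-US"
)
with open('output_ssml.mp3', 'wb') as newfile:
  newfile.write(response['AudioStream'].read())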


Transcribe
Amazon Transcribe is an automatic speech recognition (ASR) service based on deep learning that converts speech to text. Applications of Amazon Transcribe include transcription of customer service calls, automated subtitling and generation of metadata for media assets to simplify search.

To create a transcription job, follow the steps in https://docs.aws.amazon.com/transcribe/latest/dg/getting-started-asc-console.html.

To start using Transcribe, first create a transcribe instance, as in this example:

import sys
import config
sys.path.append("./lib")
import time
import boto3
transcribe = boto3.client('transcribe', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key , region_name= config.region)

job_name = "job name"
job_uri = "config.WavUrl"
transcribe.start_transcription_job(
  TranscriptionJobName=job_name,
  Media={'MediaFileUri': job_uri},
  MediaFormat='wav',
  LanguageCode='en-US'
)
while True:
  status = transcribe.get_transcription_job(TranscriptionJobName=job_name)
  if status['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
    break
  time.sleep(5)
print(status)
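Once the job status is COMPLETED, the transcript is available at the URI returned in the response. The sketch below fetches that JSON with urllib and prints the recognized text.

import json
import urllib.request

# Fetch the transcript JSON once the job has completed successfully
if status['TranscriptionJob']['TranscriptionJobStatus'] == 'COMPLETED':
  transcript_uri = status['TranscriptionJob']['Transcript']['TranscriptFileUri']
  with urllib.request.urlopen(transcript_uri) as resp:
    transcript = json.loads(resp.read().decode('utf-8'))
  print(transcript['results']['transcripts'][0]['transcript'])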

Elastic Compute Cloud (EC2)
Use EC2 for efficient and scalable computation in the AWS cloud. You can manage storage, configure security and networking, and launch as many virtual servers as needed.

To launch an EC2 instance, follow the steps in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

To start using EC2, first create an ec2 client, as in this example:

import sys
import config
sys.path.append("./lib")
import boto3
client = boto3.client('ec2', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key , region_name= config.region)

response = client.describe_instances()
print(response)

That step prints all instances with their current state. To launch instances, use the script below:

status = client.run_instances(InstanceType='t2.micro',
  MaxCount=1,
  MinCount=1,
  ImageId=config.imageid)

print(status)
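When the instance is no longer needed, you can stop it by instance ID. The sketch below takes the ID from the run_instances response above.

# Stop the instance launched above using its instance ID
instance_id = status['Instances'][0]['InstanceId']
stop_response = client.stop_instances(InstanceIds=[instance_id])
print(stop_response['StoppingInstances'])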

IoT
The IoT service helps in connecting Internet of Things devices easily and securely to cloud applications and devices.

To set up AWS IoT, follow the steps in https://docs.aws.amazon.com/iot/latest/developerguide/iot-quick-start.html. (In Step 2, "Create a thing object," choose Python as the AWS IoT Device SDK.)

The code snippet below uses Boto3 to send messages to an MQTT topic in AWS IoT. The application posts to a topic named after the thing name, which you can find in the AWS IoT console.

import sys
import config
sys.path.append("./lib")
import boto3
import json
client = boto3.client('iot-data', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key , region_name= config.region)

response = client.publish(
  topic = config.topicname,
  qos = 1,
  payload = json.dumps({"value":"true"})
  )
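The same iot-data client can also read a device shadow. This is a minimal sketch; the thing name is a placeholder for the thing name shown in the AWS IoT console.

import json

# Read the device shadow for a thing (the thing name is a placeholder)
shadow = client.get_thing_shadow(thingName='your-thing-name')
print(json.loads(shadow['payload'].read().decode('utf-8')))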

Rekognition
Amazon Rekognition offers computer vision technology for image and video analysis. The service is highly scalable and requires no knowledge of machine learning. It provides APIs that use deep learning models to find faces and objects of interest and then analyze them. See https://docs.aws.amazon.com/rekognition/latest/dg/detect-labels-console.html.

To start using Rekognition, first create a rekognition instance, as in this example:

import sys
import config
sys.path.append("./lib")
import boto3
import json
client = boto3.client('rekognition', aws_access_key_id = config.aws_access_key_id, aws_secret_access_key = config.aws_secret_access_key , region_name= config.region)

Bucket_Name = config.bucket_name
Sample = "Image.jpg"

response = client.detect_labels(
  Image={
   "S3Object": {
    "Bucket": Bucket_Name,
    "Name": Sample,
   }
  },
  MaxLabels=10,
  MinConfidence=85,
)
print(response['Labels'])
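Face analysis follows the same pattern. The sketch below calls detect_faces on the same S3 object and prints the detected age range and emotions for each face.

# Detect faces in the same S3 object and request all facial attributes
response = client.detect_faces(
  Image={
    "S3Object": {
      "Bucket": Bucket_Name,
      "Name": Sample,
    }
  },
  Attributes=['ALL'],
)
for face in response['FaceDetails']:
  print(face['AgeRange'], face['Emotions'])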

Qualcomm QCS610 is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.