Distributed Computing Across the Edge and the Cloud

Wednesday 11/2/22 09:33am
Posted By Ramya Kanthi Polisetti

Qualcomm products mentioned within this post are offered by
Qualcomm Technologies, Inc. and/or its subsidiaries.

Today, edge devices are everywhere, with tens of billions more expected to be deployed in the coming years. They are used for everything from measuring temperatures and tracking assets to controlling production processes across many verticals, including IoT, robotics, automotive, and healthcare. As these IoT devices acquire more data, that data is becoming increasingly important to organizations. However, the exponential growth of devices and the data they produce brings new challenges around deploying, managing, and securing these assets.

The name of the game in today’s device deployments is distributed computing, which is really a mix of edge compute and cloud services. To understand how to make the most of this model, let’s start by re-examining what exactly edge and cloud mean, and then take a closer look at some of their key attributes, benefits, and considerations.

Edge Computing
Edge computing generally refers to bringing compute and data storage closer to the sources of data, with the goal of improving response times and reducing bandwidth usage. Edge computing infrastructure often communicates across an organization’s private core network, and devices may physically exist on or across the organization’s premises. Edge deployments can also make use of public infrastructure like 5G cellular networks.

Edge devices are front and center in edge computing, and are generally built for some productive purpose like measurement, assembly, etc. Nowadays, many edge devices have onboard compute capacity, meaning they can perform compute workloads right at the source where the work is performed or the data is collected. Mobile SoCs like our Snapdragon mobile platforms have brought powerful yet efficient compute right to the edge to run advanced processes like machine learning (ML) and signal processing.

Edge servers complement edge devices by performing IT workloads at or close to the edge, generally to reduce latency. Edge server hardware takes various forms, including racks of server blades, industrial PCs, and more. Edge servers are often dedicated to focused tasks such as monitoring data collected from edge devices.

Further complementing this is the network edge where devices and the local network interface with the Internet through fixed connections and/or 5G. This is sometimes blurred with multi-access edge computing (MEC) which moves some of the tasks traditionally performed in the cloud to the edge of the network.

The Cloud
Cloud servers are generally used for more generic tasks at scale like analytics and non-real-time inference.

There are several popular cloud technologies to be aware of, including containerization, virtualization, microservices (i.e., partitioning an application into independent services), orchestration through frameworks like Kubernetes, and REST APIs. Together with today’s high-speed connectivity, these technologies have helped push traditional on-premises data centers toward cloud-based services offering on-demand capability, resiliency, and scalability. Collectively, these technologies are abstracted into what is now referred to as the cloud, of which there are several variations.

A public cloud comprises publicly accessible services from providers such as AWS, Azure, etc., where the same computing resources are shared across multiple customers. Generally, cloud service providers own the infrastructure and sell their services using subscription or pay-per-use pricing models. In return, they provide good scalability and can lower the cost of entry, since developers don’t have to buy a whole data center up front. They also tend to employ the latest technology.

A private cloud consists of services and resources available only to select users. It can be hosted on premises (e.g., on private networks) or on a cloud provider’s infrastructure as a virtual private cloud that allocates isolated resources to customers. A private cloud can scale quickly as demands change using technologies like virtualization, management software, and automation, and can provide greater control and customization of hardware and software choices. When implemented on private, on-premises networks, security and access control can be handled behind the customer’s firewall, and an organization can enforce its own regulatory compliance. Of course, a private on-premises setup also comes with the higher costs associated with buying, installing, and maintaining hardware and software, much like a traditional data center.

A hybrid cloud combines public and private infrastructure. Here, customers can choose the optimal environment for each application and workload. As demands or business needs change, they can move workloads between private and public services. For example, they can keep sensitive data on premises while using public cloud services for tasks like additional storage.
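
That hybrid trade-off can be sketched as a simple placement policy: workloads handling sensitive data never leave the private side, latency-critical jobs stay close, and everything else bursts to the public cloud. This is a minimal illustration, not a prescribed method; the workload tags and tier names are hypothetical.

```python
def place_workload(workload):
    """Choose a hybrid-cloud placement for a workload.

    Hypothetical policy: sensitive data stays in the private cloud,
    latency-critical jobs stay close by, and the rest can run on
    elastic public-cloud capacity.
    """
    if workload.get("sensitive"):
        return "private"                       # data never leaves the premises
    if workload.get("latency_ms", float("inf")) < 50:
        return "private"                       # keep low-latency work nearby
    return "public"                            # elastic capacity for the rest

# Hypothetical workload descriptions
jobs = [
    {"name": "patient-records", "sensitive": True},
    {"name": "video-archive", "sensitive": False},
    {"name": "control-loop", "sensitive": False, "latency_ms": 10},
]
placements = {j["name"]: place_workload(j) for j in jobs}
```

A real policy engine would weigh cost, compliance zones, and current capacity, but the shape of the decision stays the same.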

Considerations for Distributing Processing Across the Edge and Cloud
When combined, edge and cloud offer a good deal of flexibility, but also require some planning. Below are a few considerations when deciding on where to deploy your next compute workload.

Consider choosing edge processing when:

  • workloads are compute-intensive or real-time, especially when latency must be minimized
  • large amounts of collected data would require too much bandwidth to send to the cloud; instead, filter that data at the edge and send only the necessary data to the cloud
  • devices are in remote locations or lack reliable Internet access
  • sensitive data or strict data laws require data to be kept at or near the edge
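
The bandwidth point can be made concrete: rather than streaming every raw sample to the cloud, an edge device can summarize a window of readings locally and forward only the summary plus any out-of-range values. This is a minimal sketch with hypothetical thresholds and field names, not production telemetry code.

```python
from statistics import mean

def summarize_window(samples, low=10.0, high=35.0):
    """Reduce a window of raw temperature samples to a compact payload.

    Only summary statistics and out-of-range readings go to the cloud,
    cutting bandwidth compared with streaming every raw sample.
    Thresholds and field names are hypothetical.
    """
    anomalies = [s for s in samples if s < low or s > high]
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "min": min(samples),
        "max": max(samples),
        "anomalies": anomalies,  # full detail only for unusual readings
    }

# Six raw samples collapse into one small payload
payload = summarize_window([21.5, 22.0, 21.8, 40.2, 22.1, 21.9])
```

In practice the window size and thresholds would be tuned per deployment, and the payload would be sent over whatever transport (MQTT, HTTPS, etc.) the system already uses.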

Consider choosing cloud processing when:

  • processes are non-real-time, such as analytics or deploying software updates
  • hardware has reliable Internet connectivity
  • workloads are dynamic (e.g., new services, virtual machines, etc. must be spun up as demands change)
  • limited budgets or limited data center expertise prevent investment in on-premises server hardware; in this case, consider a public cloud or virtual private cloud to limit upfront costs and take advantage of the provider’s latest hardware
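
On the cloud side, a typical non-real-time job is aggregating the per-device summaries uploaded from the edge into a fleet-wide report. The sketch below assumes the hypothetical summary payloads described above; the site and device names are invented for illustration.

```python
from collections import defaultdict

def fleet_report(summaries):
    """Aggregate per-device edge summaries into a fleet-wide view.

    This kind of batch analytics tolerates latency and benefits from
    elastic compute, making it a good fit for cloud processing.
    The payload fields are hypothetical.
    """
    per_site = defaultdict(lambda: {"devices": 0, "anomalies": 0})
    for s in summaries:
        site = per_site[s["site"]]
        site["devices"] += 1
        site["anomalies"] += len(s["anomalies"])
    return dict(per_site)

# Hypothetical uploads from three edge devices at two sites
summaries = [
    {"site": "plant-a", "device": "dev-01", "anomalies": [40.2]},
    {"site": "plant-a", "device": "dev-02", "anomalies": []},
    {"site": "plant-b", "device": "dev-03", "anomalies": [41.0, 39.8]},
]
report = fleet_report(summaries)
```

A real pipeline would read these payloads from object storage or a message queue and could scale out across workers as the fleet grows, which is exactly the elasticity the cloud provides.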

Technology for the Edge and Cloud
Qualcomm Technologies, Inc. offers a range of devices that stand ready for distributed architecture deployments across several verticals. Below are just a few examples.

For IoT, our Qualcomm QCS610 SoC allows you to run AI at the edge, while Wi-Fi and Bluetooth provide connectivity to your edge or cloud server. Check out the projects on QDN that use the QCS610.

For robotics, our Qualcomm Robotics RB5 Platform and Qualcomm Robotics RB6 Platform offer optional 5G edge connectivity and our powerful Snapdragon SoC for on-device AI. Check out our Qualcomm Robotics RB5 Development Kit with Alexa skills project that shows edge-cloud interaction using Alexa Skills and Alexa Voice Services to control a robot.

On the cloud side, our Qualcomm Cloud AI 100 is a server blade that incorporates Snapdragon technology for datacenter-based inference. With the option to install and run multiple blades in parallel, the Qualcomm Cloud AI 100 can be used for cloud or edge servers to provide fast yet energy-efficient inference.

For more solutions, check out our hardware and dev kits on QDN, along with their software and SDKs, and other Projects on QDN.

Also be sure to check out some of our recent blogs on QDN.

Snapdragon, Qualcomm Robotics RB5, Qualcomm Robotics RB6, Qualcomm QCS610, and Qualcomm Cloud AI are products of Qualcomm Technologies, Inc. and/or its subsidiaries.