Overcoming Edge Kubernetes Challenges with the Akuity Platform


Edge computing involves placing your workload as close to the user as necessary, but no closer. It originally meant keeping compute close to the source of data to avoid transmission over the unreliable and slow networks commonly found at edge locations. If you're new to the space, your first thought when you read “edge computing” may be IoT, but the industry is much larger than that.

With the widespread adoption of cloud platforms, edge computing is becoming an increasingly common pattern for industries with many points of presence, like retail, factories, and stadiums. These isolated locations evolve into endpoints in an interconnected platform for collecting data and optimizing customer experience.

Diagram of cloud connecting to edge locations like retail stores, factories, and stadiums

Challenges with Kubernetes on the Edge

Organizations that use Kubernetes in their cloud platforms are inclined to create the same cloud-native experience in their edge locations to maintain continuity. Many of the benefits of Kubernetes in the cloud are also advantageous on the edge. Putting Kubernetes on the edge, however, introduces a new set of challenges:

  • Application delivery: operating a controller for each edge cluster adds operational complexity, and a central controller is nearly impossible due to the networking requirements.
  • Bandwidth constraints: internet connection may be slow and shared with other services like point-of-sale systems.
  • Network access: the external IP address and connection may change, which makes it difficult to maintain inbound connections and firewall allow-lists. This, in turn, limits remote access to the cluster.
  • Network stability: unlike in the cloud, the internet connection and latency are unstable, leading to dropped packets, lost connections, and extended WAN-down scenarios.
  • Security: edge clusters often exist alongside credit card processing services and with nearly unrestricted physical access, making their security even more crucial.
  • Space constraints: there is not enough space for a traditional server rack. Lower-powered devices tend to be used, making compute resources scarce.

Argo CD is a popular GitOps controller for Kubernetes and is commonly considered for continuous delivery to edge clusters. However, with open-source Argo CD installations, you have two choices for structuring your deployments, each with challenges and compromises.

  1. Argo CD per cluster will achieve independence from a central control plane and overcome several network constraints (like stability and access). However, you are left with no visibility of applications across locations. Argo CD is known for being a fantastic observability tool, but its potential is limited when you have to navigate many different instances. Not to mention the operational burden of managing a separate instance with its own configuration and tuning for each location. Plus, you must configure remote access to each site to interact with Argo CD outside of Git (or, even worse, write your own controller to establish access).

  2. A central Argo CD instance (hosted in the cloud) connected to each edge location will provide that central view across clusters. It allows you to interact with resources on the edge clusters through Argo CD, but at the cost of security and reliability. This central instance needs direct access to the Kubernetes API server at each location and must hold admin credentials for each cluster (see the sketch below this list). Maintaining this access can be exceptionally difficult in edge locations where network connectivity is unstable and may change without notice. In addition, the most bandwidth-intensive part of running Argo CD is monitoring all of the Kubernetes events so that the application controller can react to changes in the live state; in this architecture, all of those events must be streamed from the edge to the cloud.
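For context, registering an external cluster with a central open-source Argo CD instance boils down to a Secret like the one below (standard Argo CD behavior; the cluster name, address, and credentials are placeholders). Every edge location needs an entry like this, which means an API server endpoint reachable from the cloud and admin-level credentials stored centrally:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: store-0042-cluster              # hypothetical edge cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: store-0042
  server: https://203.0.113.10:6443     # edge API server must be reachable from the cloud
  config: |
    {
      "bearerToken": "<admin service-account token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded cluster CA>"
      }
    }
```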

A central Argo CD instance is generally preferred due to the reduced operational burden for platform teams and the improved experience for end-users. Still, edge computing adds significant complexity to this architecture. If you need an in-depth breakdown of the different Argo CD architectures, check out our blog post How Many Do You Need? (updated for 2024).

What's The Solution?

The Akuity Platform was designed by the creators of the Argo Project to solve these challenges with an agent-based architecture for Argo CD. The Akuity Platform delivers improved performance, reliability, and security for continuous delivery to Kubernetes. It provides the advantages of both the per-cluster and central-instance models without the compromises, and it's exceptionally well suited to Kubernetes on the edge. Our customers use this architecture at scale for edge use cases in the aerospace, retail, and major league sports industries.

Diagram of edge locations like retail stores, factories, and stadiums with Kubernetes clusters connecting to the Akuity Platform

Using the Akuity Platform, clusters at edge locations are bootstrapped by deploying the Akuity Agent to each cluster. Once running in the cluster, the agent establishes an outbound connection to a central Argo CD instance on the Akuity Platform. This part is key; the agent connects from the edge location to the central control plane, overcoming the network access constraints. So, concerns about maintaining an inbound connection to the edge cluster are alleviated. Even better, through the agent's connection to the Akuity Platform, you can interact with the resources in the cluster, view logs from pods, and even exec into containers from the central Argo CD UI.

Diagram of agent establishing an outbound connection to Akuity Platform

The agent is responsible for pulling the desired state from Git and reconciling that with the cluster's live state. For users familiar with Argo CD's components, the agent in each edge cluster includes an application controller and (optionally, I'll get to this in a moment) a repo server. Thanks to this model, the bandwidth-intensive monitoring of the Kubernetes events happens over the local cluster network (similar to the per-cluster model). The bandwidth usage is significantly reduced compared to the centralized Argo CD model. Only the small amount of data required to populate the Argo CD dashboard is streamed from the edge location to the cloud.

Diagram of agent connecting locally to k8s, and sending minimal traffic to cloud

During normal operation, the agent pulls the Application configuration for the cluster it's running in from the Akuity Platform, enabling platform teams to manage Applications from a centralized instance. Unlike in the per-cluster model, you can use the cluster generator for ApplicationSets to automatically deploy services to edge locations as soon as the Akuity Agent is deployed. If connectivity to the Akuity Platform is lost, the State Replication feature lets the agent continue to operate locally (pulling the desired state and reconciling the live state).
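As a rough sketch (the repository URL, labels, and application names below are placeholders, not part of the platform), a cluster-generator ApplicationSet that rolls a point-of-sale service out to every registered edge cluster could look like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: point-of-sale
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            location-type: edge          # hypothetical label applied to edge clusters
  template:
    metadata:
      name: 'point-of-sale-{{name}}'     # one Application per matching cluster
    spec:
      project: default
      source:
        repoURL: https://github.com/example/edge-apps.git   # placeholder repository
        targetRevision: main
        path: point-of-sale
      destination:
        server: '{{server}}'             # filled in by the cluster generator
        namespace: point-of-sale
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

When a new edge cluster is bootstrapped with the agent and labeled accordingly, the generator creates an Application for it without any additional manual steps.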

Diagram of agent lost connection to Akuity Platform but still pulling from Git and updating K8s

Some organizations adopting GitOps for edge clusters operate a private Git server and don't want to manage access to it from every location. A unique advantage of the Akuity Platform's agent-based architecture is that the Argo CD repo server can be delegated to a specific cluster, and all other clusters connected to the instance can take advantage of it. The repo server can be placed in the same cluster as the private Git server to establish a local connection, and application controllers in remote edge clusters can utilize it.
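For reference, the repo server reaches a private Git server using a standard Argo CD repository credential like the one below (the URL and credentials are placeholders). With the delegate model, only the cluster hosting the repo server needs network reachability to this Git server; the edge clusters never connect to it directly:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: internal-gitops-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
type: Opaque
stringData:
  type: git
  url: https://git.internal.example.com/platform/edge-apps.git   # placeholder private Git server
  username: argocd                                                # placeholder credentials
  password: <personal-access-token>
```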

Diagram of git delegate in one cluster connected to git server, and other clusters with agent

Given the space constraints of edge locations, clusters tend to have fewer resources, and workloads need to be more efficient. Unlike the per-cluster model, where each cluster gets a full-blown Argo CD installation, the Akuity Agent contains only the components necessary for state reconciliation, while the remaining components run on the central platform. The agent is also sized to the cluster's workload, reducing resource usage on constrained servers.

The Akuity Platform offers an innovative solution for managing continuous delivery to edge clusters, addressing the significant challenges posed by network instability, bandwidth constraints, and security concerns. Using an agent-based architecture, the Akuity Platform balances the advantages of centralized and per-cluster Argo CD deployments. This approach ensures reliable, efficient, and secure continuous delivery to Kubernetes clusters at edge locations, empowering industries such as automotive, manufacturing, and restaurants to optimize their operations. With the Akuity Platform, organizations can achieve seamless GitOps workflows and maintain robust application observability and management across diverse and distributed environments.

Try it Out

To try the Akuity Platform, start your free trial today. In minutes, you can have a fully managed instance of Argo CD without any concern for the underlying infrastructure.

If you want to learn how to manage the deployment of Helm charts in a declarative fashion using Argo CD and GitHub, take a look at our hands-on tutorial.

Help and Support

If you want insights on where to start with Akuity or Argo CD, please reach out to me (Nicholas Morey) on LinkedIn. You can also find me on the CNCF Slack in the #argo-* channels, and don't hesitate to send me a direct message.

You can also schedule a technical demo with our team or go through the “Getting Started” guide in the Akuity docs.

We encourage you to check out our free GitOps and Continuous Delivery course. Developed by the founders of the Argo Project, this course offers hands-on experience in implementing these practices with Argo CD.
