We are excited to announce the general availability of the Akuity Platform, offering fully-managed Argo CD as a hosted service. With the Akuity Platform, it’s never been easier to get a production-grade, highly available Argo CD instance, which you can create and manage in minutes.
While many people love Argo CD, and we frequently get feedback on how it “just works,” we have heard from users who could do without the burden of self-hosting and managing it. Adding to that, Argo has been consistently one of the most active projects in the CNCF, and keeping up with all its new features, patches, and updates can be challenging. The Akuity Platform manages the provisioning, scaling, upgrades, and security patching of Argo CD, eliminating the need for customers to do it themselves.
Having built Argo CD, supported its users, and operated countless instances in production over the past four years, we know from experience that it can take dedicated engineering resources to operate and fine-tune as your Kubernetes usage grows. Here are some of the challenges you may eventually encounter:
Over time, as Argo CD is tasked to manage more and more clusters, it eventually reaches a limit to how much the controller can handle. All of Argo CD’s resource monitoring/processing happens centrally in the controller, putting a large processing pressure on a single component. After reaching a certain scale, operators will eventually need to tune this controller to handle the additional load, employing various techniques, such as sharding and tuning parameters (which we’ve covered in the past).
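As a minimal sketch of what this tuning looks like in practice: Argo CD supports sharding by scaling the application controller StatefulSet and telling it how many replicas exist via the `ARGOCD_CONTROLLER_REPLICAS` environment variable. The fragment below is illustrative only; the replica count is an example value, and the manifest shows just the fields relevant to sharding, not a complete StatefulSet.

```yaml
# Illustrative fragment: sharding the Argo CD application controller.
# Assumes the stock argocd-application-controller StatefulSet in the argocd namespace.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
  namespace: argocd
spec:
  replicas: 3                # each replica processes a shard of the managed clusters
  template:
    spec:
      containers:
        - name: argocd-application-controller
          env:
            - name: ARGOCD_CONTROLLER_REPLICAS
              value: "3"     # must match spec.replicas so shards are assigned correctly
```

Even with sharding in place, operators still have to watch queue depths and reconciliation times and adjust processor counts as the cluster fleet grows, which is exactly the kind of ongoing tuning work described above.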
If you are using Argo CD to manage remote clusters, you know that Argo CD requires public access to the Kubernetes API server. But for security-restricted environments or ones with prohibitive networking setups, this requirement is often a blocker to adoption. This often results in Argo CD operators running an Argo CD instance per cluster, which only increases the operational burden.
One overlooked aspect of self-hosting Argo CD is that Argo CD itself needs an environment to run in. As a deployment tool, Argo CD is likely one of your most privileged pieces of software, having write access to a large part of your infrastructure. Hence, an often recommended security practice is running Argo CD in a separate Kubernetes cluster isolated from your application workloads. But this is just one more cluster to manage, secure, and pay for.
One of the most popular features of Argo CD is its real-time visualization of cluster resources. However, presenting this information in real time comes with a hidden cost. To power this UI, all Kubernetes resource activity and pod churn are streamed back to the remote control plane to be processed. Even worse is that much of this data is irrelevant and discarded, leading to unnecessary traffic and bandwidth consumption. In the past, we’ve spent thousands of dollars on Kubernetes API egress traffic alone.
When embarking on this initiative, we realized we could do much more than simply offer a hosted Argo CD. We had the opportunity to make fundamental improvements to the architecture to address these operational challenges that Argo CD admins faced over time.
One of the unique innovations of the Akuity Platform is that it separates Argo CD’s data plane from the control plane. By this, we mean that each cluster runs its own agent and controller. This provides numerous benefits, but in short, it allows most of the heavy lifting to happen closer to its source.
Higher scalability is achieved by allowing the work of the controller to be distributed and delegated to individual clusters. Sharding and tuning have become things of the past. With our agent-based approach, resource processing happens in-cluster instead of over the network. Only the relevant pieces of metadata presented in the Argo CD UI are sent over the network. In our real-world testing, this has been shown to reduce traffic consumption by as much as 80%. We think this new approach will enable even more use cases for Argo CD, allowing for clusters to be managed over even unreliable and bandwidth-constricted edge environments.
On the security front, the Akuity Platform allows you to run more secure clusters, while simultaneously simplifying the networking requirements. Instead of opening your Kubernetes API server to the world, your managed cluster only needs the ability to make an outbound connection to the control plane. This feature allows Argo CD to manage private Kubernetes API endpoints, something not possible before.
But we weren't satisfied there. We wanted to enhance the Argo CD experience by providing observability into user and development activity. In the Akuity Platform, application activity is persisted long-term, allowing you to see historical data about your deployment trends and development velocity. An audit record of configuration changes is preserved in the event a significant incident needs to be traced back to its source. And in the future we’ll be providing even richer analytics and metrics to deliver deeper insights about your clusters.
Through our conversations with users, we’ve observed that many weren’t using Argo CD to its fullest potential. The Argo ecosystem is rich with plugins, extensions, and labs/community projects. Still, much of this additional functionality is hidden behind documentation or requires Argo CD know-how to configure and enable. The Akuity Platform solves this by making it dead simple to configure and enable these addons, with the support and backing of the creators of the Argo project itself.
We want the Akuity Platform to become more than “just another Argo CD as a managed service” offering. The unique architecture of the platform not only simplifies using Argo, but also provides a solid foundation that powers dozens of important DevOps use cases. Expect more DevOps tooling innovation coming from our team, both as features of the platform and as new open source initiatives.
We hope you join us on this journey 🚀