
Knowledge Hard Won

A Series on Multi-cluster Management with Kubernetes

Matt Anderson, Director, Delivery Engineering

Filling in the gaps

Around 2010 I was part of a company going through a series of acquisitions, which meant that on any given day I was involved in a passive-aggressive knife fight over what to do with our infrastructure. Public clouds were ramping up in popularity, directly intersecting with our needs, challenging existing competencies, and surfacing all the expected fears. The end result was a migration of our infrastructure to AWS. While their platform was significantly more advanced than what we were doing at the time, it didn't come without its feature gaps.

Recently, Packet co-founder and Equinix Metal marketing despot Jacob Smith was able to wear me down enough to start writing about some of the things we're doing with Kubernetes. Thinking about what might be interesting but not already beaten to death, the parts of our architecture built for multi-cluster management jumped out at me. I'm still struck by how many aspects of that first public-cloud adoption required "filling in," and in many ways, not much has changed.

New platforms are both fun and terrible. A mentor of mine used the phrase "knowledge hard-won" (and has his own blog, if you want to check it out), which perfectly sums up the myriad of dark debt and dangerous corners waiting for you when implementing a resilient platform at scale. Building up a foundation of knowledge (through both successes and failures) is part of the process.

Multi-Cluster, Shmulti-Cluster. What’s So Special About Us?

Before everyone dusts off their pitchforks or blows up my inbox with links to kubefed or KubeSphere, I'll start by explaining why such projects don't work for our specific use cases.

When I joined Equinix Metal, the team made it crystal clear that we'd be aggressively expanding into new data centers. When thinking about scale, our Managing Director Zac likes to say "millions of things in thousands of places."

We need to both build and maintain a home for centrally run assets (e.g., our API and data stores) and run at least one cluster per facility we're in. The icing on the figurative cake is the variability of the networking (or lack thereof) we may have with which to do all this work.

Moreover, establishing our own PXE stack, cluster bootstrapping, and continuous deployment tooling was a prerequisite for Engineering and Ops to do their work, so I'd effectively be the first tenant in our facilities: no niceties there. (For more on this, check out Kelsey's blog on turning the physical digital.) When you are the foundation for a cloud, circular dependencies are a reality you have to design around.

The topic of multi-cluster management is broad and treacherous, but in this series I’m going to focus on the three main problems we set out to solve as we worked to build a cohesive internal engineering platform (on Kubernetes) with the described constraints:

  • Cluster Classifications
  • Configuration Management
  • Service Provider Boundaries

Approaching the Problem(s) 

Despite the often exaggerated complexity of a well-rounded Kubernetes implementation, assessing how to approach Equinix's architecture was not materially different from the analysis you might perform on any other distributed system. I was concerned initially with the overarching power structures: how resources would be allocated, how control would be shared, how workloads would be divided, and how to keep a semblance of order across systems in a perpetual state of failure.

Grouping workload types with a dash of organizational structure for access controls led to some generalized categories of classes. Their exact names and boundaries aren't relevant, but what is worthwhile is the set of attributes they share and the logical groupings that can be applied around them. These class definitions can drive:

  • cluster tenants
  • default cluster and/or component parameters
  • structure for instantiating services
  • cluster naming, domain naming, service naming
  • network policies
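
To make that concrete, here's a hedged sketch of what a class definition might look like as a set of Ansible group variables. The class name, field names, facility, and domain below are hypothetical, not our actual values, but each attribute maps to one of the items above:

```yaml
# group_vars/class_edge.yml -- hypothetical cluster class definition
k8s_cluster_class: edge

# default cluster / component parameters for this class
kubernetes_version: "1.19.7"
control_plane_replicas: 3

# naming conventions derived from the class (cluster, domain, service)
cluster_name: "{{ k8s_cluster_class }}-{{ facility }}-{{ cluster_index }}"
cluster_domain: "{{ cluster_name }}.example.internal"

# tenants allowed on clusters of this class
cluster_tenants:
  - delivery-engineering
  - network-automation

# default network policy posture for the class
default_network_policy: deny-all-ingress
```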

This is just like anything in code: you want it to be something you can look at and quickly learn things about. Standardizing some basic conventions around cluster classes might read as pedestrian, but it can be a powerful tool for grouping behaviors downstream without making it difficult to scale.

Where We Started

Let's take a look at an example of this in practice. To help, I put together a repo with some reference material, which includes (among other things) an Ansible inventory for generating cluster values and ArgoCD's declarative setup for managing cluster services.

If you dig into the Ansible inventory, you'll see the usual suspects, but also the attribute k8s_cluster_class. Within the roles, you'll find several uses of this grouping, along the lines of the sketch below:
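
Here's a minimal sketch of that shape, assuming a class-grouped inventory and a role that consults k8s_cluster_class; the host names, facilities, and file paths are illustrative, not the repo's actual contents:

```yaml
# inventory/hosts.yml -- hypothetical inventory grouped by cluster class
all:
  children:
    class_edge:
      vars:
        k8s_cluster_class: edge
      hosts:
        edge-dc13-01: {}
        edge-am6-01: {}
    class_core:
      vars:
        k8s_cluster_class: core
      hosts:
        core-dc13-01: {}
```

```yaml
# roles/cluster_values/tasks/main.yml -- a role picking defaults by class
- name: Load class-specific defaults
  ansible.builtin.include_vars:
    file: "classes/{{ k8s_cluster_class }}.yml"

- name: Render cluster-services values from the class defaults
  ansible.builtin.template:
    src: cluster-services-values.yaml.j2
    dest: "cluster-services/{{ cluster_name }}/values.yaml"
```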

While some templating is convenient, the relevance of this example is the inheritance of the values: the Ansible facts and values generation feeds the /cluster-services/ values. If you review ArgoCD's declarative setup, you'll find you can pass values in not just with a file, but as a block. This gives you the ability to inherit through apps and into the destination chart, such as the external-dns example in the repo.
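
As a hedged illustration of that values-as-a-block inheritance (the chart source, version, and rendered values below are assumptions, not the repo's exact manifest), an ArgoCD Application can embed the per-cluster values generated above so the destination chart inherits them:

```yaml
# hypothetical ArgoCD Application passing generated values as an inline block
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
  namespace: argocd
spec:
  project: cluster-services
  destination:
    server: https://kubernetes.default.svc
    namespace: external-dns
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: external-dns
    targetRevision: 4.6.0   # illustrative chart version
    helm:
      # values rendered per cluster from the class definition, inherited by the chart
      values: |
        txtOwnerId: edge-dc13-01
        domainFilters:
          - edge-dc13-01.example.internal
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```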

By creating and passing values in this way, the consistent generation of cluster attributes, and subsequently the management of cluster services, becomes easier to scale. The cluster classifications also relate to the idea of a "cluster registry," in that there are often pieces of metadata surrounding clusters, or groups of clusters, that drive decisions for humans and automation.
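
If you were to write that registry down today, it might be little more than structured metadata per cluster. This is a hypothetical sketch (the names, fields, and endpoints are assumptions) of the kind of record that could drive decisions for both humans and automation:

```yaml
# hypothetical cluster registry entry
- name: edge-dc13-01
  class: edge
  facility: dc13
  region: us-east
  kubernetes_version: "1.19.7"
  lifecycle: production
  owners:
    - delivery-engineering
  endpoints:
    api: https://edge-dc13-01.example.internal:6443
```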

The above example is representative of "where we started." The next post in this series will focus on Configuration Management and touch more on "where we're going" with an API and event-oriented architecture for cluster management.

Published on 28 January 2021
