If you use Kubernetes to host your workloads and need to improve resiliency, deploy closer to end users, reduce infrastructure spend, or get stronger isolation between workloads, a multi-cluster Kubernetes architecture can help you achieve all of these goals. It opens up many more possibilities than a single cluster allows.
The multi-cluster Kubernetes concept remains relatively novel, but it’s simpler to implement than you may think. If you already have a cluster in place, for example, you don't need to scrap it and rebuild from scratch. You can add a new cluster (or more) to your existing control plane.
In this post we’ll explain what expanding from a single-cluster to a multi-cluster Kubernetes architecture looks like. Suppose, for example, that your single cluster runs in one public cloud region and you want to add another cluster somewhere your cloud provider has no data center, or your application needs a kind of host your cloud provider doesn’t offer, or the provider can’t meet some other need.
Multi-Cluster Kubernetes and Multi-Site Infrastructure
In a multi-cluster Kubernetes environment, a single Kubernetes control plane manages two or more clusters. Those clusters could all be located in a single site, be it a cloud facility, an enterprise data center, or a colo. But there are advantages to be gained from spreading them across multiple sites:
- Resilience: If you mirror workloads between clusters, more sites mean greater reliability. If one cluster (or an entire data center) goes down, the others remain intact.
- Performance: Having multiple clusters in multiple sites makes it easier to deploy workloads physically closer to different groups of end users to reduce latency and increase performance, or to address varying data residency requirements.
- Cost effectiveness: The multi-cluster capability puts you in a position to take advantage of lower infrastructure costs that might be available in some sites.
- Isolation: With multi-cluster Kubernetes you can deploy workloads on sets of infrastructure that are physically isolated from each other.
Essential Configurations for Multi-Cluster Kubernetes
We won’t say a multi-cluster Kubernetes environment is easy to set up, because it does take some work. But we will say that it’s fairly straightforward. It really boils down to addressing three configurations:
- Configuring a multi-cluster control plane
- Configuring networking for a multi-cluster setup
- Configuring multi-cluster storage
Multi-Cluster Control Plane
To set up a multi-cluster control plane, you need to be able to deploy and manage Kubernetes workloads across two or more clusters using a single interface.
There are basically three ways you can go here:
- Dedicated API server: Use a tool like KubeFed, which extends the Kubernetes APIs to allow you to designate specific clusters on a deployment-by-deployment basis
- GitOps: Use a Git repo to store all of your deployment files and control which cluster they end up in. Arguably, this isn’t multi-cluster Kubernetes as much as it is multiple clusters with separate control planes that are managed via Git. But the end result is basically the same.
- Virtual Kubelet: With this approach, you use a “virtual” Kubelet to map remote clusters to local cluster nodes, allowing you to manage multiple clusters through a single set of APIs.
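To make the first option concrete, here is a sketch of how KubeFed's extended API lets you pick target clusters per deployment. The cluster names and namespace below are hypothetical; the `placement` block is what tells the KubeFed control plane which member clusters receive the Deployment:

```yaml
# Sketch of a KubeFed FederatedDeployment (cluster names are hypothetical).
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web
  namespace: demo
spec:
  template:            # an ordinary Deployment spec, wrapped by KubeFed
    metadata:
      labels:
        app: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
  placement:           # which registered member clusters get this workload
    clusters:
      - name: cluster-us-east
      - name: cluster-eu-west
```

Changing the `placement.clusters` list is all it takes to move or replicate the workload between clusters.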
The CNCF has a great blog post that explains the pros and cons of each of these approaches to setting up a multi-cluster Kubernetes control plane. (You can use any of them with Equinix Metal.)
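As a rough illustration of the GitOps route, a tool like Argo CD can pin a directory of manifests in your repo to a specific cluster via an `Application` resource. The repo URL, path, and cluster name below are hypothetical:

```yaml
# Sketch of an Argo CD Application targeting one registered cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-eu-west
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/deployments.git
    targetRevision: main
    path: clusters/eu-west   # per-cluster directory in the repo
  destination:
    name: cluster-eu-west    # cluster credential registered in Argo CD
    namespace: demo
  syncPolicy:
    automated: {}            # keep the cluster in sync with the repo
```

One such `Application` per cluster gives you the "multiple clusters managed via Git" pattern described above.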
Multi-Cluster Networking
For your multi-cluster Kubernetes environment to work, your networking should be set up so that any cluster can communicate with any other cluster. This gets a little tricky when the clusters are running at different sites, because you may start running into issues like subnet conflicts, and the local tools available for configuring networking will likely differ from site to site.
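One common source of subnet conflicts is overlapping pod and service CIDRs between clusters. If you bootstrap clusters with kubeadm, for example, you can assign each cluster non-overlapping ranges at creation time (the CIDR values below are purely illustrative):

```yaml
# kubeadm ClusterConfiguration for cluster A (illustrative CIDRs).
# Cluster B would use non-overlapping ranges,
# e.g. podSubnet 10.2.0.0/16 and serviceSubnet 10.98.0.0/16.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.1.0.0/16
  serviceSubnet: 10.97.0.0/16
```

Planning these ranges up front is much easier than renumbering a live cluster later.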
If one of your clusters is running on Metal, the simplest way to configure networking is probably to use Equinix Fabric, the global software-defined interconnection platform, to connect the Metal site to whichever remote cloud or other infrastructure you’re using to host the other cluster. In addition to being pretty easy to set up, a Fabric-based networking configuration allows you to keep your inter-cluster traffic private if desired.
You can also use a CNI (Container Network Interface) plugin designed to support multi-cluster networking. This is relatively easy to configure in most cases. The downside is you end up depending on a specific CNI, which is not ideal if you want to make changes to your Kubernetes environment or architecture down the line.
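For example, Cilium's Cluster Mesh feature (one such multi-cluster-capable CNI; the service and namespace names below are hypothetical) lets you expose a Service across all connected clusters with a single annotation:

```yaml
# Marking a Service "global" in Cilium Cluster Mesh load-balances
# traffic across matching backends in every connected cluster.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: web
  ports:
    - port: 80
```

This is the convenience (and the lock-in): the behavior lives in a CNI-specific annotation rather than in portable Kubernetes configuration.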
A third possibility is to configure interconnection within your clusters using a dedicated multi-cluster Kubernetes interconnect such as Skupper or Submariner. This is generally more complicated than relying on infrastructure-level interconnects, and it still leaves you dependent on a specific tool. But this approach may come in handy if you need really fine-grained networking control.
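Submariner, for instance, implements the Kubernetes Multi-Cluster Services (MCS) API: exporting a Service from one cluster makes it resolvable from the others. A sketch, with hypothetical names:

```yaml
# Exporting a Service via the MCS API (implemented by Submariner).
# Once exported, other connected clusters can resolve it at
# web.demo.svc.clusterset.local.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web        # must match the name of an existing Service
  namespace: demo
```

The fine-grained control comes from the fact that you export services one by one, rather than connecting entire cluster networks.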
Multi-Cluster Storage
Configuring storage for multiple clusters is easy enough. The simplest approach, generally, is to use a hyperconverged storage solution, such as Portworx, which will run on any infrastructure.
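As a sketch of what this looks like in practice, a Portworx-backed StorageClass can request replicated volumes so data survives a node failure (the class name and parameter values below are illustrative):

```yaml
# StorageClass using the Portworx CSI provisioner with 3-way replication.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: pxd.portworx.com
parameters:
  repl: "3"                 # keep three replicas of each volume
allowVolumeExpansion: true
```

Workloads then simply reference the class in their PersistentVolumeClaims, regardless of which site the cluster runs in.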
If you need extra security, though, an alternative approach is to rent a storage array on Metal and map that storage to your Metal-hosted cluster. That way you can avoid placing your Kubernetes compute and storage infrastructure on the same machines.
Now Go Do It!
As you can see, setting up multi-cluster Kubernetes, while not exactly “easy,” is completely doable. You need to think through your multi-cluster control plane, networking, and storage configurations, and there are tools out there, such as Equinix Fabric, that make each of these tasks easier.
Ready to kick the tires?
Sign up and get going today, or request a demo to get a tour from an expert.