Kubernetes Cluster API
Learn how to provision a Kubernetes cluster with Cluster API
Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
Started by the Kubernetes Special Interest Group (SIG) Cluster Lifecycle, the Cluster API project uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators. The supporting infrastructure (virtual machines, networks, load balancers, VPCs) and the Kubernetes cluster configuration are all defined declaratively, in the same way application developers deploy and manage their workloads. This enables consistent and repeatable cluster deployments across a wide variety of infrastructure environments.
This guide will show you how to deploy a Kubernetes cluster using the Equinix Metal Cluster API provider (CAPEM).
This guide assumes that you have an existing Kubernetes cluster available to run as your management cluster. For testing, you can use Kind, minikube, or Docker for Mac. For production, we recommend you take a look at our guide for building a resilient k3s management plane on Equinix Metal.
NOTE: when testing on macOS, we recommend using minikube with podman to create your management cluster to avoid a Docker Desktop networking issue that blocks creation of workload cluster nodes.
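For local testing, a throwaway Kind cluster is enough to act as the management cluster. A minimal sketch (the cluster name capi-management is just an example):
# Create a local management cluster with Kind
kind create cluster --name capi-management
# Confirm kubectl is pointed at it (Kind contexts are named kind-<cluster-name>)
kubectl cluster-info --context kind-capi-management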
Vocabulary
These terms are defined in the Cluster API documentation, but are replicated here to save you a few clicks.
Management Cluster
A Kubernetes cluster that manages the lifecycle of Workload Clusters. A Management Cluster is also where one or more Infrastructure Providers run, and where resources such as Machines are stored.
Workload Cluster
A Kubernetes cluster whose lifecycle is managed by a Management Cluster.
Advisory
The provider is still working out how to safely migrate its name and conventions from Packet to Equinix Metal. As such, you'll see packet used in parts of this guide. We'll keep it updated as the migration evolves.
Cluster API CLI
We'll be using the Cluster API CLI (clusterctl) to install Cluster API into our management cluster and to generate the manifests for our workload cluster.
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.18/clusterctl-$(uname | tr '[:upper:]' '[:lower:]')-$(uname -m) -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
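You can confirm the binary is installed and on your PATH:
clusterctl version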
Deploying Cluster API to the Management Cluster
Using clusterctl
We need to export our API key so that the provider can use it during bootstrap. You can use a project-level API key or a user API key.
export PACKET_API_KEY="<YOUR_API_KEY>"
Next, we can use the clusterctl command to deploy the CAPI controllers to our management cluster, passing the --infrastructure flag to request that the Equinix Metal provider also be deployed.
clusterctl init --infrastructure packet
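Once the init completes, it's worth confirming that the controller pods are running. Namespace names can differ between releases, so listing everything and filtering is a safe check:
# The core CAPI, bootstrap, control plane, and Packet provider controllers each run in their own namespace
kubectl get pods --all-namespaces | grep -E 'capi|packet'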
Manually
If you'd prefer not to use clusterctl, you can deploy the manifests yourself.
Cluster API
We first need to determine which version of Cluster API to install. We can look up the latest release via curl.
export VERSION=$(curl https://api.github.com/repos/kubernetes-sigs/cluster-api/releases/latest | jq -r ".name")
curl -o capi.yaml -fsSL https://github.com/kubernetes-sigs/cluster-api/releases/download/${VERSION}/cluster-api-components.yaml
We now have capi.yaml, which describes the Cluster API workloads we need to deploy to the management cluster. We can't apply it to the cluster as-is, because some values still need to be provided: search for EXP_ in the YAML and set each experimental feature flag placeholder to true or false, depending on which features you want enabled.
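For example, the manifest contains placeholders such as ${EXP_CLUSTER_RESOURCE_SET:=false}. One way to resolve them (the exact flag names depend on the release you downloaded, so treat the commands below as a sketch) is:
# List the experimental feature flags referenced by the manifest
grep -o 'EXP_[A-Z_]*' capi.yaml | sort -u
# GNU sed example: enable ClusterResourceSets, leave MachinePools disabled
sed -i 's/${EXP_CLUSTER_RESOURCE_SET:=false}/true/g; s/${EXP_MACHINE_POOL:=false}/false/g' capi.yaml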
When you've handled these feature flags, you can apply the manifests.
kubectl apply -f capi.yaml
Equinix Metal Provider
In the same way we installed the Cluster API components, we can fetch and apply the latest release of the Equinix Metal provider.
export VERSION=$(curl https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-packet/releases/latest | jq -r ".name")
curl -o capem.yaml -fsSL https://github.com/kubernetes-sigs/cluster-api-provider-packet/releases/download/${VERSION}/infrastructure-components.yaml
kubectl apply -f capem.yaml
Provisioning a Workload Cluster
Using clusterctl
The Cluster API CLI provides a generate cluster helper that produces the required manifests based on a handful of environment variables. The environment variables you need are:
# The Project to deploy the new devices to
export PROJECT_ID="<SOME_ID>"
# The metro where you want your cluster to be provisioned
export METRO="fr"
# The operating system to use
export NODE_OS="ubuntu_20_04"
# The pod and service CIDRs for the new cluster
export POD_CIDR="192.168.0.0/16"
export SERVICE_CIDR="172.26.0.0/16"
# Device type to use for control plane and worker nodes
export CONTROLPLANE_NODE_TYPE="c3.medium.x86"
export WORKER_NODE_TYPE="c3.medium.x86"
# SSH key to use for access to nodes
export SSH_KEY="<SOME_SSH_PUBLIC_KEY>"
With our configuration set, we can now ask clusterctl to generate the manifests. Feel free to modify the Kubernetes version, the control plane node count (1 or 3), and the worker node count (any number).
clusterctl generate cluster my-cluster-name \
--kubernetes-version v1.21.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> my-cluster-name.yaml
All that's left now is to apply the manifest and wait for the workload cluster to be created.
kubectl apply -f my-cluster-name.yaml
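Provisioning takes a few minutes. You can watch progress from the management cluster, for example:
# High-level cluster status
kubectl get clusters
# Per-machine provisioning state
kubectl get machines
# Tree view of the whole cluster, if your clusterctl version supports it
clusterctl describe cluster my-cluster-name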
Manually
We strongly encourage you to use the clusterctl generate cluster approach above to produce the base configuration for your workload cluster. However, you may wish to add further node pools with different device configurations.
To do so, you can copy and modify some of the generated manifests from the steps above.
Machine Templates
You can define additional machine templates that can be used. This allows you to provide additional device types and operating systems to be used in your node pools.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: PacketMachineTemplate
metadata:
  name: "my-cluster-name-storage-workers"
spec:
  template:
    spec:
      OS: "ubuntu_20_04"
      billingCycle: hourly
      machineType: "s3.xlarge.x86"
      tags: []
Kubeadm Config Templates
In order to add new node pools to our workload cluster, we need to define a KubeadmConfigTemplate that tells Cluster API and kubeadm how to bootstrap the node. These should be defined for each device type, as you may need to tweak the kernel modules or disks for each; however, we've had success using a generic template across multiple device types on Equinix Metal. Your mileage may vary.
kind: KubeadmConfigTemplate
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
metadata:
  name: "my-cluster-name-worker-node"
spec:
  template:
    spec:
      preKubeadmCommands:
        - sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
        - swapoff -a
        - mount -a
        - |
          cat << EOF > /etc/modules-load.d/containerd.conf
          overlay
          br_netfilter
          EOF
        - modprobe overlay
        - modprobe br_netfilter
        - |
          cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          EOF
        - sysctl --system
        - apt-get -y update
        - DEBIAN_FRONTEND=noninteractive apt-get install -y apt-transport-https curl
        - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
        - echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
        - apt-get update -y
        - TRIMMED_KUBERNETES_VERSION=$(echo {{ .kubernetesVersion }} | sed 's/\./\\./g' | sed 's/^v//')
        - RESOLVED_KUBERNETES_VERSION=$(apt-cache policy kubelet | awk -v VERSION=$${TRIMMED_KUBERNETES_VERSION} '$1~ VERSION { print $1 }' | head -n1)
        - apt-get install -y ca-certificates socat jq ebtables apt-transport-https cloud-utils prips containerd kubelet=$${RESOLVED_KUBERNETES_VERSION} kubeadm=$${RESOLVED_KUBERNETES_VERSION} kubectl=$${RESOLVED_KUBERNETES_VERSION}
        - systemctl daemon-reload
        - systemctl enable containerd
        - systemctl start containerd
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: external
Machine Deployments
The MachineDeployment custom resource is the glue that joins our KubeadmConfigTemplate and PacketMachineTemplate together to provide a node pool for your workload cluster.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-name-worker-pool-1
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster-name
    pool: worker-pool-1
spec:
  replicas: 3
  clusterName: my-cluster-name
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster-name
      pool: worker-pool-1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-cluster-name
        pool: worker-pool-1
    spec:
      version: v1.21.2
      clusterName: my-cluster-name
      bootstrap:
        configRef:
          # This name is the name of your `KubeadmConfigTemplate`
          name: my-cluster-name-worker-node
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
      infrastructureRef:
        # This name is the name of your `PacketMachineTemplate`
        name: my-cluster-name-storage-workers
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: PacketMachineTemplate
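Assuming you've saved the three manifests above into a single file (the file name below is just an example), apply them to the management cluster and watch the new pool come up:
kubectl apply -f my-cluster-name-worker-pool-1.yaml
# The new MachineDeployment and its Machines appear alongside the existing ones
kubectl get machinedeployments
kubectl get machines -l cluster.x-k8s.io/cluster-name=my-cluster-name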
Getting Your Cluster Ready
Once you've applied your desired cluster resources to your management cluster, you should see devices spinning up and being provisioned. However, your cluster won't be "Ready" until you've deployed a Container Network Interface (CNI) implementation.
In order to do so, you need to get the kubeconfig for your workload cluster.
clusterctl get kubeconfig my-cluster-name > my-cluster-name.kubeconfig
If you wish to automate this, you can use ClusterResourceSets. Please note that they're currently behind a feature flag and experimental.
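As a rough sketch of that approach, assuming the ClusterResourceSet feature flag is enabled and you've stored the Calico manifest in a ConfigMap named calico-addon on the management cluster, a ClusterResourceSet selects workload clusters by label and applies the referenced resources to them:
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-cni
spec:
  # Applied to any workload cluster carrying this label
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
    # ConfigMap in the management cluster containing the CNI manifest
    - name: calico-addon
      kind: ConfigMap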
Now, using that kubeconfig, you can deploy the CNI of your choice. Any implementation will work, but for this guide we'll use Calico.
kubectl --kubeconfig=./my-cluster-name.kubeconfig \
apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
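Once Calico's pods start, the workload cluster's nodes should report Ready, and the cluster object on the management cluster should follow shortly after:
# Nodes in the workload cluster
kubectl --kubeconfig=./my-cluster-name.kubeconfig get nodes
# Cluster status as seen from the management cluster
kubectl get cluster my-cluster-name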