Gardener on Equinix Metal
Learn how to deploy and manage Kubernetes clusters across multiple cloud environments using Gardener on Equinix Metal, covering the setup of Shoot, Seed, and Garden clusters for efficient multi-cloud orchestration.
This is a technical guide to running Gardener on Equinix Metal.
What is Gardener
Gardener is a Kubernetes multi-cluster, multi-cloud, multi-region orchestration system. It allows you to deploy and manage large numbers of Kubernetes clusters in many regions across the world, covering your own managed on-premises clusters as well as multiple clouds, all from a single control plane.
This guide assumes familiarity with Gardener, and assists in setting up and running Gardener-managed clusters on Equinix Metal.
Gardener Cluster Types
Gardener has three tiers of clusters, each of which serves a different purpose.
- Garden cluster
- Seed cluster
- Shoot cluster
We explain them in the reverse order.
Shoot cluster
A shoot cluster is the cluster where you deploy your normal workloads: web servers, databases, machine learning, whatever workloads you are trying to deploy. You will have many shoot clusters deployed all over the world across many providers and locations.
Seed cluster
A seed cluster is responsible for managing one or more shoot clusters. It is a "middle tier" management cluster. Because of the latency issues between cloud providers, and between regions within a single cloud provider, you generally have one seed cluster per cloud provider per region. For example, if you have 15 shoot clusters, five deployed in each of three Equinix Metal metros - ny, da and fr - then you would deploy one seed cluster in each metro to manage its local shoot clusters. If you also have three shoot clusters deployed in AWS us-east-1 and four deployed in your data center in Sydney, Australia, then you would deploy one additional seed cluster in each of those locations.
Seed clusters come in one of two forms:
- Seed - the Kubernetes cluster is already deployed, externally to Gardener, and the seed components are deployed onto the target cluster to turn it into a seed cluster
- Shooted Seed - Gardener deploys the actual Kubernetes cluster, and then the seed functionality to it
Garden cluster
The garden cluster is the single, top-level management cluster. It is responsible for:
- managing the seed clusters and, through them, the shoot clusters
- interacting with end-users, allowing them to deploy seeds and shoots
What Equinix Metal Supports
Equinix Metal supports the following:
- Shoots on Equinix Metal
- Seeds on Equinix Metal
- Shooted Seeds on Equinix Metal
Running the Garden cluster on Equinix Metal is not yet supported. It is on the roadmap, but if this is a priority for you, please contact your account executive.
You can run a Garden cluster anywhere that Gardener is supported and, from there, deploy seeds and/or shoots onto Equinix Metal. We have tested and approved Garden clusters on several public clouds and have written simplified guides. These guides are not materially different from the official Gardener docs, but they are simplified and will help you get started; see the AWS and GCP Garden cluster sections later in this document.
Once you have deployed a garden cluster - via one of the above guides or on your own - you should deploy a seed and a shoot. There are several ways to get a Seed cluster:
- Depending on how you deployed your garden cluster, you might already have a seed deployed as part of garden-setup
- You can deploy a "shooted seed", i.e. Gardener will deploy a managed Shoot cluster directly from the garden cluster and then convert it into a Seed; see the official guide
Deploying a Seed is beyond the scope of this document; please see the official guides referenced above.
Finally, you are ready to deploy Shoot clusters on Equinix Metal.
Deploying an Equinix Metal Shoot Cluster
The steps are as follows:
- Create and deploy a Project
- From the Equinix Metal console or API, get your Project UUID and an API key
- Create and deploy a `Secret` and a `SecretBinding` including the Project UUID and API key
- Create and deploy a `CloudProfile`
- Create and deploy a `Shoot`
We go through each step in turn.
Create and Deploy a Project
A `Project` groups together shoots and infrastructure secrets in a namespace. A sample `Project` is available at 23-project.yaml. Copy it over to a temporary workspace, modify it as needed, and then apply it.
kubectl apply -f 23-project.yaml
Unless you will actually be using the Gardener UI, most of the RBAC entries in the file do not matter for development. The only really important elements are:
- `name` - pick a unique one for the `Project`
- `namespace` - you will need to be consistent in using the same namespace for multiple elements
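For reference, here is a minimal sketch of what such a `Project` manifest might look like (the names are illustrative, not taken from the sample file):
apiVersion: core.gardener.cloud/v1beta1
kind: Project
metadata:
  name: dev
spec:
  # Namespace in the garden cluster that holds this project's resources;
  # Gardener convention prefixes it with "garden-".
  namespace: garden-dev
  owner:
    apiGroup: rbac.authorization.k8s.io
    kind: User
    name: someone@example.com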
Get Your Project UUID and API Key
Each project in Equinix Metal has a unique UUID. You need that UUID in order to tell Gardener into which Equinix Metal project it should deploy nodes. When you select your project in the Equinix Metal Console, you can see the UUID in the address bar; for example, https://console.equinix.com/projects/2331a81e-39f8-4a0f-8f82-2530d33e9b91 has the project UUID 2331a81e-39f8-4a0f-8f82-2530d33e9b91.
In addition, you need your API key. You can create new API keys, or find your existing ones, by clicking on your name in the upper-right corner of the console, and then selecting "Personal API Keys" from the drop-down menu.
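If you prefer the API, a quick way to list your project names and UUIDs is a call like the following (assuming your API key is in the `METAL_AUTH_TOKEN` environment variable and `jq` is installed):
curl -s -H "X-Auth-Token: $METAL_AUTH_TOKEN" \
  "https://api.equinix.com/metal/v1/projects" \
  | jq '.projects[] | {name, id}'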
Create and Deploy a Secret and a SecretBinding
In order to give Gardener access to your API key and Project UUID, you save them to a `Secret` and deploy it to the Seed cluster. You also need a `SecretBinding`, which enables `Shoot` clusters to connect to the `Secret`. A sample `Secret` and `SecretBinding` are available in 25-secret.yaml.
Copy it over to a temporary workspace and modify the following:
- `apiToken` - the base64-encoded value of your Equinix Metal API key
- `projectID` - the base64-encoded value of your Equinix Metal Project UUID
- `namespace` - the namespace you provided in the Gardener Project in the previous step; this must be set for the `Secret` and the `SecretBinding`
- `name` - the name of the `Secret` should be unique in the namespace, and the `secretRef.name` in the `SecretBinding` should match it
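As a rough sketch of the two resources (the names and namespace below are illustrative; defer to the sample file for the authoritative layout):
apiVersion: v1
kind: Secret
metadata:
  name: equinix-metal-secret
  namespace: garden-dev
type: Opaque
data:
  # Encode the raw values first, e.g.: echo -n "$METAL_API_KEY" | base64
  apiToken: <base64-encoded API key>
  projectID: <base64-encoded project UUID>
---
apiVersion: core.gardener.cloud/v1beta1
kind: SecretBinding
metadata:
  name: equinix-metal-secret
  namespace: garden-dev
secretRef:
  name: equinix-metal-secret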
Then apply them:
kubectl apply -f 25-secret.yaml
Create and Deploy a CloudProfile
The `CloudProfile` is a resource that contains the list of acceptable machine types, OS images, regions, and other information. When you deploy actual `Shoot` resources, they are matched against a `CloudProfile`. A sample `CloudProfile` is available at 26-cloudprofile.yaml.
Copy it over to a temporary working directory and modify the following:
- `name` - a unique name for this cloud profile
- `kubernetes.versions` - Kubernetes versions that will be supported
- `machineImages` - OS images that will be supported. The `name` and `version` must match ones supported by Gardener, and must appear in the `providerConfig`, further down
- `machineTypes` - types of hardware servers that will be supported. The `name` must match the reference name from Equinix Metal
- `regions` - supported Equinix Metal metros
- `providerConfig.machineImages` - the list that maps Gardener-supported OS names and versions to Equinix Metal operating systems. The `name` and `version` must be supported by Gardener, and the `id` must be an operating system ID supported by Equinix Metal
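The overall shape is roughly as follows; treat the provider section's apiVersion and all concrete values as placeholders that depend on your Gardener and provider-extension versions, and defer to the sample file:
apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: equinix-metal
spec:
  type: equinixmetal
  kubernetes:
    versions:
    - version: "1.20.4"   # example only
  machineImages:
  - name: flatcar
    versions:
    - version: "1.0.0"    # example only
  machineTypes:
  - name: c3.small.x86
    cpu: "8"
    memory: 32Gi
    usable: true
  regions:
  - name: ny
  - name: da
  providerConfig:
    apiVersion: equinixmetal.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    machineImages:
    - name: flatcar
      versions:
      - version: "1.0.0"
        # Equinix Metal operating system ID
        id: flatcar_stable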
Then apply it:
kubectl apply -f 26-cloudprofile.yaml
Deploy Shoots
Finally, you are ready to deploy as many Shoot clusters as you want.
A sample Shoot cluster definition is available at 90-shoot.yaml. Copy it over to a temporary working directory, and modify the following:
- `namespace` - must match the namespace of the Gardener project you deployed earlier
- `name` - must be a unique name for this Shoot
- `seedName` - optional; however, if your Seed is deployed in a different provider than your Shoot, e.g. your Seed is in GCP and your Shoot will be on Equinix Metal, then you must specify the `seedName` explicitly
- `secretBindingName` - must match the name of the `SecretBinding` you deployed earlier
- `cloudProfileName` - must match the name of the `CloudProfile` you deployed earlier
- `region` - must be one of the regions in the referenced `CloudProfile`
- `workers` - a list of worker pools. The `machine.type` and `image` must match those available in the referenced `CloudProfile`
- `kubernetes.version` - the version of Kubernetes to deploy, which must match one of the versions in the referenced `CloudProfile`
- `networking` - adjust to your desired type and CIDR ranges
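Putting those pieces together, a rough sketch of a Shoot manifest might look like the following (illustrative values, reusing the names from the earlier sketches; defer to 90-shoot.yaml):
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot
  namespace: garden-dev
spec:
  cloudProfileName: equinix-metal
  secretBindingName: equinix-metal-secret
  region: ny
  kubernetes:
    version: "1.20.4"
  networking:
    type: calico
    pods: 100.96.0.0/11
    services: 100.64.0.0/13
  provider:
    type: equinixmetal
    workers:
    - name: pool-1
      minimum: 1
      maximum: 3
      machine:
        type: c3.small.x86
        image:
          name: flatcar
          version: "1.0.0"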
Then deploy it, make a nice drink, and wait:
kubectl apply -f 90-shoot.yaml
Deploying Garden Cluster
As described above, you can deploy the Garden cluster on AWS or GCP.
AWS Garden Cluster
We deploy a Gardener base cluster (a "garden" cluster) on AWS. We use kops to bootstrap the Kubernetes cluster on top of which the garden cluster is deployed.
The following instructions assume a Linux workstation. The instructions for macOS should be very similar with the exception of the paths used for downloading tools.
Requirements
- `kubectl`
- A Route53 hosted zone (more info here)
- An IAM user to be used by Gardener with the following permissions:
- Full access to Route53
- Full access to VPC (required only for deploying AWS workload clusters)
- Full access to EC2 (required only for deploying AWS workload clusters)
- Full access to IAM (required only for deploying AWS workload clusters)
- An S3 bucket for storing the kops cluster state
- An ssh key pair for node access
Instructions
Install kops
Download and install the `kops` binary:
curl -Lo kops \
"https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest \
| grep tag_name \
| cut -d '"' -f 4)/kops-linux-amd64"
chmod +x kops
sudo mv kops /usr/local/bin/kops
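Verify the installation (this should print the version you just downloaded):
kops version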
Create a cluster
Create a directory for the cluster- and Gardener-related files and navigate to it:
mkdir aws-garden && cd $_
Set the following environment variables:
export NAME=my-garden.example.com # The name of the cluster as an FQDN
export KOPS_STATE_STORE=s3://my-kops-bucket # An S3 path to store the cluster state in
export SSH_PUBKEY=~/.ssh/my-key.pub # An SSH public key to authorize on the nodes
Run the following command to generate a cluster configuration:
kops create cluster \
--zones=eu-central-1a,eu-central-1b,eu-central-1c \
--node-count 7 \
--node-size t3a.large \
--network-cidr 172.17.0.0/16 \
--ssh-public-key $SSH_PUBKEY \
--dry-run \
--output yaml > cluster.yaml \
$NAME
The default CIDRs used by kops for pods and services collide with some Gardener defaults. Edit `cluster.yaml` by running the following:
sed -i 's/^\(\s\snonMasqueradeCIDR:\).*/\1 100.64.0.0\/10/' cluster.yaml
sed -i '/^\s\snonMasqueradeCIDR:/a \ \ podCIDR: 100.96.0.0/11\n\ \ serviceClusterIPRange: 100.64.0.0/13' cluster.yaml
Verify the CIDR configuration:
cat cluster.yaml | grep -A 5 networkCIDR
Sample output:
networkCIDR: 172.17.0.0/16
networking:
kubenet: {}
nonMasqueradeCIDR: 100.64.0.0/10
podCIDR: 100.96.0.0/11
serviceClusterIPRange: 100.64.0.0/13
Deploy the cluster:
kops create cluster \
--zones=eu-central-1a,eu-central-1b,eu-central-1c \
--node-count 7 \
--node-size t3a.large \
--network-cidr 172.17.0.0/16 \
--ssh-public-key $SSH_PUBKEY \
--config cluster.yaml \
$NAME \
--yes
Run the following command and wait for the cluster to bootstrap:
kops validate cluster --wait 10m
Verify connectivity with the cluster:
kubectl get nodes
Sample output:
NAME STATUS ROLES AGE VERSION
ip-172-17-100-182.eu-central-1.compute.internal Ready node 2m21s v1.18.12
ip-172-17-122-29.eu-central-1.compute.internal Ready node 2m16s v1.18.12
ip-172-17-44-222.eu-central-1.compute.internal Ready master 4m40s v1.18.12
ip-172-17-58-41.eu-central-1.compute.internal Ready node 2m12s v1.18.12
ip-172-17-81-234.eu-central-1.compute.internal Ready node 2m17s v1.18.12
ip-172-17-92-16.eu-central-1.compute.internal Ready node 2m26s v1.18.12
Deploy the Vertical Pod Autoscaler
The garden cluster expects CRDs belonging to the Vertical Pod Autoscaler to exist on the cluster.
Deploy the VPA CRDs:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/940e304633c1ea68852672e39318b419ac9e155c/vertical-pod-autoscaler/deploy/vpa-v1-crd.yaml
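You can confirm that the CRDs were registered:
kubectl get crds | grep verticalpodautoscaler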
Install sow
Gardener uses a utility named `sow` for bootstrapping a garden cluster. Download `sow` and add the binary to your `PATH`:
git clone "https://github.com/gardener/sow"
export PATH=$PATH:$PWD/sow/docker/bin
Verify `sow` works:
sow version
Sample output:
sow version 3.3.0-dev
Prepare a "landscape" directory
Gardener uses the term "landscape" to refer to an instance of the Gardener stack. We need to prepare a directory that will contain the necessary files for deploying Gardener.
Create a landscape directory:
mkdir landscape && cd $_
Gardener uses a tool called `garden-setup` to bootstrap the garden cluster. Clone the `garden-setup` repository into a directory called `crop` inside the `landscape` directory:
git clone "https://github.com/gardener/garden-setup" crop
Generate a kubeconfig file which Gardener can use to bootstrap the garden cluster:
kops export kubecfg --kubeconfig ./kubeconfig
Verify the generated kubeconfig file works:
KUBECONFIG=./kubeconfig kubectl get pods -A
Sample output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dns-controller-8554fd9c56-zlhbz 1/1 Running 0 9m24s
kube-system etcd-manager-events-ip-172-17-44-222.eu-central-1.compute.internal 1/1 Running 0 8m46s
kube-system etcd-manager-main-ip-172-17-44-222.eu-central-1.compute.internal 1/1 Running 0 8m46s
kube-system kops-controller-9ghxx 1/1 Running 0 8m35s
kube-system kube-apiserver-ip-172-17-44-222.eu-central-1.compute.internal 2/2 Running 1 7m48s
kube-system kube-controller-manager-ip-172-17-44-222.eu-central-1.compute.internal 1/1 Running 0 8m56s
kube-system kube-dns-6c699b5445-db7n6 3/3 Running 0 9m24s
kube-system kube-dns-6c699b5445-tr9h2 3/3 Running 0 6m56s
kube-system kube-dns-autoscaler-cd7778b7b-2k5lk 1/1 Running 0 9m24s
kube-system kube-proxy-ip-172-17-100-182.eu-central-1.compute.internal 1/1 Running 0 5m39s
kube-system kube-proxy-ip-172-17-122-29.eu-central-1.compute.internal 1/1 Running 0 6m32s
kube-system kube-proxy-ip-172-17-44-222.eu-central-1.compute.internal 1/1 Running 0 7m34s
kube-system kube-proxy-ip-172-17-58-41.eu-central-1.compute.internal 1/1 Running 0 6m52s
kube-system kube-proxy-ip-172-17-81-234.eu-central-1.compute.internal 1/1 Running 0 6m26s
kube-system kube-proxy-ip-172-17-92-16.eu-central-1.compute.internal 1/1 Running 0 6m38s
kube-system kube-scheduler-ip-172-17-44-222.eu-central-1.compute.internal 1/1 Running 0 8m54s
Create a file named `acre.yaml` and edit the fields marked with "change me" comments:
cat <<EOF >acre.yaml
landscape:
  name: my-gardener # change me
  # Used to create endpoints for the Gardener API and UI. Do *NOT* use the same domain as the one
  # used for creating the k8s cluster.
  domain: my-gardener.example.com # change me
  cluster:
    kubeconfig: ./kubeconfig
    networks:
      nodes: 172.17.0.0/16
      pods: 100.96.0.0/11
      services: 100.64.0.0/13
  iaas:
    - name: (( iaas[0].type ))
      type: aws
      shootDefaultNetworks:
        pods: 10.96.0.0/11
        services: 10.64.0.0/13
      region: eu-central-1
      zones:
        - eu-central-1a
        - eu-central-1b
        - eu-central-1c
      seedConfig:
        backup:
          active: false
      credentials:
        # Used by Gardener to create Route53 DNS records.
        accessKeyID: AKIAxxxxxxxx # change me
        secretAccessKey: xxxxxxxxxxxxxxxx # change me
  etcd:
    backup:
      active: false
      type: s3
      region: (( iaas[0].region ))
      credentials: (( iaas[0].credentials ))
  dns:
    type: aws-route53
    credentials: (( iaas[0].credentials ))
  identity:
    users:
      # Used for logging into the Gardener UI.
      - email: "someone@example.com" # change me
        username: "someone" # change me
        password: "securepassword" # change me
EOF
Additional fields can be changed as necessary. For more information about the configuration schema, see the reference.
Deploy Gardener
To bootstrap a garden cluster on the kops cluster, run the following commands inside the `landscape` directory:
# Prints the order in which components will be deployed; it should exit without errors.
sow order -A
sow deploy -A
This process can take around 10 minutes. When done, output similar to the following is shown:
===================================================================
Dashboard URL -> https://gardener.ing.my-gardener.example.com
===================================================================
generating exports
exporting file dashboard_url
*** species dashboard deployed
Visit the URL in a browser and log in using the email and password specified in `acre.yaml`.
GCP Garden Cluster
This is based upon the official garden-setup guide, as well as the Gardener on AWS guide.
Note: it doesn't matter whether you use route-based or IP-Alias networking for GKE clusters; this guide uses IP-Alias.
This is the process; anything that requires additional detail either has a link or is covered further below in the document.
- Deploy a GKE cluster, using whatever method works for you: web console, CLI, Terraform, etc.
- Deploy a node pool of at least 4 nodes, with at least 8 GB of memory per node, and wait for it to deploy; you may eventually need more, like 6 or 8
- If not already installed, install the gcloud CLI, also available via Homebrew on macOS
- Get a local kubeconfig for the GKE cluster
- Deploy the Vertical Pod Autoscaler (VPA) CRDs
- Get the sow repository. Yes, unfortunately, you need to clone the whole thing: `git clone https://github.com/gardener/sow.git && cd sow`
- Add the `sow` command to your path: `export PATH=$PATH:$(pwd)/docker/bin`
- Make a `landscape` subdirectory and clone garden-setup into a subdirectory named `crop` in it. Yes, we are cloning `garden-setup` into a subdirectory called `crop`, but that is what we need to do: `mkdir landscape && cd $_ && git clone https://github.com/gardener/garden-setup.git crop`
- Save your local kubeconfig from the earlier steps into the `landscape` directory. Yes, you need a local copy; you cannot simply reference the existing one. See below.
- Create a file named `acre.yaml` in `landscape/` (not in `crop/`). See below for details.
- Gardener cannot work with the kubeconfig that launches the gcloud auth-provider, so convert the kubeconfig to use a Kubernetes Service Account. See below.
- Run `sow order -A` to see the order in which `sow` will apply things. It should return with an exit code of 0.
- Run `sow deploy -A`
- Wait. Make a nice hot drink.
Detailed notes
Deploying autoscaler CRDs
Deploy the autoscaler CRDs:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/940e304633c1ea68852672e39318b419ac9e155c/vertical-pod-autoscaler/deploy/vpa-v1-crd.yaml
Getting a landscape kubeconfig
There are two ways to do this.
- extract it from your existing kubeconfig
- use gcloud to get the kubeconfig and save it
To extract from your existing kubeconfig, assuming the context is already set:
kubectl config view --minify --raw > kubeconfig
To get it from gcloud:
KUBECONFIG=./kubeconfig gcloud container clusters get-credentials <your_cluster>
acre.yaml
You need to create a file named `acre.yaml` in `landscape/`. Be sure not to make it in `landscape/crop/`, where one already exists and must be left alone. The reference for `acre.yaml` is here.
Important fields to note:
- `landscape.name` - the unique name for the gardener. Not only must this be unique in your project, but the name of the etcd GCS backup bucket will be `<landscape.name>-etcd-backup`. Bucket names must be globally unique, so your name must not already exist. Additionally, this must qualify as a DNS-1123 label, i.e. just alphanumeric characters and `-`. The restriction may be relaxed somewhat soon to allow any valid DNS name.
- `landscape.domain` - must be distinct from the cluster itself; will be used to create DNS entries for access to the Gardener API and UI. This must be a subdomain of a managed domain, e.g. if the managed domain is `abc.com`, then this field should be `something.abc.com`.
- `landscape.cluster.kubeconfig` - the path to the kubeconfig you created above, relative to the `landscape/` directory.
- `landscape.networks` - CIDR ranges for the nodes, pods and services of the cluster you already deployed.
- `landscape.iaas` - you can define several seeds here. For now, just define one, which should be identical to the configuration of your base cluster.
  - `landscape.iaas.credentials` - for Google Cloud, put in GKE service account credentials. See below.
- `landscape.dns` - information for managing DNS, as configured in `landscape.domain`. If this section is missing, Gardener will try to use the managed DNS provider and credentials of the first `landscape.iaas` entry; if that type doesn't support managed DNS, it will fail.
Google Cloud Service Account
Gardener requires a Google Cloud service account in order to manage things. That account should have full rights over:
- GKE
- Google Cloud DNS
- GCS
Follow the instructions for setting it up, then create a key in JSON format and save it to the appropriate location.
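As a rough sketch, creating such a service account and key with the gcloud CLI might look like the following; the account name and the exact roles here are assumptions, and narrower roles may suffice for your setup:
export PROJECT_ID=my-gcp-project   # change me
gcloud iam service-accounts create gardener --display-name "Gardener"
# Grant rights over GKE, Cloud DNS, and GCS respectively.
for role in roles/container.admin roles/dns.admin roles/storage.admin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:gardener@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role "$role"
done
# Create a JSON key for the account.
gcloud iam service-accounts keys create key.json \
  --iam-account "gardener@${PROJECT_ID}.iam.gserviceaccount.com"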
GKE Service Account Credentials
Gardener requires a `kubeconfig` to manage each cluster in `landscape.iaas[*]`. When working with GKE, the `kubeconfig` provided, for example, by `gcloud container clusters get-credentials <cluster>` uses credentials that depend on invoking the `gcloud` binary every time. This will not work for the credentials Gardener needs. Instead, we set up a Kubernetes service account in the cluster (note: a Kubernetes service account, not a Google Cloud service account) and then use its credentials. We use the `sa.yml` in this directory.
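If you don't have the file handy, a minimal sketch of what `sa.yml` might contain (assuming full cluster-admin rights are acceptable, and using the `gardener-admin` name referenced in the steps below) is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gardener-admin
  namespace: kube-system
---
# Bind the service account to the built-in cluster-admin role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gardener-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gardener-admin
  namespace: kube-system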
1. Deploy it: `KUBECONFIG=./kubeconfig kubectl apply -f sa.yml`
2. Get the secret name for the service account you just created: `KUBECONFIG=./kubeconfig kubectl -n kube-system get serviceAccounts gardener-admin -o jsonpath='{.secrets[0].name}'`
3. Get the token for that secret: `KUBECONFIG=./kubeconfig kubectl -n kube-system get secrets <secret> -o go-template='{{.data.token | base64decode}}'`
4. Get the name of the first user: `KUBECONFIG=./kubeconfig kubectl config view -o jsonpath='{.users[0].name}'`. Note: this assumes you have a single user in your kubeconfig, per the steps above. If not, you will need to inspect it to find the right name for the user.
5. Update the kubeconfig to remove the auth-provider: `KUBECONFIG=./kubeconfig kubectl config unset users.<user>.auth-provider`
6. Update the kubeconfig to add the token to the user: `KUBECONFIG=./kubeconfig kubectl config set-credentials <user> --token=<token>`
Optionally, you can simplify steps 2 through 6 above with the following:
export KUBECONFIG=./kubeconfig
token=$(kubectl -n kube-system get secrets -o go-template='{{.data.token | base64decode}}' $(kubectl -n kube-system get serviceAccounts gardener-admin -o jsonpath='{.secrets[0].name}'))
user=$(kubectl config view -o jsonpath='{.users[0].name}')
kubectl config unset users.${user}.auth-provider
kubectl config set-credentials ${user} --token=${token}