
Kubernetes Management Plane with k3s

Learn how to deploy k3s with real-time backup to S3 compatible storage


A rather common pattern within the Cloud Native ecosystem is to leverage the resource and reconciliation model provided by Kubernetes to manage non-Kubernetes resources. This is called the Operator Pattern. Tools such as Cluster API, Crossplane, and Pulumi open up a whole new world of provisioning Kubernetes clusters, bare metal, and even pizza with Kubernetes as the control plane. The catch is ... you need Kubernetes.

This guide will show you how to deploy a resilient Kubernetes management plane using k3s and Litestream. While standard Kubernetes uses etcd as its datastore, k3s and kine allow it to be swapped out for MariaDB, PostgreSQL, or even SQLite.
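kine exposes this through the k3s server's --datastore-endpoint flag. As a sketch, the connection strings below are illustrative only; consult the k3s documentation for the exact format your database requires.

# Default: embedded SQLite at /var/lib/rancher/k3s/server/db/state.db
k3s server

# Illustrative: point k3s at an external datastore via kine
k3s server --datastore-endpoint="postgres://user:pass@db.example.com:5432/k3s"
k3s server --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"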

Litestream is an open source project that subscribes to the write-ahead log (WAL) of a SQLite database and provides real-time replication of the data to an S3 compatible object store.
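At its simplest, replication can be run directly from the command line; the database path and bucket name below are illustrative.

# Continuously replicate a SQLite database to an S3 compatible bucket;
# credentials are read from LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY
litestream replicate /var/lib/rancher/k3s/server/db/state.db s3://<bucket-name>/db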

Used together, k3s and Litestream provide a Kubernetes management plane that can be shut down and rebuilt without losing that all-important state when using tools like Crossplane.

k3s Configuration

It only takes a single curl | sh command to deploy k3s to a Linux machine. However, this runs k3s on a public interface that can be consumed externally, which makes a lot of sense in the majority of its use cases. For this guide, we're going to promote the use of GitOps and disable that public k3s interface. As such, we need to tweak the environment a little prior to running the curl | sh installer.

The first thing we need to do is get the private IPv4 address for this machine. In this example, we curl the Equinix Metal metadata API and use jq to filter the network interfaces down to the private one that is considered our management interface.

export PRIVATE_IPv4=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses | map(select(.public==false and .management==true)) | first | .address')

Now that we have the private address we wish to listen on, we can configure k3s with an environment variable. By setting --bind-address, --advertise-address, and --node-ip, we tell k3s that no public traffic should ever be able to reach our API server. One final flag we add is --disable=traefik, which disables the bundled Traefik ingress controller; as we don't allow public traffic, it's pretty irrelevant for this use case.

export INSTALL_K3S_EXEC="--bind-address $PRIVATE_IPv4 --advertise-address $PRIVATE_IPv4 --node-ip $PRIVATE_IPv4 --disable=traefik"

There's just one last environment variable we need to set to install k3s correctly.

export INSTALL_K3S_SKIP_START=true

This flag tells the k3s installer not to start k3s after installation. This is important because we need Litestream to attempt a restore of the database before k3s creates a new one.
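As a quick sanity check after the installer runs (a sketch; exact unit state may vary by installer version), the k3s service should exist but not be running, and no database should exist yet:

# k3s unit should be installed but not yet running
systemctl is-active k3s   # expect: inactive
# No SQLite state file should exist before the restore step
test -f /var/lib/rancher/k3s/server/db/state.db && echo "database already exists" || echo "no database yet - safe to restore"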

Litestream

Installation

Litestream provides a Debian package for Debian-based systems, which can be downloaded and installed with dpkg. As we'll be writing our own Litestream configuration prior to the installation, we need to tell dpkg to keep that existing configuration file during the install, which is done with the --force-confold flag.

curl -o /tmp/litestream.deb -fsSL https://github.com/benbjohnson/litestream/releases/download/v0.3.4/litestream-v0.3.4-linux-amd64.deb
dpkg --force-confold -i /tmp/litestream.deb

Configuration

Litestream is configured through a litestream.yml file.

The file must contain the access and secret keys required to read and write to the S3 compatible bucket. We can then add a list of databases that we want replicated. We only need to configure a single database: the k3s server's SQLite database.

# /etc/litestream.yml
access-key-id: <access-key>
secret-access-key: <secret-key>
dbs:
  - path: /var/lib/rancher/k3s/server/db/state.db
    replicas:
      - url: s3://<BUCKET_NAME>/db

Restoring the Database

Litestream provides a restore command, which accepts a flag that allows it to exit gracefully if a backup doesn't exist: -if-replica-exists.

litestream restore -if-replica-exists /var/lib/rancher/k3s/server/db/state.db
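Before relying on a restore, you can sanity-check what's actually in the bucket. As a sketch, assuming the configuration shown earlier, the snapshots command lists the snapshots available for a database defined in /etc/litestream.yml:

# List the snapshots Litestream can restore from
litestream snapshots /var/lib/rancher/k3s/server/db/state.db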

Deployment

The commands and configuration needed to deploy this on Equinix Metal can all be provided via a user-data script. You'll notice placeholders for the required configuration (bucketName) and secrets (accessKey and secretKey). However you opt to render your user-data, you'll need to pass these values through accordingly; one illustrative rendering approach is sketched after the script. There's also a link to a Pulumi example towards the end.

#!/usr/bin/env sh

# Ensure k3s API isn't available on public interface
export PRIVATE_IPv4=$(curl -s https://metadata.platformequinix.com/metadata | jq -r '.network.addresses | map(select(.public==false and .management==true)) | first | .address')
export INSTALL_K3S_EXEC="--bind-address $PRIVATE_IPv4 --advertise-address $PRIVATE_IPv4 --node-ip $PRIVATE_IPv4 --disable=traefik"


# Configure Litestream to backup and restore to S3
cat > /etc/litestream.yml << END
access-key-id: ${configAws.requireSecret("accessKey")}
secret-access-key: ${configAws.requireSecret("secretKey")}
dbs:
  - path: /var/lib/rancher/k3s/server/db/state.db
    replicas:
      - url: s3://${config.require("bucketName")}/db
END


# Install Litestream
curl -o /tmp/litestream.deb -fsSL https://github.com/benbjohnson/litestream/releases/download/v0.3.4/litestream-v0.3.4-linux-amd64.deb
dpkg --force-confold -i /tmp/litestream.deb


# Install k3s
export INSTALL_K3S_SKIP_START=true
curl -sfL https://get.k3s.io | sh -


# Attempt a restore, if possible; don't fail if one doesn't exist
litestream restore -if-replica-exists /var/lib/rancher/k3s/server/db/state.db


# Start k3s
systemctl start k3s

# Start Litestream
systemctl enable litestream
systemctl start litestream


# GitOps all the rest
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml apply -f https://raw.githubusercontent.com/rawkode/equinix-metal-examples/main/pulumi-k3s/opt/flux/setup.yaml
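
The ${...} expressions in the script above are Pulumi template interpolations, not shell variables. If you're rendering the user-data another way, one minimal sketch is to keep shell-style placeholders in a template file and substitute them with envsubst; the template name and variable names below are assumptions, not part of the original example.

# Hypothetical template: userdata.tmpl uses ${BUCKET_NAME}, ${ACCESS_KEY_ID}, ${SECRET_ACCESS_KEY}
export BUCKET_NAME="<bucket-name>"
export ACCESS_KEY_ID="<access-key>"
export SECRET_ACCESS_KEY="<secret-key>"

# Substitute only the listed variables, leaving everything else untouched
envsubst '${BUCKET_NAME} ${ACCESS_KEY_ID} ${SECRET_ACCESS_KEY}' < userdata.tmpl > userdata.sh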

Pulumi

A Pulumi version of this deployment is available on the Equinix Labs GitHub.
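As a rough sketch of consuming that example, assuming the repository layout implied by the manifest URL above and the config keys used in the user-data script (bucketName, aws:accessKey, aws:secretKey), the workflow might look like this; check the repository's README for the authoritative steps.

# Clone the examples repository and select the k3s project
git clone https://github.com/rawkode/equinix-metal-examples
cd equinix-metal-examples/pulumi-k3s

# Configure the stack; key names inferred from the user-data script above
pulumi stack init dev
pulumi config set bucketName <bucket-name>
pulumi config set --secret aws:accessKey <access-key>
pulumi config set --secret aws:secretKey <secret-key>

# Provision the server
pulumi up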

Last updated: 18 April 2024
