
Choosing a CNI for Kubernetes

In this article, we’ll help make selecting a CNI for your next cluster an easier choice.


Running your own Kubernetes cluster on Equinix Metal can be a fun but challenging endeavour. Even after you’ve written all your Infrastructure as Code and run kubeadm init on your first device, you’re still presented with a choice.

Which Container Networking Interface (CNI) should I adopt?

Fortunately, and unfortunately, Kubernetes isn’t too opinionated here, so decision fatigue can set in quickly. In this article, we’ll help make selecting a CNI for your next cluster an easier choice.

Calico

Calico is an open source CNI implementation designed to provide a performant and flexible system for configuring and administering Kubernetes networking. Calico uses BGP routing as an underlay network, and IP-in-IP or VXLAN encapsulation as an overlay network. In a bare metal environment, avoiding the overlay network and leveraging BGP increases network performance and makes debugging easier, as you don’t need to unpeel the onion of encapsulation.
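
As a taste of what that looks like, Calico’s BGP behaviour is driven by its BGPConfiguration resource; here is a minimal sketch, with an illustrative private AS number:

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # full node-to-node BGP mesh, no encapsulation required
  nodeToNodeMeshEnabled: true
  asNumber: 64512  # illustrative private AS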

While Calico is a fantastic option, we do feel that there’s a better choice.

Cilium

Cilium is an open source CNI implementation that is designed and built around eBPF and XDP, rather than traditional networking patterns with iptables. Much like Calico, Cilium can run using VXLAN or BGP. Cilium used to rely on MetalLB to power its BGP features, but since Cilium 1.12 it ships its own implementation built on GoBGP.
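
As a sketch of what the GoBGP-based control plane looks like, here is a minimal CiliumBGPPeeringPolicy; the ASNs and peer address are placeholders, not values to copy verbatim:

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  # an empty selector applies the policy to every node
  nodeSelector:
    matchLabels: {}
  virtualRouters:
    - localASN: 65000        # placeholder local AS
      exportPodCIDR: true    # advertise each node's pod CIDR
      neighbors:
        - peerAddress: "169.254.255.1/32"  # placeholder peer
          peerASN: 65530                   # placeholder peer AS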

eBPF and XDP

eBPF is a relatively new technology within the Linux kernel that enables the execution of eBPF programs in a sandboxed environment. These programs allow user-land code to run inside the kernel with unprecedented performance, extending the capabilities of the kernel.

XDP leverages eBPF to provide a highly performant packet processing pipeline that runs as soon as the networking driver receives a packet. What does this actually mean? Well, with XDP, Cilium can help mitigate DDoS attacks by dropping packets before they even hit the traditional networking stack.
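
If you’re curious whether Cilium is actually using XDP on a node, one way to check (assuming the Cilium agents run as the usual DaemonSet in kube-system) is to inspect the agent’s status output:

kubectl -n kube-system exec ds/cilium -- cilium status --verbose | grep -i xdp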

One might think there is a downside to leveraging these new technologies, as all your previous experience of debugging the network goes out the window. However, Cilium is accompanied by Hubble.

Hubble is an observability platform that builds on top of eBPF and Cilium to give teams deep visibility into their networking stack. With an understanding of Cilium and Kubernetes network policies, and of both L4 and L7 traffic, Hubble can help you debug networking issues with ease.
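
As a taste, once Hubble is enabled (we’ll do that during installation below), its CLI can answer questions like “what traffic is being dropped right now?”:

# show the 20 most recent flows that were dropped
hubble observe --verdict DROPPED --last 20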


Cilium is the only CNI with L7-aware policies. This means you can write Kubernetes network policies that understand DNS, HTTP, and even Kafka.

For example, you can write a DNS L7 networking policy to:

Restrict DNS Resolution to a Subset of Domains

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: dns-allow-list
spec:
  endpointSelector: {}
  egress:
    - toEndpoints:
        # only kube-dns in kube-system may be queried
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
          - port: "53"
            protocol: UDP
          rules:
            dns:
              - matchPattern: "*.abc.xyz"

Allow POST HTTP requests to abc.xyz

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: http-post-abc-xyz
spec:
  endpointSelector: {}
  egress:
    # the FQDN match and the L7 HTTP rule belong to the same egress rule;
    # toFQDNs relies on DNS visibility (see the DNS policy above)
    - toFQDNs:
        - matchPattern: "*.abc.xyz"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
          rules:
            http:
              - method: POST

Restrict Kafka Topic Access to the Following Pods

Typically, we’re forced to write network policies like “allow any application with the label kafka-consumer to speak to Kafka”. This casts a rather wide net, whereas with L7 policies we can limit access to individual topics based on labels. As such, we can say that only the beer-brewer can publish to the hops topic. Amazing, right?

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: beer-brewers
spec:
  # selects the Kafka broker pods; the app label here is illustrative
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: beer-brewer
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              - role: produce
                topic: hops

Developer Experience

Now don’t worry if these resources look difficult, because Cilium has that covered too. All of these network policies can be visualised, modified, and even constructed through an entirely point-and-click visual builder.

Check out the Cilium Editor to see for yourself.

Installing Cilium

Cilium is installable as a Helm chart, so you’ll first need to make the repository available:

helm repo add cilium https://helm.cilium.io/

Next, we can begin to understand and tweak the default values for the installation we require.

The highlights are:

CIDRs

Like with all CNI implementations, you’ll need to carve out your service and pod CIDRs.

Equinix Metal uses 10.x.x.x for its private network, so it’s usually best to carve your pod and service CIDRs out of 192.168.0.0/16.

If you require more address space than this, you can investigate Equinix Metal’s VLAN support.
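
For example, with kubeadm you might declare an illustrative split like this (the exact carve-up is yours to choose; the flags below are standard kubeadm options):

# pods in the lower half of 192.168.0.0/16, services in a small slice above
kubeadm init \
  --pod-network-cidr=192.168.0.0/17 \
  --service-cidr=192.168.128.0/20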

IPAM Mode

Cilium has a few different modes for managing IPAM. cluster-pool is a sensible default, where the Cilium operator hands out per-node pod CIDRs from a cluster-wide pool:

--set ipam.mode=cluster-pool
--set ipam.operator.clusterPoolIPv4PodCIDR=192.168.0.0/16
--set ipam.operator.clusterPoolIPv4MaskSize=23

Cilium also has a preview feature where the IPAM mode can be set to cluster-pool-v2beta, which allows for dynamic, resource-usage-based allocation of node CIDRs.
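
If you’d like to experiment with the preview, it’s just a different value for the same setting (as a preview feature, it shouldn’t be relied upon in production):

--set ipam.mode=cluster-pool-v2beta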

Kube Proxy

Cilium’s use of eBPF and XDP means we’re not reliant on iptables, so we can actually disable kube-proxy altogether. You’ll need to do this on both sides: tell kubeadm not to install kube-proxy, and tell Cilium to take over its duties.
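
On the kubeadm side, the addon can be skipped at init time:

kubeadm init --skip-phases=addon/kube-proxy

And on the Cilium side: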

--set kubeProxyReplacement=probe

Native Routing

As discussed above, Cilium doesn’t need encapsulation to handle the routing of packets within our cluster, so let’s disable tunnelling and tell Cilium which destinations are natively routable.

--set tunnel=disabled
--set ipv4NativeRoutingCIDR=192.168.0.0/16

Hubble

As Hubble and observability are a big part of Cilium’s appeal, let’s not forget to enable them.

--set hubble.enabled=true
--set hubble.relay.enabled=true
--set hubble.listenAddress=":4244"
--set hubble.ui.enabled=true
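
Once everything is running, the Hubble UI can be reached with a simple port-forward (assuming the chart’s default hubble-ui service in kube-system):

kubectl -n kube-system port-forward svc/hubble-ui 12000:80
# then browse to http://localhost:12000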

Complete Install

helm repo add cilium https://helm.cilium.io/
helm upgrade --install cilium cilium/cilium \
    --version 1.13.4 \
    --namespace kube-system \
    --set image.repository=quay.io/cilium/cilium \
    --set ipam.mode=cluster-pool \
    --set ipam.operator.clusterPoolIPv4PodCIDR=192.168.0.0/16 \
    --set ipam.operator.clusterPoolIPv4MaskSize=23 \
    --set tunnel=disabled \
    --set ipv4NativeRoutingCIDR=192.168.0.0/16 \
    --set endpointRoutes.enabled=true \
    --set hubble.enabled=true \
    --set hubble.relay.enabled=true \
    --set hubble.listenAddress=":4244" \
    --set hubble.ui.enabled=true \
    --set kubeProxyReplacement=probe \
    --set k8sServiceHost=${PUBLIC_IPv4} \
    --set k8sServicePort=6443
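
After the release is deployed, it’s worth confirming that the agent is healthy on every node. If you have the cilium CLI installed, that’s a one-liner; otherwise kubectl will do:

# wait for all Cilium components to report ready
cilium status --wait

# or check the agent pods directly
kubectl -n kube-system get pods -l k8s-app=cilium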

Conclusion

Cilium might be much newer to the Kubernetes CNI landscape, but in its short time it has become the gold standard for Kubernetes networking. While Calico is also a great option, Cilium’s adoption of eBPF and XDP provides a future-facing solution, enriched with the best debugging tool available (Hubble) and the best developer experience, with the assistance of the Cilium Editor.
