
When Your CI Pipeline Needs More Than the Default

Three developer teams whose builds, tests and distribution work best—or only work—on dedicated cloud.

Steve Martinelli, Director, Developer Relations

Say you work at a software company that’s showing signs of growth and maturity. Some of the more career-driven folks on the team have even traded their hip startup t-shirts for collared polos. More importantly, now looks like a good time to automate how all the new code the team churns out gets built, tested and merged into the codebase. Being a reasonable group of folks, you all reach for one of the popular CI/CD tools: GitHub Actions, Jenkins or GitLab CI/CD. Next, you think about where to host your CI pipeline…

Right? No, probably not. If your product is like most software out there, in the sense that your builds and tests don’t have any particular infrastructure requirements beyond a decent amount of compute, storage and network, you sign up for one of the cloud CI options and move on with your lives. Teams building software that does have special infrastructure needs, however, have to weigh the CI pipeline infrastructure question more carefully. These are teams that build operating systems, for example, or custom private cloud implementations, or virtualization software—in other words, teams whose software runs close to the underlying hardware and who need to see and control exactly how the software and hardware components interact. They could also be teams that run their builds at behemoth scale, way beyond what can be spun up in one of the big clouds quickly and without draining the budget.

Teams that have such needs but don’t have the capacity or the desire to build and run this physical infrastructure in house will find a lot to like about Equinix Metal, our globally scaled dedicated cloud service that’s fully automated and can be quickly connected to any other relevant cloud or network provider. Using it, they can build and test their releases directly on a variety of silicon (x86 servers from both Intel and AMD, as well as Arm), access server NICs and NVMe drives, and configure networking between physical nodes the way they need to. The single-tenant bare-metal hardware can be configured and provisioned remotely and on demand using the same familiar infrastructure automation tools developers use to manage public cloud infrastructure. What makes the platform uniquely powerful is its presence in 27 major metros around the world and the ability to connect the infrastructure to other clouds and networks, either privately or over the internet. We’re constantly adding new metros—Equinix has more than 200 data centers, so we’re nowhere near done expanding our on-demand footprint.
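
To make “provisioned remotely and on demand” concrete, here is a minimal Python sketch against the Equinix Metal REST API. The plan, metro and operating system slugs are just illustrative values, and the token and project ID are read from the environment; in practice many teams drive this through Terraform or similar tooling rather than raw HTTP calls.

```python
import os
import time
import requests

API = "https://api.equinix.com/metal/v1"
HEADERS = {"X-Auth-Token": os.environ["METAL_AUTH_TOKEN"]}  # personal API token
PROJECT_ID = os.environ["METAL_PROJECT_ID"]                 # your project's UUID

def create_build_server(hostname: str, plan: str = "c3.small.x86", metro: str = "da") -> dict:
    """Request a single-tenant bare metal server for a build or test job."""
    body = {
        "hostname": hostname,
        "plan": plan,                       # illustrative server class (x86 or arm64)
        "metro": metro,                     # illustrative metro, e.g. "da" for Dallas
        "operating_system": "ubuntu_22_04", # illustrative OS slug
    }
    resp = requests.post(f"{API}/projects/{PROJECT_ID}/devices", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def wait_until_active(device_id: str, poll_seconds: int = 15) -> dict:
    """Poll the device until provisioning finishes and it reports 'active'."""
    while True:
        device = requests.get(f"{API}/devices/{device_id}", headers=HEADERS).json()
        if device.get("state") == "active":
            return device
        time.sleep(poll_seconds)

def delete_server(device_id: str) -> None:
    """Tear the machine down once the job is done."""
    requests.delete(f"{API}/devices/{device_id}", headers=HEADERS).raise_for_status()
```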

Flatcar Container Linux CI Pipeline and CDN

One team using Equinix Metal for CI is the team behind Flatcar Container Linux. As the name of the open source project suggests, it’s a version of Linux designed specifically for containers. Based on Gentoo Linux, Flatcar is a “friendly fork” of CoreOS Container Linux. (Our control plane, the backend that powers the Equinix Metal API, happens to run on Flatcar!) The team behind the OS is Kinvolk, a Berlin-based company Microsoft acquired in 2021.

To keep things secure, the Flatcar team runs its CI pipeline on a set of Metal resources separate from the one it uses for nightly and release builds. It also uses Equinix Metal to run its heavy-duty suite of release tests for a number of specific vendors whose machines Flatcar would be installed on. The suite includes tests for more than 100 complex scenarios, many of them involving multiple nodes. One reason bare metal is important for Kinvolk is that it allows the team to bring its own virtualization technology when testing release images for private clouds built on VMware vSphere, OpenStack or QEMU.
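
Bringing your own hypervisor is straightforward when there isn’t one underneath you already. As a rough illustration (this is not Flatcar’s actual test harness, and the image filename and resource sizes below are just placeholders), a QEMU/KVM guest can be booted from a release image directly on the bare metal host:

```python
import subprocess

# Boot a release image in a QEMU/KVM guest directly on the bare metal server.
# A real test framework would launch many such guests per scenario, often
# spanning multiple nodes; this only shows the basic mechanism.
subprocess.run(
    [
        "qemu-system-x86_64",
        "-enable-kvm",   # hardware virtualization is available because no hypervisor sits underneath
        "-m", "4096",    # guest memory in MiB (illustrative)
        "-smp", "2",     # virtual CPUs (illustrative)
        "-drive", "file=flatcar_production_qemu_image.img,format=qcow2,if=virtio",
        "-nographic",    # serial console only, which is handy for CI logs
    ],
    check=True,
)
```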

The Flatcar OS is immutable and doesn’t include a package manager, so images are always built from source, which requires heavy compute resources that are spun up temporarily to run the builds.

Flatcar’s CI pipeline runs on self-hosted GitHub runners managed by GARM, or GitHub Actions Runner Manager, an open source tool for creating, maintaining and autoscaling pools of self-hosted runners. Flatcar runs its CI system in LXC containers, while its nightly and release builds are processed on bare metal and orchestrated by Jenkins. Both types of builds use AMD and Arm servers in Equinix’s dedicated cloud.
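
GARM automates all of this, but the moving parts are easy to sketch: ask GitHub for a short-lived runner registration token, start an LXC container on the bare metal host, and register a runner inside it. The sketch below is a simplification of what GARM actually does; the org name, container image and runner path are placeholders.

```python
import os
import subprocess
import requests

GITHUB_API = "https://api.github.com"
ORG = "example-org"  # placeholder; use your own org or repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def get_runner_registration_token(org: str) -> str:
    """Ask GitHub for a short-lived token that lets a new self-hosted runner register."""
    resp = requests.post(f"{GITHUB_API}/orgs/{org}/actions/runners/registration-token", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["token"]

def launch_lxc_runner(name: str, token: str) -> None:
    """Start an LXC container on the bare metal host and register a runner inside it."""
    subprocess.run(["lxc", "launch", "ubuntu:22.04", name], check=True)
    subprocess.run(
        ["lxc", "exec", name, "--", "bash", "-c",
         # assumes the GitHub Actions runner tarball is already unpacked in /opt/runner
         f"cd /opt/runner && ./config.sh --url https://github.com/{ORG} --token {token} --unattended && ./run.sh"],
        check=True,
    )
```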

The team also uses a lightweight CDN built on Equinix Metal to distribute the OS images. Two source servers running Caddy, located at opposite ends of the world from one another, push new images to a number of NGINX caching servers distributed around the world. This setup ensures images move safely and privately from the team’s build infrastructure to the distribution servers and are quickly accessible to users across the globe.

Being able to provision infrastructure on demand is valuable for both the CI pipeline and the distribution network. Kinvolk engineers can start dozens of instances to run their tests—each test case runs for a few minutes on average—and spin them down afterward. On the CDN side, the team uses the on-demand capability to scale the caching network to match local demand, offloading its release servers.
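
That spin-up-and-tear-down rhythm needs only a small amount of orchestration code. Reusing the hypothetical helpers from the provisioning sketch above, a batch of test machines can be created in parallel and released as soon as each scenario finishes:

```python
from concurrent.futures import ThreadPoolExecutor

# Reuses the hypothetical create_build_server / wait_until_active / delete_server
# helpers from the provisioning sketch earlier in this post.

def run_suite_on_fresh_hardware(test_cases: list[str], plan: str = "c3.small.x86") -> None:
    """Provision one machine per test case, run the case, then release the hardware."""
    def run_one(case: str) -> None:
        device = create_build_server(hostname=f"ci-{case}", plan=plan)
        try:
            device = wait_until_active(device["id"])
            # ...ssh in, copy the test scenario over and run it (omitted here)...
        finally:
            delete_server(device["id"])  # pay only for the minutes the test actually ran

    with ThreadPoolExecutor(max_workers=len(test_cases)) as pool:
        list(pool.map(run_one, test_cases))
```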

Alpine Linux’s Brawny Bare Metal CI Infrastructure

The small team of contributors behind the hugely successful Alpine Linux—it’s by far the most widely used Linux distribution for creating Docker container images—is another example of a team with a set of CI pipeline needs the Equinix dedicated cloud is uniquely well suited for.

They build their releases for multiple server architectures (x86 and Arm CPUs of both 32-bit and 64-bit varieties) using GitLab-based pipelines. When it’s time to build, they spin up the needed configurations of Metal servers and run their compute-hungry builds directly on the hardware, which lets the builds complete much faster than they would on cloud VMs. The same goes for testing Alpine Linux updates and releases. Finally, similar to the Flatcar team’s approach, the Alpine team uses Metal servers in multiple locations around the world to ship releases to its global user base faster.

A bonus fun fact is that Equinix Metal runs Alpine Linux in memory on its own servers as part of the software stack that enables the automated provisioning and deprovisioning of the machines.

Mirantis’ Need for Full Control at Massive Scale

Our third example of a software team whose CI pipeline needs reach beyond the capabilities of traditional virtualized cloud platforms is Mirantis. The company has long been in the business of building enterprise-grade solutions based on open source technologies. Today, its focus is on building OpenStack and Kubernetes-based private clouds for large companies.

Mirantis’ CI/CD and testing routines require massive scale, granular control of networking, access to low-level server hardware components and no hypervisors. (It wouldn’t make sense to run its private-cloud VMs inside other VMs.) It’s not uncommon for the team to run tests across as many as 1,500 servers at a time, and SmartNICs and other accelerators on those servers must be configured to match the hardware the solution being tested would run on in a customer’s data center. Full control of Layer 2 networking between the nodes is required to validate and stress test complex networking configurations within Mirantis’ private cloud solutions.
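
Layer 2 control on Equinix Metal is exposed through the same API as the servers themselves. Here is a rough sketch of creating a project VLAN and attaching a server port to it; it reuses the constants from the provisioning sketch above, and the exact port-mode handling is simplified.

```python
import requests  # plus the API, HEADERS and PROJECT_ID constants from the provisioning sketch above

def create_vlan(metro: str, vxlan: int, description: str) -> dict:
    """Create a project VLAN in a metro for private Layer 2 traffic between test nodes."""
    body = {"metro": metro, "vxlan": vxlan, "description": description}
    resp = requests.post(f"{API}/projects/{PROJECT_ID}/virtual-networks", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def attach_port_to_vlan(port_id: str, vlan_id: str) -> None:
    """Attach one of a server's network ports to the VLAN.

    The port typically has to be switched into a Layer 2 (or hybrid) mode first;
    that step is left out to keep the sketch short.
    """
    resp = requests.post(f"{API}/ports/{port_id}/assign", json={"vnid": vlan_id}, headers=HEADERS)
    resp.raise_for_status()
```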

The Space Between

One thesis we operate under is that the type of cloud infrastructure that currently dominates the market is dominant for good reasons. It’s incredibly flexible and low-commitment. It takes very little money and effort to get up and running, which most developers do without giving it a second thought. A team generally starts exploring alternatives to the big public clouds either when its workload has reached a scale where the cloud bill bites more into the profit margin than the business is comfortable with, or when the cloud platforms simply aren’t optimally architected for its specific use case.

The three examples we highlighted illustrate this well: the Kinvolk team needs the ability to temporarily spin up a lot of compute muscle, test on different chip architectures, bring its own hypervisors and distribute images globally; the Alpine Linux team’s CI pipeline needs are similar, except that they don’t need virtualization; and Mirantis requires massive scale, bare metal compute and full control of hardware and networking configuration up and down the stack.

The function of Equinix’s dedicated cloud is to fill that space between the convenience of powerful, global infrastructure capabilities available on demand at one’s fingertips and the hard reality of budgets, profit margins and technical requirements.

Published on 06 December 2023
