
Building an Ephemeral Homelab

When I joined Equinix back in May, I didn't anticipate just how much I'd love hacking on hardware.

Gianluca Arbezzano, Principal Software Engineer

And by hardware, I mean anything and everything I can get my hands on! I think this new focus on “tinkering” comes from the ongoing renovation of the house I bought last year with my girlfriend, Arianna. Her father is a maker; he works with metal and wood. I am learning a lot by working alongside him — like sometimes you just have to jump in and give it a try.

He also taught me that almost everything can be fixed.

With my dash of new confidence, I’m no longer scared of breaking things or of spending a bit of extra money to get quality! And when I see new hardware challenges pop up in our community Slack channel and how people are building all kinds of cool projects, I’m inspired to jump right in.

A good example of this is my new homelab. Last year I managed to get a “high-density” environment from a former startup selling off old hardware on eBay (sadly, it’s not available anymore). Nothing is new, but I like that it’s made from a series of individual units that I can cluster together and leverage to play with cloud native stacks.

This setup isn’t small, and I don’t have a ton of extra space for running equipment all the time. Besides, I’ve learned that having 10 NUCs and 5 Nvidia TK1s running when they aren’t needed is a waste of electricity and costs money I don’t need to spend.

But I’m getting a bit ahead of myself.

Before jumping into how I run my homelab, let’s look at what I’ve learned along the way.

First, after breaking a power input, I was left with only 9 of the 10 NUCs running. Again, almost anything can be repaired, and I’m looking for a replacement part to fix it.

My eBay special didn’t come with any RAM or hard drives, so I had to find and mount memory for all 10 NUCs. I still have a few without disks, but that’s the best part - it’s where the “ephemeral” part of my home lab comes from! Necessity, as they say, is the mother of invention.

To save on electricity, I need to be able to power the lab on and off and get everything back up in a few minutes. Thanks to Tinkerbell, I can do that! The few NUCs with SSDs or HDDs have state, but all of them run an in-memory operating system called OSIE, shipped as part of the Tinkerbell stack.

Tinkerbell to the Rescue 

As part of my job, I work with Tinkerbell a lot, but the use cases primarily focus on data center-class hardware. Of course, I wanted to try it in my homelab. The Tinkerbell provisioner lives in the part of my lab that has state, because tink-server stores the data related to each NUC in a Postgres database. To help me remember which drive I need to keep a close watch on, I marked it with a patch!

I use my lab mainly when I need a totally clean environment to compile or run some tests with an OSS project, or even Tinkerbell itself. Everything that requires state (like the Kubernetes control plane) runs where the Tinkerbell provisioner runs. In practice, it is a big single point of failure, but it is my single point of failure, and I love it.

To get going, I installed Ubuntu on the provisioner with a boring USB stick, so nothing fancy there. Next, I cloned the Tinkerbell Sandbox, which works the same way it does in a Vagrant environment but, in my case, runs directly on the host. When all is said and done, I have a bunch of containers running on my host and boom, the Tinkerbell stack is ready.
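
If you want to try this at home, the setup boils down to a handful of commands. This is a sketch from memory rather than a copy of my shell history: the helper scripts are the ones in the sandbox repository as it was at the time, and enp2s0 is just the interface on my provisioner that faces the NUCs, so adjust both for your setup (if the scripts have moved or been renamed since, the sandbox README is the source of truth).

$ git clone https://github.com/tinkerbell/sandbox.git
$ cd sandbox
$ # generate the environment for the NIC that faces the NUCs (enp2s0 in my case)
$ ./generate-envrc.sh enp2s0 > .env
$ source .env
$ ./setup.sh
$ # bring up the whole Tinkerbell stack as containers
$ cd deploy
$ docker-compose up -d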

I’m going to go more in-depth in the next section for those of you interested in trying this at home, but for now, know that one of the containers the provisioner runs is an Nginx server. Tinkerbell relies on this web server to serve static assets like the OSIE init ramdisk and the kernel. I also want it to serve a public SSH key that will be downloaded onto the servers (so that I can SSH into them). If you want to do something similar, do what I did and save your key under ./sandbox/deploy/state/webroot/ssh.keys. You will then be able to retrieve it on your network via:

$ curl http://<provisioner-ip>/ssh.keys
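
In practice, “save your key” is just a file copy into the webroot the Nginx container serves. A minimal sketch, assuming your public key lives at ~/.ssh/id_rsa.pub and the sandbox was cloned into ./sandbox:

$ # serve the public key via the sandbox Nginx container
$ cp ~/.ssh/id_rsa.pub ./sandbox/deploy/state/webroot/ssh.keys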

Enrolling Tinkerbell to Get the Pixie Dust

Now that I’ve got a running provisioner, it is time to register all the hardware with Tinkerbell. Honestly, this process is a bit painful right now unless you have all the MAC addresses in advance (which I didn’t). So I had to create a JSON representation of the hardware and push it to the tink-server via the CLI.

In order to get the MAC addresses I had two options:

  1. I could connect the NUCs one by one to an external monitor via HDMI
  2. Or I could just watch the logs coming from the Tinkerbell stack (Boots, specifically) and look for lines like this one:
{"level":"info","ts":1608652334.4974012,"caller":"boots/dhcp.go:76","msg":"retrieved job is empty","service":"github.com/tinkerbell/boots","pkg":"main","type":"DHCPDISCOVER","mac":"f4:4d:30:64:8e:0f","err":"discover from dhcp message: get hardware by mac from tink: rpc error: code = Unknown desc = unexpected end of JSON input","errVerbose":"rpc error: code = Unknown desc = unexpected end of JSON input\nget hardware by mac from 

This is a bit awkward, but in short it says: “the hardware with MAC address f4:4d:30:64:8e:0f is not registered”.

This happens because when the hardware boots, it enters PXE mode and makes a DHCP request, which then causes an application called Boots to (in this case) log that it can't find any hardware registered for that MAC address.
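
If you go the log-watching route, a quick filter saves a lot of squinting. A sketch, assuming the Boots container came up with the default Compose name deploy_boots_1 (check docker ps if yours is named differently):

$ # power the NUCs on, let them attempt a PXE boot, then collect the unknown MACs Boots complains about
$ docker logs deploy_boots_1 2>&1 | grep -o '"mac":"[^"]*"' | sort -u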

Next, it was time to carefully create a JSON file that looks like this one for all of my NUCs:

{
	"id": "ce2e62ed-826f-4485-a39f-a82bb74338e2",
	"metadata": {
		"facility": {
			"facility_code": "onprem ssh_key=http://192.168.1.64/ssh.keys"
		}
	},
	"network": {
		"interfaces": [
			{
				"dhcp": {
					"arch": "x86_64",
					"ip": {
						"address": "192.168.1.31",
						"gateway": "192.168.1.254",
						"netmask": "255.255.255.0"
					},
					"mac": "f4:4d:30:64:7a:3a",
					"hostname": "c1",
					"uefi": false
				},
				"netboot": {
					"allow_pxe": true,
					"allow_workflow": true
				}
			}
		]
	}
}

The id is a random UUID I generated for each hardware definition; all the other information is specific to your hardware, but it usually doesn’t change much. Obviously, the MAC address has to be the one of the NUC you are registering. The gateway and IP I specify are what will be assigned to the NUC, so remember this and be sure you are using the right range for your network.
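
Any UUID generator works for the id field; uuidgen from the shell is enough:

$ uuidgen    # paste the output into the "id" field, e.g. ce2e62ed-826f-4485-a39f-a82bb74338e2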

The metadata.facility.facility_code field is important: it holds the cmdline arguments passed to the kernel via iPXE when net booting the in-memory operating system, OSIE.

OSIE is based on Alpine Linux, and if you have a look at the “PXE boot” documentation on alpinelinux.org, you can customize it even more. Currently, I am using the ssh_key field, which downloads an SSH key into the running OSIE. This enables the SSH server and lets me SSH into it. When running the Vagrant Setup guide, for example, we don’t enable SSH because the serial port is emulated by the VirtualBox GUI and usually there is a workflow that will persist an operating system. In my case, I just want a running OS on my ephemeral hardware.

Once I had the hardware JSON I was able to register it via:

$ tink hardware push

(For more info on this, check out the Vagrant Setup guide I referenced earlier, and go to the section titled “Creating the Worker's Hardware Data”).
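
In practice that meant one push per NUC, along these lines. A sketch, assuming the definition above is saved as hardware-c1.json; the exact invocation depends on your tink CLI version (and on whether you run it through the tink-cli container from the sandbox), so check tink hardware push --help for the form yours expects:

$ # register one hardware definition per NUC
$ cat hardware-c1.json | tink hardware push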

There is currently an issue open about this, but as of now, Boots prevents the in-memory environment from starting if a workflow is not assigned to the hardware. As a workaround, I created a “Hello World” workflow for each piece of hardware.
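
The workaround follows the hello-world example from the Tinkerbell docs: create a trivial template once, then bind one workflow per MAC address. A sketch, where hello-world.yml is the template from that example, the template UUID is whatever the create command returns, and the flags reflect the tink CLI as it was at the time:

$ # create the trivial "hello world" template once
$ tink template create < hello-world.yml
$ # then create one workflow per NUC, keyed by its MAC address
$ tink workflow create -t <template-uuid> -r '{"device_1": "f4:4d:30:64:7a:3a"}'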

I feel like I need a deep breath here. I can now see the logs from the registered hardware showing that OSIE is shipped to the device and that it has booted. I know the IP I specified for the node I want and, using the username root and the ssh_key I set, I am able to SSH into my hardware!
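
Putting the pieces together, that boils down to something like this (192.168.1.31 is the address from the hardware JSON above):

$ ssh root@192.168.1.31
$ cat /proc/cmdline    # inside OSIE: the facility_code arguments (ssh_key=... included) should show up among the kernel args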

Pro tip: when troubleshooting, I rely a lot on my router’s list of connected devices to see the assigned IPs and so on, so keep that page handy.

Ephemeral Life 

As you’d expect, when I switch off the power on the NUCs, they stop and so does the provisioner. But when I switch everything on, all the nodes enter the booting phase. The provisioner boots from the hard drive and starts all the various containers while the other NUCs enter PXE boot. Luckily, there are solid retries in place: the NUCs keep sending DHCP requests until the provisioner answers and serves the PXE script and OSIE, bringing all of them back to life!

I’ve learned a lot of lessons during this experiment and uncovered a number of Tinkerbell issues, PRs, and fixes to merge.

The hardware registration is convoluted and not terribly friendly, but we have proposal 0016 (written by Dan Finneran), which suggests a default workflow so that, when a new MAC address reaches Boots, the unknown hardware can be registered automatically. Neat!

We have a lot of work to do as a community around logging and operational experience. The project was built from the ground up, so the way it communicates with the outside world is tricky and requires a certain amount of knowledge and context. This is obviously not ideal, but since it’s an open source project, you can see how we’re working to address it in the open, making the process more transparent, which is a big win for everyone.

So, What’s Up Next for My Homelab?

I haven’t provisioned the Nvidia board yet, but:

{"level":"info","ts":1608658874.6143324,"caller":"dhcp4-go@v0.0.0-20190402165401-39c137f31ad3/handler.go:105","msg":"","service":"github.com/tinkerbell/boots","pkg":"dhcp","pkg":"dhcp","event":"recv","mac":"00:04:4b:aa:aa:aa","via":"0.0.0.0","iface":"enp2s0","xid":"\"4b:5b:0f:5e\"","type":"DHCPREQUEST","secs":9} 

According to hwaddress.com, 00:04:4b belongs to NVIDIA. That means they are already PXE booting, and those GPUs are just waiting for something to run!

I want to build my own version of OSIE with the Kubernetes binaries included, so that the NUCs will join as Kubernetes nodes when booting in memory. And I want to run a Telegraf agent to ship telemetry data to InfluxCloud.

Right now, even though I have an SSD in some of the NUCs, I am not running any workflows, persisting any operating systems, or mounting any disks. I want to select a few of them to persist state via OpenEBS.

I want to develop power control via API to be able to reboot, stop, and start the NUCs programmatically. I know my options for interacting with the NUCs are limited, so, unfortunately, I’m pretty sure I won’t be able to set the boot device for now. But I am going to do my best, and if you have any suggestions or anything you’d like to see me try, let me know!

I don’t know how this new passion will go but if you are curious you can follow me on Twitter.

Published on

01 February 2021
