
Why It’s Harder to Get a Server Into Sydney than a Satellite Into Space

Zac Smith, Global Head, Edge Infrastructure Services

Not many people want to deal with getting stuff into data centers anymore, and with good reason!

Ask anyone who has spent time in a retail data center and they’ll confirm one of our industry’s dirty little secrets: unless you’re operating at massive hyperscale, putting anything into a data center is a slow, complicated, and expensive process.

I should know — I’ve been doing it for about 20 years. Even now, working for the world’s largest data center company, it’s still too hard. More on how we’re working to change that later.

For all of the advances made in deploying software over the last two decades, the way that most people deploy physical hardware hasn’t changed much: rack, stack, cable, configure, network, provision, troubleshoot, etc. Ideally never move anything. Oh, and keep it all standardized. Document everything in spreadsheets — you’ll need them during an outage.

You get the idea.

And yet, the importance of digital infrastructure is increasing, and we’re quickly moving from a homogeneous hardware paradigm to a heterogeneous one built around specialized silicon. And instead of putting infrastructure in just a handful of places around the world, we’re starting to push into hundreds of edge locations to address valuable new use cases.

In short: being good at getting hardware in (and maybe even out) of data centers of all shapes and sizes is increasingly important.

From Central Office to CI/CD

Data centers as we know them date back to the 1940s, when the Bell System stamped out standardized central offices across the United States and Canada. Over time, these evolved to house new kinds of telecommunications equipment, especially for supporting internet access and then hosting early internet properties. The confluence of long-haul cables and zillions of other wires made some of these buildings incredibly valuable to the interconnection of networks.

One of my favorites sits behind the Apple store in Palo Alto: the Palo Alto Internet Exchange (PAIX). See our CIO Milind Wagle’s blog post about it. Another, in Tribeca near my own neighborhood, is 60 Hudson — the former Western Union building. One is a former schoolhouse, the other an art deco office building. But both are now critical meeting points for the modern internet.

The Palo Alto Internet Exchange (now Equinix SV8)

Eventually our industry hit a fork in the road and started building larger, dedicated spaces for scale-out compute. These so-called server farms were located in areas where both land and power could be obtained more cheaply. So if you want to visit the center of the cloud, take a trip to the former farmland around Ashburn (Virginia), Prineville (Oregon), or Altoona (Iowa).

In these purpose-built “hyperscale” data centers, the largest infrastructure consumers (think Amazon, Google, Microsoft and Facebook) deploy hardware the way the rest of the world deploys software. Everything is on the table in the name of efficiency, including the design of the buildings, the cooling systems, and of course the racks and servers. When you install tens or hundreds of thousands of servers per year, it starts to look a lot like a CI/CD pipeline.

Looking back at videos (see the Google data center tour below) or articles (this Atlantic piece by Ingrid Burrington is a favorite) from five years ago, you can see just how much the largest infrastructure users have customized everything to meet their needs — and how far ahead they are compared to most everyone else.

What About the Rest of Us?

If you’re a hyperscaler deploying tens of thousands of servers a month, you have an entire ecosystem built around your needs and internal experts who manage and hone the process. You purchase space in multi-megawatt chunks and sign decade-long leases to support well-understood growth. New infrastructure is produced and delivered by ODMs, which literally bolt the pre-built racks down in your custom-designed facilities.

But let’s face it, that’s not how most of us roll.

If your idea of scale is 10 or 20 racks at a time, the efficiencies of hyperscale simply don’t apply. You’re probably not buying from an ODM, but instead receiving pallets of gear in individual boxes from Dell, HP, Lenovo, or Supermicro. It arrives at the data center (ideally on time) and then the rest is up to you. You get to deal with:

  • Unboxing all your infrastructure
  • Finding a place to dispose of the packaging
  • Ensuring the racks you are using can fit the gear (and cables out the back)
  • Making sure you don’t overload the power density or cooling capacity of the footprint
  • Racking all of the hardware (gotta love those special cage nuts! Square hole? Round hole? Play the lottery there!)
  • Providing all your own cables, optics, power supplies, etc.
  • Configuring everything. Updating the firmware, etc.
  • Returning any broken systems or parts, including finding that box you trashed
  • Taking photos and documenting MAC addresses for future maintenance (see the sketch after this list)
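
That last documentation step deserves better than a spreadsheet. Here is a minimal sketch in Python of the kind of per-server record that saves you during an outage; the field names and values are made up for illustration, not any standard schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ServerRecord:
    """One installed server. Field names are illustrative, not a standard."""
    rack: str        # e.g. "SYD1-R07" (hypothetical rack label)
    rack_unit: int   # bottom U position of the chassis
    serial: str
    bmc_mac: str     # out-of-band (OOB) management interface
    nic_macs: tuple  # in-band data interfaces
    photo: str       # path or URL of the install photo

# A couple of hypothetical entries, in place of the usual spreadsheet rows.
inventory = [
    ServerRecord("SYD1-R07", 12, "SN123456", "aa:bb:cc:00:00:01",
                 ("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"),
                 "photos/syd1-r07-u12.jpg"),
]

# Dump to JSON so the data is usable by tooling, not just by humans.
print(json.dumps([asdict(s) for s in inventory], indent=2))
```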

Welcome to the data center time warp!

The biggest pain you’re likely to encounter is cabling. As the internet shows us, cabling is an art form. It requires precision, patience, and the understanding that something will probably go wrong. Because something usually does go wrong. Why? Because navigating an incredibly complicated wiring set-up with up to 200-300 cables per rack (think 2 x power, 2 x data, 1 x OOB per server), all while 200 degree hot air blows in your face, is hard.
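
For the curious, the math behind that cable count is simple: five cables per server, multiplied by however many servers fit in the rack. A quick back-of-the-envelope check (the 40 and 60 servers-per-rack densities are assumptions for illustration):

```python
# Per-server cabling from above: 2x power, 2x data, 1x out-of-band (OOB).
cables_per_server = 2 + 2 + 1

for servers_per_rack in (40, 60):  # assumed rack densities for illustration
    total = servers_per_rack * cables_per_server
    print(f"{servers_per_rack} servers -> {total} cables")
# 40 servers -> 200 cables
# 60 servers -> 300 cables
```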

This is challenging enough when you have trusted, experienced people everywhere you need them. But as you stretch beyond a few locations into a few dozen (let’s assume one of them is Sydney), this can easily become a nightmare that involves a myriad of shipping companies, customs regulations, regional providers, spreadsheets, and late night WhatsApp group chats.

If we’re going to help hundreds of companies move infrastructure at software speed, we need to do better than that. If we can’t make it a delightful experience, we can at least make it boring and efficient.

Drawing Inspiration from Above

A few years back (in 2017) I was listening to NPR and had one of those driveway moments. The crew from Planet Money had decided to move beyond making t-shirts and drilling for oil to the next frontier: launching a satellite into space (listen to “Planet Money Goes to Space” for the full story). They even made a logo. I had just wrapped up deploying network and server infrastructure into 15 locations, an effort that had taken our entire company the better part of a year. And here was a group of radio heads launching their own satellite in a fraction of the time.

The difference? CubeSat.

In 1999, professors and students from California Polytechnic State University (Cal Poly) and Stanford University embarked on a mission to fit satellites into cubes of 10cm on each side. What they didn’t know was that these cubes, now known as CubeSats, would become a standard form factor for satellites.

Unlike the various attempts to standardize server infrastructure over the years (from blade chassis to the Open Compute Project), what CubeSat had done was provide a standardized, open delivery model. Everything inside the box — i.e., the valuable part — could be specialized and proprietary. Everything around the box could be boring and cheap.

With CubeSat, the cost and complexity of delivering that value to space was reduced dramatically with a shared form factor and deployment mechanism. Just hitch a ride on the next rocket.

I knew this was exactly what we needed for data centers, especially as hardware became more specialized and the number of locations exploded.

Enter Open19

Around the same time that I was listening to Planet Money, Yuval Bachar was incorporating the Open19 Foundation. After helping to lead hardware innovation at Facebook, Yuval was now at LinkedIn and designing for a much different scale. Before LinkedIn was acquired by Microsoft, it was operating at “web scale”: thousands of servers per year. Big enough that optimization really mattered, but too small to benefit from the hyperscale ecosystem.

That’s why Yuval (along with Flex, HPE, and Vapor IO) developed Open19: an open platform that could fit easily into any 19” rack environment for deployment of servers, storage and networking in a radically more efficient manner. Open19 was attractive for a few reasons:

  1. Standard, 19” Racks - As the name implies, Open19 was designed to work in 19” racks, which are the standard for retail, telco, and edge data centers worldwide (as well as your trusty IT closets).
  2. Balanced IP Model - Similar to CubeSat, Open19 works to lower the cost of the delivery model by “open sourcing” the form factor and connecting components, but allowing for proprietary innovation “within the box.” This enables end users or manufacturers to invest their special intellectual property (IP) inside the box, while reducing the costs of getting it into market via shared, common infrastructure components like sheet metal, power cables, and data couplers.
  3. No Cables - From an operational standpoint, this is the biggest win of all. With Open19, server “bricks” simply slide into available slots in a pre-deployed rack “cage,” with blind-mate connectors installed before the computers show up.
  4. Embraces Diversity - With Open19 cages, one can install different “bricks” into any slot. This helps operators take advantage of limited space in constrained environments.
  5. CAPEX Efficiency - By building the “cheap” infrastructure in advance (e.g., the sheet metal cage, power and cabling), Open19 allows operators to deploy the expensive parts (e.g., the servers) as needed. When you don’t need a cabling genius to install a few servers, there is no need to deploy everything in advance. As an important bonus, you can also remove servers from a rack or location quickly, redistributing equipment to where it is needed (see the sketch after this list).
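
To make the cage-and-brick model concrete, here is a toy sketch in Python (the class and method names are hypothetical, not anything from the Open19 spec) of what it means to pre-deploy the cheap cage and then slide bricks in and out as demand moves:

```python
class Open19CageSketch:
    """Toy model of a pre-deployed cage: power and blind-mate data
    connectors exist for every slot before any server arrives."""

    def __init__(self, slots: int = 8):
        self.slots = [None] * slots  # empty slots cost sheet metal, not servers

    def insert(self, slot: int, brick: str) -> None:
        """Slide a brick in; there is no cabling step, the connectors are already there."""
        if self.slots[slot] is not None:
            raise ValueError(f"slot {slot} is occupied")
        self.slots[slot] = brick

    def remove(self, slot: int) -> str:
        """Pull a brick back out so it can be redeployed where it is needed more."""
        brick, self.slots[slot] = self.slots[slot], None
        return brick


cage = Open19CageSketch()
cage.insert(0, "compute-brick")  # different brick types can share one cage
cage.insert(1, "storage-brick")
spare = cage.remove(0)           # freed up for another site
print(cage.slots, spare)
```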

The proof is often in the pudding.

Equinix and Open19

Suffice it to say, I’m a fan of Open19. First at Packet and now with Equinix, I’ve led our investment in Open19 and currently serve as the project’s president. Recently, Open19 became a part of the Linux Foundation family, marking a new chapter in its evolution.

At Equinix Metal, we’ve deployed thousands of Open19 servers to support our bare metal cloud and contributed a highly available QSFP+-based data cable, 48V DC power converter bricks, and several open server designs to the community.

What we’ve gained goes beyond the operational efficiencies that come from deploying with Open19. We’ve also tapped into our unique perspective (as both a data center operator and a “hardware as a service” provider) to help make it easier for customers of all shapes and sizes to move infrastructure at software speed.

Along with Tinkerbell (our open source provisioning engine), our investments in Open19 are part of our strategy for enabling thousands of companies to innovate with disruptive hardware and software, as well as for meeting our sustainability goals by improving power and cooling efficiency, reducing logistical waste, and improving the usability and reusability of IT assets across global footprints.

And who knows: maybe we’ll even make it easier for anyone to put a server into Sydney than to launch a satellite into space.

Published on 05 January 2021
