
The Liquid Cooling Imperative

As the world’s digital infrastructure company, a key aspect of our corporate mission is to go beyond the basics and protect our planet, our people, and the communities where we operate.

Zac Smith, Global Head, Edge Infrastructure Services

Equinix has invested aggressively to design efficient data centers that reduce our carbon and water footprint (water is used for evaporative cooling), and we've made steady progress towards using 100% renewable energy across our entire portfolio, hitting 90% this past year. We recently committed to being climate neutral by 2030, backed by science-based targets.

Although many of our customers interact with us programmatically and never set foot in our data centers, at heart we’re builders and operators of real, physical things. Those things — from air conditioners and cooling towers to backup generators, servers, network switches and storage appliances — take a significant amount of energy to keep online.

Hot Chips

But there is another trend at play — and it’s a hot one.

The pace of technology innovation (and consumption) is accelerating, fueled by the transformation of businesses across all sectors into technology-first companies. These digital leaders are making substantial investments into infrastructure to power traditional workloads, but also pushing the boundaries with machine learning, edge computing, big data and network-heavy applications.

Alongside the explosive growth of hyperscale public clouds, these use cases are driving the creation and adoption of technologies that demand increasing amounts of energy. All signals suggest that over the next several years, our industry will be defined by power-hungry computers with bigger chips built on ever-smaller process nodes, more cores, bigger dies (chiplets, anybody?), faster memory, smarter NICs, and tons of accelerators.

Power Up to Scale Up

Today the most power we can put into one of our Open19 server "bricks" is 400 watts, and we need to drive that number much higher to feed the silicon that is coming down the pike.

It takes around 150-200 watts to power the silicon in a processor. In the near future, that number will jump to 350 watts and beyond — just to power the processor, not to mention all of the other components like memory, flash, NICs, and fans! To tackle this, our hardware development work has moved from feeding servers 12 volts of power to supplying highly efficient, native 48-volt power to each brick, with a ceiling well over 2,000 watts. That's 5X the power!
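
To see why the jump from 12 volts to 48 volts matters, here is a minimal back-of-the-envelope sketch in Python. For a fixed wattage, a higher supply voltage means less current, and resistive loss in the distribution path scales with the square of the current. The 2,000-watt brick comes from the paragraph above; the distribution resistance is an assumed, illustrative value, not a measured Open19 figure.

    # Rough sketch: resistive loss when delivering a 2,000 W brick at 12 V vs. 48 V.
    # The 0.005-ohm distribution resistance is a hypothetical value for illustration.

    def delivery_loss_watts(power_w, voltage_v, resistance_ohm):
        """I^2 * R loss for carrying power_w at voltage_v through resistance_ohm."""
        current_a = power_w / voltage_v
        return current_a ** 2 * resistance_ohm

    BUSBAR_RESISTANCE_OHM = 0.005  # assumed, not a real Open19 spec

    for volts in (12, 48):
        loss = delivery_loss_watts(2000, volts, BUSBAR_RESISTANCE_OHM)
        print(f"{volts} V feed, 2,000 W brick: ~{loss:.0f} W lost as heat in distribution")

Quadrupling the voltage cuts the current to a quarter and the resistive distribution loss to a sixteenth, which is the basic argument for native 48-volt power at these wattages.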

But doesn't amping up our power go against our sustainability goals? Glad you asked. With 5X the power comes roughly 5X the heat to remove. It's seemingly an endless cycle: more power, more heat, more cooling required. So how do we break this cycle?

What’s in the Toolbox?

Powering and cooling dramatically “hotter” IT equipment while pursuing a carbon-neutral future as fast as possible means that we need to focus on the areas where we can really move the needle. Here are a few of them:

  1. Reduce waste in getting things into (and out of) our data centers
  2. Reduce waste in the stuff "around the server" (think cables, PDUs, racks)
  3. Reduce or eliminate the water used by our cooling plants
  4. Improve power utilization (less conversion, less loss, smarter allocation)
  5. Cool things more efficiently. Specifically, cool what needs to be cooled
  6. Make our own energy and store what we can for later use
  7. Capture the heat that is generated and reuse it, or provide it to other use cases

With Equinix Metal, we are operating new and important parts of the equation inside the data center and the IT rack. This offers us an opportunity to partner with Equinix's incredible design and construction teams on "early adopter" investments that touch both data center and server (rack level) innovation. In other words, Equinix now has an "at scale" laboratory it can leverage to rethink how servers in our data centers are powered and cooled.

In terms of the list above, our efforts within Metal are currently focused on more efficient power (number 4) and liquid cooling (numbers 3, 5 and 7). Our goal is to dramatically reduce the amount of energy we consume by bending the efficiency curve (PUE) of how we extract heat from our servers and turn that heat into energy or productive use cases through heat capture. An additional benefit is a dramatic reduction in water use.
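
As a purely illustrative example of what bending the PUE curve means, the Python sketch below compares the same 1 MW of IT load at two hypothetical PUE values; neither number is an actual Equinix figure.

    # PUE = total facility energy / energy delivered to IT equipment.
    # Both PUE values below are hypothetical, chosen only to show the arithmetic.

    IT_LOAD_KW = 1000.0  # assumed 1 MW of server load

    for pue in (1.5, 1.2):
        total_kw = IT_LOAD_KW * pue
        overhead_kw = total_kw - IT_LOAD_KW
        print(f"PUE {pue}: {total_kw:.0f} kW total draw, {overhead_kw:.0f} kW on cooling and other overhead")

At that scale, every tenth of a point shaved off PUE is roughly 100 kW that no longer has to be generated, delivered, or rejected as heat.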

Community Driven Infrastructure for Sustainability

Even for complex challenges like net zero data centers and cooling multi-kilowatt servers, we tend to revert to our trusty playbook: investing in ecosystems.

A cornerstone of our investment is Open19, the Linux Foundation’s project dedicated to open source and open standard data center innovation. While we originally leaned into Open19 to standardize operations and reduce installation cost and time, the platform provides us an especially interesting way to make structural progress on heat capture and cooling. Even better, the work is done in the open and benefits anyone looking to deploy in 19” racks.

In addition to Equinix, Open19 is supported by industry leaders like Cisco, ASRockRack, Inspur, Molex, Vertiv, Zutacore, Delta, Submer, Schneider Electric, and Virtual Power Systems. This year the group started work on an “Open19 v2” specification that will extend Open19’s popular blind-mate connector model (pioneered by LinkedIn in its original Open19 design) into two new areas: a higher power envelope and liquid cooling.

Through dozens of weekly meetings and hundreds of hours of spirited debate, an Open19 working group has designed and proposed a new "plug and play" blind-mate coupler for liquid cooling systems. Executed thoughtfully, we believe this design can support all the major liquid cooling technologies, including immersion, single-phase and two-phase — all while maintaining a common standard that would bring economic and adoption benefits.

Of course, there are two sides to any liquid cooling equation: the "in rack" system along with the necessary data center mechanics. The complexity of deploying these two pieces of the puzzle together has limited liquid cooling adoption to at-scale users, but we think Open19's new blind-mate approach is one way of dramatically lowering this barrier to entry.

If the colocation industry can support the mechanical side of liquid cooling (as well as the practical issues of training, certification, regulation, maintenance, etc.), then I think server manufacturers will have the confidence to create liquid-capable solutions that can be deployed in a wide variety of data centers. I call it solving the chicken-and-egg problem for liquid cooling, and by doing this work in the open with the Linux Foundation, I'm hopeful that we can spark an industry-wide movement.

Later this year, we expect to ratify the Open19 v2 specification and move to the proof of concept stage. Fortunately, Equinix’s Office of the CTO (led by Justin Dustzadeh) has been operating liquid cooling labs for over a year and has significant experience with various technologies.

Just Be Cool

We have an opportunity to start a network effect for liquid cooling, and it starts with us bringing liquid cooling into our facilities in a scalable way with Open19. As Equinix Metal, we hope to be the “anchor tenant” for liquid cooling inside of our data centers, paving the way for other customers who build their own infrastructure and might have a preference as to how they want their equipment cooled. That’s why Open19’s goal of blind-mate, leak-free connectors and manifolds is so important. From single-phase and two-phase direct-to-chip cooling to immersion and air assist, customers will be able to cool what and how they want.

How cool is that?!

We want to build an environment that invites systems builders to participate. We would love to work with companies that want to get ahead with liquid cooling. Sound like you? Let’s connect and see how far we can go together!

Behind the Magic Curtain

The cloud can feel like a magical place, but it takes a lot to make it run. Here are the nuts and bolts of it: PUE is really about reducing the number of electrons wasted doing anything other than powering IT equipment. It measures how much of a facility's energy goes to computing gear versus everything else, such as lighting and cooling. Even turning the lights on in the facility counts as waste!

Taking it a step down from that (literally): we also have to step the power down from the utility before our data centers can use it. A lot of power is lost going from a 10,000-volt utility feed to 480 volts, then again to 208, and (in the US) stepping down once more to the 120 volts used by the IT equipment.
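
To make those step-down losses concrete, here is a rough Python sketch of how power "dies" on its way to the IT gear. The 97% per-stage conversion efficiency and the 1 MW feed are assumed numbers for illustration, not measured Equinix values.

    # Hypothetical cascade of voltage conversions between the utility and the servers.
    # Each stage is assumed to be 97% efficient; the remainder becomes heat in the facility.

    STAGES = ["10 kV utility -> 480 V", "480 V -> 208 V", "208 V -> 120 V (US IT equipment)"]
    STAGE_EFFICIENCY = 0.97  # assumed, for illustration only

    power_kw = 1000.0  # assumed 1 MW entering the facility
    for stage in STAGES:
        lost_kw = power_kw * (1 - STAGE_EFFICIENCY)
        power_kw -= lost_kw
        print(f"{stage}: ~{lost_kw:.0f} kW lost, ~{power_kw:.0f} kW continues downstream")

Losses like these draw power from the grid without doing any computing, which is part of why less conversion and less loss (item 4 in the list above) move the needle on PUE.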

But here's the problem: all of those conversion losses become dead electrons that show up as heat inside the data center.

It's a vicious cycle, really, not unlike how we eat: we bring power into the data center, do something useful with it, and waste some of it in the process.

Waste Not, Want Not

Most companies throw their dead electrons out into the atmosphere and wash their hands of it. But that's not us. We've worked on a patent within our organization to put those "dead electrons" to work, possibly even producing electricity again. We've been testing this out in Helsinki, using waste heat from the data center to warm nearby residences.

And then we thought: what if we could develop a heat recovery mechanism to capture all that heat and power our own data center cooling systems? We could literally reduce the water required to dissipate heat into the atmosphere by hundreds of millions of gallons. Think about what something like this could do for a place like California! This kind of recovery cycle will launch us further down the path of sustainability, and get us one step closer to achieving our goals. But we can't do it alone.

Stepping Up the Standard with Open19

Open19 is an open platform for servers, storage, and networking that can fit in any 19-inch data center rack environment. And if you're new around here: we're big fans. With Open19, we can work with our partners and the broader ecosystem on step-function improvements in the efficiency of our data center infrastructure. Currently, we are working with groups within Open19 to fundamentally improve the sustainability of how we power computers and data centers.

Simply put, it’s a way for us to enable putting all the stuff in all the places with the least amount of friction. For V1 of Open19, we set out to design a cage that would basically act as a plug-and-play infrastructure for any module (or “brick”) adhering to the Open19 fit.

This means our Ops Team can prebuild a low-cost infrastructure of sheet metal, cabling, and power ahead of demand for cage space. That way, customers' bricks have a place to live that's rigged to go the instant they're ready to move in.

The V1 deployment has been a success in driving a more uniform infrastructure, and we have learned a lot about what our next steps should be. For V2, we need to get more power to that infrastructure so we can scale with customers' needs.

Published on 06 October 2021
