
Hello Ice Lake! Kicking off Equinix Metal’s Next Chapter With Intel

Our latest Intel servers boost performance for low-latency edge applications and heavy-duty enterprise workloads.

Jason Powers, Senior Director, Global Product Strategy

At Equinix Metal, we get out of bed each day excited to make state-of-the-art, interconnected, and sustainably powered digital infrastructure easily available to everyone. If you’re looking to build what’s next for our digital world, we want to help you get there.

This vision, of course, depends on close relationships with the leading creators of data center technology. This year we are opening new chapters in many of those relationships, but I’m especially excited about what we’ve got in store with the granddaddy of Silicon Valley: Intel.

First, a little background.

Our story with Intel began way back in 2015 with the c1.small.x86, our first server powered by an Intel processor. Built around a 4-core, 8-thread CPU, it was super popular with customers who valued high clock speeds and didn’t mind smaller core counts and memory footprints. In those early days of cloud native and Kubernetes, a larger number of smaller machines wasn’t just a defensible strategy; it was a benefit.

But things really took off on the “small side” when Intel introduced an 8-core, 16-thread part. In the two years since we launched the c3.small.x86, powered by that CPU (specifically the Intel Xeon E-2278G “Coffee Lake”), we’ve barely been able to keep up with demand.

The Network Effect

We’ve found that the sweet-spot workloads for these smaller instances are low-latency edge applications that need both processor speed (the c3.small features a 3.4GHz base clock that boosts up to 5.0GHz) and network performance.

But, as we like to say, bare metal isn’t anything unique. It’s the “Equinix” part that makes Equinix Metal special. And that means network performance.

As the leading neutral data center provider—where thousands of networks, cable landing stations, and cloud service providers physically converge—Equinix is often considered “host” to the edge of today’s internet. If you want the lowest possible latency to the biggest swath of the internet (and end users) globally, proximity to these networks is what matters. And that’s exactly what Equinix Metal unlocks.

With this in mind, it’s not surprising that we’ve seen ad tech companies scoop up the c3.small to reduce round-trip latency, alongside companies with media transcoding, gaming, and IoT use cases. With 2 x 10Gbps network interfaces and an uncongested routed Layer 3 network topology, these zippy servers have been a big hit for NFV (Network Functions Virtualization) and CNF (Cloud-Native Network Functions) workloads.
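
If you want to kick the tires on one of these machines, here’s a minimal sketch of spinning up a c3.small.x86 through the Equinix Metal API in Python. Treat the endpoint path, plan slug, metro code, and OS slug below as illustrative assumptions and check the current API reference before relying on them.

```python
# Minimal sketch: provision a c3.small.x86 edge node via the Equinix Metal API.
# Assumes METAL_AUTH_TOKEN and METAL_PROJECT_ID are set in the environment.
# The endpoint, plan slug, metro code, and OS slug are illustrative; confirm
# them against the current API reference.
import os
import requests

API_URL = "https://api.equinix.com/metal/v1"


def create_edge_node() -> dict:
    resp = requests.post(
        f"{API_URL}/projects/{os.environ['METAL_PROJECT_ID']}/devices",
        headers={"X-Auth-Token": os.environ["METAL_AUTH_TOKEN"]},
        json={
            "hostname": "edge-nfv-01",
            "plan": "c3.small.x86",        # 8-core E-2278G, 2 x 10Gbps NICs
            "metro": "da",                 # example metro code
            "operating_system": "ubuntu_20_04",
            "billing_cycle": "hourly",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    device = create_edge_node()
    print(device["id"], device["state"])
```

Once the device reaches the active state, it’s reachable over that uncongested Layer 3 network described above.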

So, Ice Lake or Rocket Lake? We’ll Take Both!

Because of how popular the c3.small.x86 has been, we’re excited to move to Intel’s “Rocket Lake” generation. Built on the Xeon E-2378G processor, our new m3.small.x86 will double the available memory capacity (to 128GB) and increase network capacity, with dual 25Gbps ports per server. The Rocket Lake chip also features an integrated Xe series GPU (the successor to Intel’s earlier embedded Iris graphics), a first for Equinix Metal.

While the m3.small iterates on the c3.small, we think a long-overdue upgrade to our “big boy” Intel server (the n2.xlarge.x86) will be even more compelling. Our new n3.xlarge.x86 is designed to do the heavy lifting required by traditional enterprise workloads, such as virtualization platforms from Nutanix and VMware and databases like Oracle and Postgres.

The n3.xlarge.x86 will be powered by top-of-the-line 3rd Gen Intel Xeon Scalable CPUs, with 32 physical “Ice Lake” cores (64 threads). Each machine will feature up to 1 terabyte of memory, 4 x 25Gbps network ports, and a pair of 3.8TB NVMe drives.

Both the m3.small.x86 and n3.xlarge.x86 will be built on OEM platforms and in the Open19 format, the open source data center hardware standard that Equinix helps to lead at the Linux Foundation. For the first time, our Open19-based hardware will feature a PCIe slot for expansion, enabling the addition of a discrete GPU (up to 75 watts) or an FPGA accelerator, both of which are widely used for AI workloads.

These scale-out options will be available as part of our new Workload Optimized lineup.

Looking at What’s Next

Beyond the current features that 3rd Gen Intel Xeon Scalable processors make possible, our new hardware lineup paves the way for additional innovations on Intel’s horizon, including support for DDR5 memory and PCIe 5.0. Both will enable even greater performance improvements across a range of popular data center workloads.

A big part of staying competitive in business today is being able to take advantage of the latest and greatest technology. As Intel’s new lineup shows, the most disruptive innovations are increasingly buried deep down in the silicon and hardware layers. Unlocking that value when and where you need it is a huge challenge. That’s why we’re building more than a bare metal cloud—we’re building a platform that helps connect our customers more directly (and more quickly) with the technology they need to compete and win.

Published on 14 January 2022
