Hardware is the invisible skeleton of the Internet. It not only gives us access to the world's store of knowledge; with the cloud, it also gives us seemingly infinite storage space for our files. Of course, the cloud is neither ethereal nor located in the air: it's a massive network of huge data centers. And running these facilities sustainably while innovating hardware to keep up with demand is one of the great technological challenges of our time. In the final episode of Traceroute, we take a closer look at hardware and why its advancement is crucial to the development of the internet. Joined by our guests Amir Michael, Rose Schooler, and Ken Patchett, we explore the synergy of software and hardware in data center services and its effects on the connected world.
MICHAEL:
Every time you send a text message or you upload a photo or you share a video with a family member, that has to go and sit on a server somewhere, inside of a building somewhere. People don't actually realize how reliant they are on this cloud infrastructure.
[MUSIC BUMP]
This is Traceroute [rowt]... a podcast about the inner workings of our digital world - all the physical STUFF that most of us never have to think about. In a world that is increasingly defined by digital, we look at the real people and services building, maintaining, and scaling the internet.
I’m your host, Grace Ewura-Esi, a technical storyteller at Equinix, the world’s digital infrastructure company. In this episode: COMPUTE.
[Act I]
Amir Michael began working in hardware in the early 2000s. It was his first job out of college.
MICHAEL:
Found a Craigslist ad for a company that needed someone to repair servers. I'd worked with PCs, and I'd built, you know, gaming rigs in the past, and I said, “Well, I could fix the server.” So I responded to the Craigslist ad, and this company called Google emailed me back and they said, Hey, we liked your resumé. Why don't you come in for an interview? And I said, Sure, sounds good. Small search engine. Why not?
In the two decades since, Michael’s made a career of building infrastructure, and he’s currently the chief technical evangelist for Lightbits Labs.
And over the years, he’s worked alongside a number of software engineers.
MICHAEL:
Good software engineers understand what the hardware is doing underneath, and good hardware engineers understand how the software is utilizing the hardware. And so if you really want to be good at what you do, if you really want to create efficient infrastructure, both hardware and software, you have to have an understanding of both of them.
But even some software engineers don’t think much about the physical technology that underlies their work.
Michael says many may have never actually been inside a data center – the giant warehouses of chips and processors where the cloud physically exists.
MICHAEL:
It's out of sight, out of mind. I created a device when I was at Google that lets you write a software algorithm, and it would measure exactly how many kilowatt hours it consumed running on a server, and I could then take that to a software engineer and say, “Run your algorithm and see how much energy you're using.” And that was a concept that was completely new to them, right? And they said, “What do you mean? I write software. What do you mean, I consume energy?” And I said no, there's a direct correlation between the number of for-loops you might put in your algorithm and the amount of energy you're actually consuming. And so they're oftentimes surprised by that.
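The device Michael built was internal to Google, but the underlying idea is easy to illustrate. Here's a minimal Python sketch, with an assumed 200-watt average server draw and made-up function names, that times a workload and converts the runtime into a rough energy estimate:

```python
import time

# Illustrative assumption: average power draw of a busy server, not a measured value.
ASSUMED_SERVER_POWER_WATTS = 200

def estimate_energy_kwh(func, *args):
    """Time a function, then convert its runtime into a rough energy estimate."""
    start = time.perf_counter()
    func(*args)
    elapsed_seconds = time.perf_counter() - start
    # energy (kWh) = power (kW) * time (hours)
    return (ASSUMED_SERVER_POWER_WATTS / 1000) * (elapsed_seconds / 3600)

def busy_algorithm(n):
    """Stand-in workload: more loop iterations means more CPU time, hence more energy."""
    total = 0
    for i in range(n):
        for _ in range(1000):
            total += i
    return total

print(f"~{estimate_energy_kwh(busy_algorithm, 10_000):.6f} kWh for this run")
```

Double the loop count and the estimate roughly doubles, which is the correlation between code and kilowatt-hours that Michael is pointing at.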
In the world of the internet, we want things to be invisible. That’s even true for a lot of people working in the tech industry. They focus on code and the things they can build online, but not the infrastructure.
But Michael says these days hardware is getting a bigger spotlight. Because the cloud doesn’t run without data centers.
And tech giants from Google to Facebook to Apple are investing more in the space.
MICHAEL:
There's no doubt that the amount of resources they put into the hardware is growing. There are thousands of people at large companies that are driving not only the design of the hardware, but the supply chains behind them as well. And if you just look at the financial reporting from these companies, they spend billions and billions of dollars on infrastructure. And without that infrastructure - well, no internet.
[Act II]
SCHOOLER:
Hello, my name is Rose Schooler and I am corporate vice president of Data Center Sales at Intel.
She’s been with Intel for over 30 years.
SCHOOLER:
We make semiconductors, all kinds of semiconductors: semiconductors that store data, and you'll hear about products like persistent memory and storage; products that drive connectivity, anything from silicon photonics to Ethernet controllers; a lot of different semiconductors. But we also have a pretty extensive software capability within the company.
The company was founded in 1968 - specializing in memory. They moved on to microprocessors - the central processing units of computers, often located on just one chip. She’s seen hardware grow by leaps and bounds in her career.
SCHOOLER:
I started off as a fab process engineer, right. So at that time, our process densities - you know, to think about an analogy - it was like you were laying transistors on the width of a hair. Now we're laying transistors on atoms.
By the early 2000s, they started shifting into a different kind of processing - which led to the networking and storage business. Otherwise known as today’s internet backbone.
SCHOOLER:
So let's think about the Internet. What do you do? You process and you store data, and the Internet is made up of a bunch of different devices. It could be servers, where the compute happens; it could be networking equipment like switches and routers and wireless access infrastructure. Our microprocessors are at the heart of all of those different devices.
Let’s slow down a bit here - and break down all the building blocks of what we need to get online today.
SCHOOLER:
You've got your silicon and you've got computers. Let's just call it microprocessors, right? You're going to have them in a PC. You're going to have them in a server. You're going to have them in a smartphone, in a network, in storage. And that's kind of your foundational element. And the more transistors that we put in those, the more features, the more performance and the more applications that you can run. Companies take that technology... and they build routers and switches that help build the network. They build storage devices that help manage the data that goes through the network. They build servers that run applications, or the hyperscalers build out clouds to run instances and manage workloads. You have people that are building your wireless infrastructure. So when you want to call your Uber on your cell phone, all of that compute - from the device to the edge to the network, to the storage, to the cloud - is all built around that foundational microprocessor capability.
So this is the moment when a traditional microprocessor company – cue the Intel inside noise – started caring about the internet. The groundwork was there – but the dawn of cloud computing and the growth of the internet opened the door. They started trying to figure out how their computer technology could support networking – the part of the business that Schooler was running at the time.
SCHOOLER:
We just said, you know what, there's a bunch of workloads in the network that our architecture isn't necessarily tuned for. We can run applications all day long. That's what we do on PCs and that's what we do on servers. But there's things like, how do you move packets, which are what carries the data through the Internet? And when you get a notification on your phone - that little circle in the corner of your app - that's part of the control plane function of the network, sending those signals back and forth. And then there's the wireless access.
Schooler and the company had a vision that they could run a lot of computing functions - the ones necessary for the internet - on their existing architecture.
SCHOOLER:
And it was a giant inflection point for our company, as well as for the companies that serve and help create the Internet. What it really did was bring the computer economies of scale to the network and the Internet. What used to be, you know, an industry that was built on, I'm going to build this piece of silicon with this operating system and this proprietary form factor and this custom software - it just broke wide open.
This meant that Intel wasn’t just making chips for home computers - it could become integral in making the parts that make up data centers, used for networking… and eventually the cloud.
SCHOOLER:
There was so much cool stuff happening in terms of the next transformation of our industries related to the Internet. Right. You had the emergence of the hyperscalers. You had the transformation of networking. You had storage moving from big fixed-function hardware over to software-defined.
And it’s not just the cloud and networking that have grown - more growth in hardware is on the horizon as things like Artificial Intelligence, 5G, and edge computing gain traction, influencing silicon and chip makers to expand their horizons as well.
SCHOOLER:
What we're seeing is a big transformation. It's creating new business models, new technology, new approaches to market, new ecosystems. You see the Equinixes of the world, you see managed service providers, you see cloud management platforms. It's like this whole new cool community, an ecosystem that's being brought up from these trends that we're seeing in the market around commodification. And what's driving that? Again, if we go outside in, it's quick access to technology. You know, you can get on your laptop and provision assets in real time, so you don't have to go through the whole supply chain - I'm going to order this, they're going to ship it to me, I'm going to land a server in my garage if I want to do a startup. You just go access technology in real time. And I think it's driving new companies, new innovation and new approaches to the market.
[Act III]
OK, so the cloud physically exists in data centers. And Amir Michael of Lightbits Labs says it was in the early 2000s that these data centers started getting bigger and bigger.
MICHAEL:
In the early 2000s is when the really large-scale data center sites - we're talking, you know, tens, even hundreds, of megawatts - really started being planned and started popping up. Google started building their own data centers. Yahoo started building their own data centers.
But for relatively smaller companies, it didn’t make sense to build and maintain their own massive data centers.
MICHAEL:
Running infrastructure can be fairly complex. And if you don't do it well, it's very expensive, because you end up being inefficient in how you use your resources. And so that is somewhat disconnected from a product, right? If you're a consumer product, you're streaming videos, that's what you care about. You don't necessarily want to spend engineering resources on managing infrastructure. And so a company that does that well and does that very efficiently - like a cloud service provider - can have an advantage and sell that and make a profit from their efficient management of that infrastructure. And for a company that's trying to build a product, it's a great opportunity for them to be able to focus on their product and not on all the infrastructure that needs to happen on the back end. You know, not all that different from someone that might be selling a clothing line, and they want to focus on the design, but not necessarily on the manufacturing.
So instead of building their own data centers, smaller companies in the early 2000s increasingly turned to other businesses - like Amazon - to meet their cloud and infrastructure needs. So why did the cloud take off? Michael says a major reason was the rise of connectivity-based applications - more people doing things like editing documents in their browsers rather than on their local devices.
MICHAEL:
Anything you do on your smartphone today likely goes through some sort of network connection. Right? And that really is what spurred the demand for a lot of this data center and remote server infrastructure - really connecting all of these interconnected apps today, whether it's social networking or finance. You know, no one really goes into a bank anymore. Everything's just done over the network, over these cloud resources today. It's how we've become accustomed to getting a lot of work done today. And so you need all that infrastructure to drive that. And I think it's just going to become more and more so in the future as well.
All of this has meant a growing demand for space. The companies maintaining the data centers have had to build and expand. Data centers have been scaling up. And as the industry has matured, there’s also been a need to optimize.
MICHAEL:
The focus has come a lot more on efficiency and how to use as little energy as possible for the surrounding facilities and power distribution and cooling. That's very important.
Tech giants that previously focused on software are more and more interested in the hardware game. Michael says one reason for that is the chance to build hardware tailored to specific applications.
MICHAEL:
If you have a unique application - for example, a search application, whatever that is - if you run that on custom hardware, you're going to do that very, very well. Very, very quickly. Right. If you use a generic server to do that, that server is designed to do many things well, but not necessarily optimized for any one thing. And so that custom architecture gives you the best performance.
Another reason is the scale itself - how much these tech giants are now relying on hardware.
MICHAEL:
They build so many servers - in some cases, millions of new servers a year are deployed into some of these large companies. You need to have a custom supply chain for that, right? You know what your demand looks like. You can't really rely on a third party for something that is that critical to your business, right? So it's worth it for you to pay for the extra overhead to have that supply chain in place in order to support the scale at which you operate. You need to have that predictability, because you can't afford to not have enough capacity for your service when you're generating so much money off of it.
But innovation in hardware isn’t as easy as in software.
MICHAEL:
Hardware has development cycles that are much longer than software. From the time you make a design decision to the time you see the impact of that design decision can be months, because something physical has to change. A factory has to go and produce this piece of hardware that you designed. You have to receive that and then power it on and test it, and the feedback cycles are much longer. And so the agility that you have with the hardware development process isn't as high as with software. You want to make sure that everything is correct before you send it out to manufacture, because you can't just go back and change one line of code and automatically have your product updated that way.
And Michael says further innovation when it comes to data centers is key.
MICHAEL:
The next thing that's coming up is density. I think people are going to try and use a lot more power and put a lot more resources into smaller spaces, which is a challenge as well. And so there's going to have to be interesting solutions around power distribution and cooling and making that happen as people try and cram more of this compute power into smaller and smaller spaces. And largely that's being driven by wanting to be as close as you can to your end users. And you can't have big, massive, you know, hundred-acre data centers near all your users. That's just not going to work. And I think that's going to be one shift we're going to see coming in the future, too.
The stakes, Michael says, are high.
MICHAEL:
If we're not able to figure this out, the amount of innovation we have by being able to cram more compute resources into smaller spaces will start to go away. And this pace of innovation that we've had will slow down. And so it's important that we are able to solve that, because innovation is important for our economy, for creating jobs, for our technology. It's important for the environment to be able to do more with less resources. And so it's something that is absolutely critical that we figure out.
[Act IV: DATA CENTERS]
Ken Patchett has worked on data centers for over 25 years - including stints at Microsoft, Facebook, Google, and Oracle. But he started out as an iron worker.
PATCHETT:
In fact, I helped build the Canyon Park Data Center in Bothell, Washington, when I was eighteen years old. I had no idea what it was.
That early experience took him places he’d never imagined. Like the Olympics.
PATCHETT:
I think it was the 2004 Olympics. I was sitting in front of a SQL server that was hosting the Olympics, with an Iridium phone and a radio, just in case something happened to that single server. And if I recall, it was a Compaq ProLiant 5000 series - one Compaq ProLiant in the box and four or five SCSI-attached drives. At the end of the day, that technology was all-encompassing. It was huge, and it had a huge blast radius if that server went down.
In his years building data centers, he’s seen the internet - and customer expectations - change quickly.
PATCHETT:
Data and the usage of data has become much like a microwave in a home: it is simply required, it is expected. Most people don't look for it, they don't need it, and they don't really think about it that much until it doesn't work.
Patchett knows just how much it takes to make a data center run well.
PATCHETT:
The cloud is a combination of data centers of various sizes across the globe that are all connected through network - fiber in the ground or satellite systems or some kind of connectivity. And all of that costs a lot of money to do. When we build data centers, a rule of thumb is four to one. So where it costs me one hundred dollars to build a data center, it costs me four hundred dollars to fill that data center up with servers and computers and the network gear that's necessary to provide this data, or this information, to an end user. So a lot of people, they just think, oh, if I had a cell phone or if I had a desktop, I would be online. Well, that's not actually true. What's true is you have to have interconnectivity all the way back to where the servers are in these data centers.
When Patchett first started building data centers, redundancy was the goal, so if something broke, there were backups. And companies built data centers that were like bomb-proof bunkers.
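As a quick back-of-the-envelope sketch of that four-to-one rule of thumb, here's a minimal Python example; the hundred-million-dollar build cost is a hypothetical figure, not one from the episode:

```python
# Patchett's rule of thumb: for every dollar spent building the facility itself,
# expect roughly four more to fill it with servers, storage, and network gear.
FILL_TO_BUILD_RATIO = 4

def data_center_cost(build_cost_dollars):
    """Return (fill-out cost, total cost) implied by the four-to-one rule of thumb."""
    fill_cost = build_cost_dollars * FILL_TO_BUILD_RATIO
    return fill_cost, build_cost_dollars + fill_cost

# Hypothetical build cost, purely for illustration.
fill, total = data_center_cost(100_000_000)
print(f"fill-out: ${fill:,}  total: ${total:,}")
```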
PATCHETT:
I should say, nobody ever got fired for being too safe or for thinking of any crazy corner case that might happen. They used to say, and they still do in large part, don't build a data center near an airport. Don't build it on the curve of a road where a car could come through the window. Don't do this. Don't do that. At the end of the day, they were building those rule sets in order to stay up. Uptime was the most important thing, simply because we couldn't handle failure. Software wasn't predictive, software couldn't readjust. Employees didn't exist, as a for instance. So anything that shook the bones of the Internet would cause outages for people around the world.
As they built more data centers, and the cloud grew, it made more sense to frame building around resiliency. Software and hardware have to work together.
PATCHETT:
Data centers fail, data centers break. So as our reliance on this technology, or this microwave, has grown, so has our need to find ways to keep it available at all times. Because somebody who wakes up in, let's say, Japan in the morning does not really care that the data center of Microsoft in Bothell, Washington, had a problem yesterday. It doesn't matter to them. They won't know about it, they won't see it, and it shouldn't have any effect on them.
It was a maturing of the industry and an understanding of what was really needed to run the internet.
PATCHETT:
We are aware as an industry that things will fail, that we need to spread them out, we need to distribute them, and we need to have the end users not be impacted by any technological issues that happen. And there's a myriad of them, and they're all physical. So a train started on fire in a tunnel. A backhoe cut a fiber line going north and south. Right. And that brings technology to a standstill. And so, when we realized that there's a better way to build a data center - that it doesn't have to be bomb-proof, as a for instance - that would save on some materials cost.
Data centers started redesigning hardware to optimize it for different uses, depending on who’s renting the server space. It’s kind of like optimizing a race car - stripping out all the parts you don’t need to help ensure efficiency.
PATCHETT:
The industry always goes in two ways. It's like a bow tie. The race car guys are stripping all the weight out, getting it lightweight, making it go faster and faster and faster. And you want to do that, and you've got to do that. At the same time, somebody is going to be working on a motor, and that motor is going to be bigger, better, faster. And maybe we have to enhance the frame. Maybe we've got to add a new set of tires. We have to have other technology.
PATCHETT:
You know, when you think about it, this is kind of like those cool race car guys back in the day on the strip, you know, in Southern California. Somebody is in their shop working on a motor. Somebody else is working on a new type of rubber, right, you know, to stick to the road a little bit better. Again, these companies are all working together and talking together now in such a way as to create infrastructure that is simply better than it has been. I can do more with less. I can focus on my company's workload and type of work, because with the products that exist out there now, I can buy or build or engineer the thing that is most appropriate for my company's type of workload versus, let's say, company XYZ's.
And what that process meant for the data center layouts was...
PATCHETT:
Now we put all compute in one rack. We put all storage in another rack. We have them interconnected through top-of-rack network gear, and we're able to leverage, again, resiliency more than redundancy by having all of these things integrated with one another. So we lightened the load, let's say, on a motherboard. We made them purpose-specific for the work they were trying to do.
In the quest for more efficiencies, companies also realized that their proprietary hardware didn’t always have to be so fortress-like, either. They started embracing open hardware projects.
PATCHETT:
Open Compute really opened the door and said to everybody in the world: this data center, this infrastructure space, the server space - this should not be fight club. It's better for everybody to come together and figure out how to get bigger, better and faster and more resilient with these servers and these components. You need to think about it as a rising tide floats all the boats - partner with a lot of folks to build an infrastructure that we can then build software products on top of that actually make a better world.
If you look at the data centers that are being built, they're more efficient. We're trying to get more compute units out of every processor. We're always working on the hardware to try and make the hardware take less power and deliver more output or throughput for the workloads that we're working on. So, again, these companies sharing their knowledge and their information across the world is allowing us to build bigger, better, faster, cheaper. The data center space should not be fight club; the data center space is where humanity is able to come together and share their technology, share their knowledge and share those experiences.
[Act V: WHERE WE GO NEXT]
Amir Michael says the hardware space has come a long way since the early days of manufacturing transistors and chips …
MICHAEL:
It started off where you could almost take a warehouse and perhaps seal it off to keep the dust out. And you'd buy specific silicon manufacturing pieces of equipment and you'd set them up and you would start manufacturing your own silicon. The barrier to entry was fairly low at that time.
Fast forward a couple decades to today – when materials are smaller and more sensitive to impurities and to dust…
MICHAEL:
The requirements for building anything have become much more stringent. And today, as you hear about in the news all the time, we're talking about billions of dollars to set up a manufacturing site - highly specialized equipment, highly specialized facilities. And the barrier to entry is much higher now.
Advances in chip production go hand in hand with advances all the way up the stack - and some of the most exciting internet trends on the horizon have hardware to thank, from developments in satellites to artificial intelligence to edge computing. And 5G.
MICHAEL:
You know, things like 5G, self-driving cars, all of those things that are going to be, you know, game changing for us, you need to be able to have hardware that can meet the demands of those applications, right? We’re asking the hardware to do more and more things for us today. We want our cars now to be able to drive around and recognize pedestrians and cyclists and other traffic and stop at red lights and do that all safely. That's a lot of processing power. In the past, you needed a supercomputer to do those things. Now we want to take what a supercomputer was used for and put that in your car. That's pretty cool, and you need really cool hardware for that.
The hardware will need to get there… in order for the software to continue to improve.
MICHAEL:
Lots of companies and economies are now being built on this type of infrastructure. And if it's expensive and cost-prohibitive, it limits the growth of our economy and it limits job opportunities for people, right? And so the more efficiently we can do it, the more efficiently we can build, you know, these bricks that make up our digital economy, the better it is.
And ultimately, the stakes for better hardware are as high as they get, says Ken Patchett.
PATCHETT:
One of the most important things that I think about, and even the reason I'm in this career, is that access to information, knowledge, data should be a fundamental human right. And a lot of people in our space say, well, it doesn't matter, nobody died. Well, you know, they do. They do. There's somebody right now in an underserved market in some part of the world who's wishing or trying to figure out if their child has dysentery or not. And access to that information, to that knowledge, is the reason that I got into this space - so that we could build infrastructure such that other people around the world would have access to this and they could have a better life. So we need to keep that in mind. This is real stuff, folks.
[Credits]
This has been Traceroute (rowt), a seven-part series about the inner workings of our digital world ... from Equinix, the world's digital infrastructure company.
To find more from the people behind the internet, check out Origins-dot-dev for an up-close and personal look at our digital world through a creative lens. And if you’re ready to dive in deeper, visit youtube-dot-com-slash-Equinix-Developers for developer-led, livestreamed technical content.
Thanks for listening to Traceroute, an Equinix production. Our theme music is by Ty Gibbons. This series was produced by Rococo Punch. Be sure to subscribe on Apple Podcasts, Spotify, or wherever you get your podcasts. You can learn more by heading to metal dot equinix dot com slash traceroute. Want to get in touch? Reach out to us at [metal dot equinix dot com]. And make sure to leave us a review and tell us what you think.