What Is Specialized Hardware and Why Open Source Will Drive Adoption
Over 35 years ago, Alan Kay gave a now-famous talk at a seminar called Creative Think, during which he even more famously remarked: “People who are really serious about software should make their own hardware.”
History has confirmed Kay's thesis repeatedly, and today’s cloud-fueled landscape is proving once again just how impactful non-homogeneous hardware can be.
In the digital infrastructure space, a new class of hardware is emerging that includes SmartNICs, FPGAs, DPUs, TPUs, and custom Arm-based CPUs — each designed to accelerate particular workloads or applications, and to do so more efficiently than generic solutions. As these innovations take center stage and more users begin to wrap hardware around their software (instead of the other way around), we are watching the slow end of an era of homogeneous CPU dominance, the onset of a powerful tool for innovation, and an exciting rallying point for the open source community.
The one-size-fits-all approach of conventional server CPUs and systems architecture is increasingly inefficient for many contemporary workloads, because those chips are designed to run general-purpose business logic. For applications that need to run at any kind of scale, it is much more efficient to design a “custom cloud” tailored to that application. Meanwhile, Moore’s Law is slowing down just as consumer demand for content (like Netflix) and smart devices (like connected cars) is heating up.
Power Consumption
Both financial and ecological sustainability have made power consumption an increasingly central topic. Specialized hardware — optimized for particular use cases — requires less power than generic CPUs. For example, although there’s a lot in the news about the power consumed by GPUs (let alone ASICs or FPGAs) for mining cryptocurrencies, that consumption pales in comparison to the energy it would take to do the same job on generic CPUs.
Even within the CPU market, we’re seeing how optimization for performance and power consumption is a major driver of change. Arm CPUs typically perform favorably against x86-based architectures on power consumption, mainly because they are more purpose-built for specific use cases. This is one of the reasons Arm has been so successful in the battery-conscious mobile device market — but we’re now starting to see these energy savings (and performance gains) move into more powerful devices, from Apple’s recent Arm-based M1 announcements all the way to datacenter-scale deployments like AWS’s Graviton2 and the Altra lineup from chip startup Ampere.
Performance
While power consumption is often an important consideration, sometimes pure performance is the main driver. As mentioned above, GPUs are more efficient than generic CPUs for certain workloads, such as video gaming and machine learning. Their ability to perform massively parallelized computations means they’re now finding adoption in fields as diverse as bioinformatics, intrusion detection, and video transcoding.
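To make the parallelism point concrete, here’s a minimal sketch using JAX (the function and array size are arbitrary choices, and it assumes a CUDA-capable GPU is available): the same element-wise expression runs unchanged on a CPU or a GPU, but a GPU spreads the millions of independent operations across thousands of cores at once.

```python
import jax
import jax.numpy as jnp

x = jnp.linspace(0.0, 1.0, 10_000_000)  # ten million independent elements

@jax.jit  # compile once; JAX dispatches to a GPU when one is present
def transform(v):
    # Embarrassingly parallel: no element's result depends on any other's
    return jnp.sin(v) * jnp.exp(-v)

y = transform(x)
print(jax.devices())  # reports the backend actually in use, e.g. a CUDA device
```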
Application-Specific Integrated Circuits (ASICs) are about as specialized as hardware gets: a chip created to perform a specific set of tasks and nothing else. They are the peak of performance, and you’ll find them in every appliance around you — but from a developer’s point of view they’re boring, because we can’t program them!
Enter the Field Programmable Gate Array (FPGA). “Field programmable” means exactly that: the chip can be reprogrammed “in the field,” after manufacture. FPGAs consist of thousands of Configurable Logic Blocks (CLBs) and other components that can be programmed to make the device behave like a microprocessor, an encryption unit, a graphics card, or anything else you want it to be. FPGAs represent a programmable, as-close-to-ASIC-as-possible compromise that has seen adoption in fields as disparate as high-frequency trading and automotive (and more recently, led by Microsoft, in cloud computing). However, that flexibility comes at a cost: FPGAs generally consume more power than an ASIC doing the same job.
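For a feel of what “programmable logic” means, here’s a toy Python analogy (not real FPGA tooling — in practice you’d use a hardware description language like Verilog): a small lookup table (LUT), the core of a CLB, is just a truth table held in memory, and “programming” the device amounts to loading a different table into the same silicon.

```python
def make_lut(truth_table):
    """Model a 2-input LUT: in real hardware, the table lives in SRAM cells."""
    return lambda a, b: truth_table[(a, b)]

# The same physical block becomes an AND gate or an XOR gate
# depending purely on the configuration loaded into it.
AND = make_lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
XOR = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

assert AND(1, 1) == 1 and XOR(1, 0) == 1
```

Chain thousands of such blocks together with configurable routing and you can build an adder, an encryption core, or even a soft CPU on the same fabric.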
Scalability
Increasingly, we’re seeing demand for power-conscious, highly performant hardware at scale. This cuts both ways, from the exponentially increasing volume of data being collected and processed each day, down to the smaller scale of the kind of edge data centers that we operate at Equinix. Specialized hardware is being used to help with both.
As edge computing steps up to the demand of increasingly distributed data sources, the constrained nature of edge deployments requires that they’re architected with efficiency of every kind in mind. Companies betting on edge don’t have the luxury of running thousands of servers in a traditional data center, so the machines they do run will benefit heavily from specializing the hardware to the workload.
Key to Success: Bottom-Up Adoption via Open Source
The more specialized the hardware is to the application, the better the potential power consumption, performance, or scalability. But there are clear trade-offs that can hinder the widespread adoption of specialized hardware: namely, accessibility and compatibility.
Due to its massive adoption, almost everything works on an x86 CPU, which provides a powerful incentive not to experiment with and adopt specialized hardware types. Hardware manufacturers face two problems when addressing the issue of compatibility:
- Where should they prioritize their compatibility efforts?
- How can they get all of that work done?
Open source communities can help with both.
In August, I wrote on the WorksOnArm blog about the incredible energy that the cloud native computing movement has unleashed. By attracting the attention of open source developers (who are naturally drawn to exciting new problem domains), Kubernetes and its family of cloud native projects revolutionized computing in four short years.
Hardware manufacturers would be well advised to follow Arm’s lead. From my previous article: the WorksOnArm project is a big reason why “when Ampere’s Altra systems land in data centers, all major CNCF projects will already be compatible.”

Harnessing open source communities is no free lunch, however: they need to be encouraged and enabled to succeed. Get it right, though, and your new hardware startup might be able to offset significant development costs and build a community of eager adopters in one go. That is, of course, if you can also solve the accessibility problem.
While it might seem obvious to say that developers need access to specialized hardware in order to innovate with it, it’s actually a little more complicated. An entire generation of developers has grown up with the cloud, which means that for the most part they expect to consume their hardware innovations through an API.
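As a minimal sketch of what that expectation looks like (the endpoint, plan name, and request fields below are purely hypothetical, not any specific provider’s API), provisioning an Arm server or FPGA-equipped machine should feel no different from requesting a VM:

```python
import requests

# Hypothetical bare metal provisioning API — illustrative only
resp = requests.post(
    "https://api.example-cloud.com/v1/devices",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json={
        "plan": "arm-large",        # an Arm-based bare metal configuration
        "metro": "am",              # deploy close to users, e.g. an edge metro
        "operating_system": "ubuntu_20_04",
    },
)
print(resp.json()["id"])  # from here on, the hardware is just another API resource
```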
This means that hardware manufacturers and everyone in the supply chain — chip designers, system designers, datacenter architects, and more — face a chicken-and-egg problem. They need cloud providers to adopt their hardware to get it into the hands of eager open source developers, but they need adoption to convince the cloud providers to stock their innovative new technology.
Whether driven by the need to lower power consumption, boost performance, or scale efficiently, this exciting new wave of innovation in specialized hardware will be led by those who make activating and building their developer communities a key competency.
If you're interested in the intersection of open source and fundamental infrastructure, you might want to check out Proximity, our first technical user conference on 3/3.
Ready to kick the tires?
Use code DEPLOYNOW for $300 credit