Like autonomous vehicles, servers built with Arm-based processors have spawned a lot more discussion over the last 10 years or so than real-world applications. But there are clear signs of a change in the wind. Not only can Arm server processors now hold their own against x86 alternatives in terms of performance, they do particularly well with cloud-native workloads. And if there is one thing that is growing fast, it is applications written to scale in the cloud!
If Arm’s middling performance in the data center market over the last decade led you to write it off as a serious solution, it’s time to rethink that attitude.
Promises Didn’t Materialize Early
When Arm servers first arrived around a decade ago, experts predicted that they would cost less, use less energy, and produce less heat than conventional x86 machines. Those predictions fueled much of the early excitement. But the technology didn't live up to the hype, and x86 continued to dominate the computing-infrastructure space.
Why Arm Servers are Worth a New Look
A lot has changed since then, and Arm servers are now a much more serious alternative to x86 in the data center. Here’s why:
Performance and Cost
First, raw performance. Sure, the fastest x86 silicon still outperforms the fastest Arm silicon, but benchmark testing of some recent Arm-based servers has shown that these machines can equal, and in some cases even outperform, bread-and-butter x86 data center servers.
Combine that with the fact that Arm server processors cost a lot less than x86 alternatives, and their path into the market becomes clearer.
Cloud-Native Apps Like Many Cores
Even more important than performance and cost, arguably, is the ability of Arm servers to pack so many cores—today, up to 128—into a single machine, which is a powerful advantage for distributed workloads.
With dozens of cores, it’s easy to dedicate a core to each microservice or container instance in a cloud-native app. With one core per microservice, you are much less likely to run into “noisy neighbor” issues that typically arise because multiple services or processes compete for finite resources on the same core.
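In practice, dedicating a core to a service instance is usually handled by the orchestrator (for example, a container scheduler's CPU pinning policy), but the underlying mechanism is just CPU affinity. Here is a minimal, Linux-only sketch of pinning the current process to one core; the `pin_to_core` helper is illustrative, not part of any particular platform's API:

```python
import os

def pin_to_core(core: int) -> None:
    """Pin the calling process to a single CPU core (Linux only).

    On a many-core server, giving each service instance its own core
    this way keeps it from contending with neighboring processes.
    """
    os.sched_setaffinity(0, {core})  # pid 0 = the current process

# Pin this process to core 0, then verify the affinity mask.
pin_to_core(0)
print(os.sched_getaffinity(0))  # -> {0}
```

The same effect at the container level is what lets each microservice instance run undisturbed on its own core.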
There are high-end x86 servers that support large numbers of cores, too, of course. But the key descriptor there is high end. You’ll generally pay less for a high core count on an Arm server than on an x86 machine.
Software Optimization Has Caught Up

The third factor, and the one that has probably throttled Arm's success in the data center more than anything else, is software optimization, or the historical lack of it. By now, however, the gap has largely closed, especially in the cloud-native space.
Large software vendors and most open source projects had little interest in investing in Arm support ten years ago. Some developers did make sure their apps compiled and ran on Arm, but that’s where their efforts stopped. They didn’t bother to optimize algorithms to maximize performance on Arm chips.
This has changed as big names have entered the Arm server space in recent years. AWS is now on the third generation of its in-house-designed, Arm-based Graviton server processors, and Oracle Cloud offers Arm servers as well. With such big players throwing their weight behind Arm servers, developers face greater pressure to ensure their code not only runs on Arm but runs well on Arm. Many more developers have also been building software for Arm-powered laptops since Apple introduced computers based on its Arm M1 processors. And with Arm servers available from cloud providers, the code developers write on their M1 laptops is easier than ever to deploy in the cloud.
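Code that must choose an architecture-specific path (say, selecting a prebuilt binary or an optimized library) can detect the machine type at runtime. A minimal sketch using Python's standard library; the `is_arm64` helper is illustrative:

```python
import platform

def is_arm64() -> bool:
    """Report whether the interpreter is running on 64-bit Arm.

    platform.machine() typically returns 'aarch64' on Linux Arm
    servers (such as Graviton instances), 'arm64' on Apple silicon
    Macs, and 'x86_64' or 'AMD64' on x86 machines.
    """
    return platform.machine().lower() in ("aarch64", "arm64")

print(platform.machine(), "arm64:", is_arm64())
```

The same check is what build tooling and package managers do behind the scenes when they pick an architecture-appropriate artifact.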
Again, modern cloud-native apps are particularly well positioned to take full advantage of Arm-specific software optimizations. While legacy apps may be written in languages with libraries that haven’t been optimized for Arm, developers with the luxury of designing and building new apps from the ground up can do so in ways that leverage Arm to the fullest extent possible.
So, Who Should Use Arm Servers?
None of the above means that the demise of x86 is imminent, or that every data center workload should be ported to Arm servers. That’s not how the world works.
But some common data center workloads are good candidates for Arm servers today. Examples include:
- CI/CD build operations, which can exploit the high core counts of Arm servers to run compile jobs in parallel and cut build times
- Applications that have high memory requirements, as memory tends to be more abundant in Arm servers
- Applications that need to scale rapidly (The ability to devote cores to individual process or service instances comes in handy when you need to add instances quickly without worrying about overburdening cores that are already managing other instances.)
Legacy workloads are less obvious candidates for Arm, especially for organizations without the software engineering resources necessary to optimize those workloads for the architecture. And although it's possible to use emulators such as QEMU to run x86 workloads on Arm hosts, the performance results are not likely to be very satisfying. In most cases, legacy workloads built for x86 are best left on x86.
While self-driving cars are still largely a fantasy, there are good reasons to believe that Arm servers no longer are. A new generation of Arm hardware, combined with uptake by large cloud providers and Arm-centric optimizations in programming languages and operating systems, makes Arm-based machines real contenders against x86 in the data center—especially for distributed, scalable, cloud-native apps that can take full advantage of a high core count.
We’re not saying move every data center workload to Arm immediately. But we are saying that Arm is now worth a serious look—in a way it wasn’t just a few years ago.
Ready to kick the tires?
Sign up and get going today, or request a demo to get a tour from an expert.