We get up and go to work each morning to help developers deploy and run applications where high performance, reliability, speed and low latency aren’t optional. Our goal is to remove as much friction from that process as possible, to offer a global infrastructure platform that’s consumable as a service. The latest step in that direction is the new Equinix-native load balancer, a common tool in the cloud-native developer’s arsenal that is uniquely powerful when part of the Equinix platform. Using it is a way for your applications to leverage our superpower: Equinix’s global high-speed network backbone.
The load balancer, now in public beta, is one of the many things we’ve been building to make the power of Equinix accessible to developers in a cloudy manner. That power comes from a vast platform: 250 state-of-the-art data centers located in all the key global connectivity hubs and linked by one of the fastest and highest-capacity network backbones on the planet. These data centers are where all the major public cloud providers and network operators interconnect with their partners and customers to ensure the most efficient data transit for their services.
We realized recently that by describing Equinix Metal as “bare metal as a service,” we may have unwittingly obscured this platform’s power. Sure, it provides automated bare metal servers on demand, but the fact that they are integrated within Equinix means you can do a lot more with them than just compute. “Dedicated cloud” is a more accurate description, so that’s what we’re increasingly calling it. Our dedicated cloud service provides single-tenant compute, storage and networking resources on demand, embedded in the global network of Equinix data centers. There, they can be easily linked to all the largest public clouds, networks and enterprises to design hybrid and multicloud architectures for applications where performance, control over the entire infrastructure stack, data location, connectivity and cost are paramount.
The work to launch more locations where all these capabilities are available as a service never stops. We started 2023 with dedicated cloud services live in 25 metros, ended it with 28, and are just putting the finishing touches on three more metros. That’s in addition to expanding capacity in existing markets. The new availability regions that came online last year were Manchester, Miami and Mumbai, with Dublin, Mexico City and Milan coming up shortly. Also on the roadmap is Johannesburg!
Why a Load Balancer?
When your app is ready to be deployed at scale for users around the globe, load balancers are a necessary part of the infrastructure stack, managing user traffic to your servers in every region you serve. Configuring and managing these load balancers is one of the things you do on an ongoing basis to keep your application running. Our developers have had to do it, and while the task is common, you’d be hard-pressed to find many folks who enjoy it. (We asked around on our team and didn’t find any… 🙂) So, we decided to build our own load balancer that would work on Equinix without wasting valuable engineering cycles on configuration management. Eventually, we realized that our customers probably didn’t enjoy configuring load balancers either, and decided to make it part of the product.
The load balancer is fully automated, built into the Equinix interface (both the console and the API), and takes only a few steps to provision. Put it in front of a pool of servers in a metro and see it start distributing traffic among them. Once you do, any user traffic on the internet destined for your app will first travel to the nearest site advertising an Equinix IP, get onto our private network and transit to your load balancer.
The service today supports Layer 3 and Layer 4 TCP traffic. The public beta is available in four metros, covering the western, central and eastern US (Silicon Valley, Dallas, New York and DC), but we will be quickly expanding it globally. The service is free while in beta, so go ahead and take it for a spin! (Find all the technical details in the docs.)
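To make the “few steps” concrete, here is a rough sketch of the provisioning flow in Python: create a Layer 4 TCP load balancer in a metro and point it at a pool of origin servers. Note that the endpoint path, field names and IP addresses below are illustrative placeholders, not the documented beta API; see the docs for the real interface.

```python
"""Hypothetical sketch of provisioning a TCP load balancer over HTTP.

Endpoint paths and field names are placeholders for illustration only;
the request is prepared but never sent.
"""
import json
from urllib import request

# Placeholder base URL, not a real Equinix endpoint.
API_BASE = "https://lb.api.example.test"


def lb_payload(name: str, metro: str, origins: list[str], port: int) -> dict:
    """Build a request body for a Layer 4 TCP load balancer in one metro."""
    return {
        "name": name,
        "metro": metro,  # metro code, e.g. "da" for Dallas
        "listener": {"protocol": "tcp", "port": port},
        # One origin entry per backend server in the pool.
        "origin_pool": [{"ip": ip, "port": port} for ip in origins],
    }


def create_lb(token: str, payload: dict) -> request.Request:
    """Prepare (not send) the authenticated POST request."""
    return request.Request(
        f"{API_BASE}/load-balancers",
        data=json.dumps(payload).encode(),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )


payload = lb_payload("web-lb", "da", ["192.0.2.10", "192.0.2.11"], 443)
req = create_lb("my-api-token", payload)
print(req.method, req.full_url)
```

Once a request like this succeeds, traffic to the load balancer’s IP is spread across the origin pool; adding capacity is a matter of appending servers to that pool.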
Dedicated Cloud Use Cases
The load balancer is a new tool in our deep networking toolbox assembled to enable users to create the exact network architectures they need. (Here’s an overview of all the architectural possibilities, including links to detailed technical guides.)
The Equinix dedicated cloud platform is used to power CI/CD pipelines for heavy-duty applications that require deep access to bare metal hardware and massive scale, to build global private networks that require maximum control and configurability for enterprise security applications, to create predictable-cost backup and disaster recovery infrastructure for critical applications, and for many other use cases where virtualized, multi-tenant cloud platforms aren’t an ideal fit.
The platform makes it possible to design a network where you are in full control of how and where data packets travel. You decide where and how your network connects to cloud platforms and where and how your and your customers’ traffic egresses to the public internet. Select the locations closest to your customers and allow egress from your private network (as a service) in those locations only, ensuring low latency while maintaining the security and performance of private networking behind the scenes. This is how enterprise security company Menlo Security, for example, uses our dedicated cloud, doing all the backend processing in AWS and egressing to the internet only via edge locations hosted on Equinix Metal.
We’ve been putting together new pages that highlight the common use cases and explain how they benefit from running on Equinix:
- CI/CD Infrastructure
- Backup and DR
- Flexible Capacity On Demand
- Software POC Infrastructure
- Hybrid Multicloud VDI
- Private Network as a Service
Here are some more real-world stories of customers deploying on Equinix dedicated cloud to support some of these use cases:
- Mirantis: CI/CD
- Tremor International: Flexible Capacity
- AWS: Proof of Concept
- Travelping: Network as a Service
Private Storage, Public Clouds
We’ve also been adding storage options to go together with our compute and networking capabilities. Recently, much focus has been on fast, high-capacity storage appliances, which we offer in addition to the NVMe drives in Metal servers. Storage appliances have been available to Metal customers for some time, but last year we enabled ordering of the powerful enterprise-grade Pure Storage and NetApp flash arrays in the same console you use to provision and manage other dedicated-cloud resources. The appliances are fully managed and billed as you go.
Here, again, the unique power of the offering lies in where the appliances are deployed and what they’re connected to—even more than the market-leading storage technology itself. Being able to provision the arrays right where they can be directly (and privately) connected to enterprise networks and public cloud platforms opens up interesting use cases: processing private enterprise data using public-cloud tools, storing and serving digital media content, creating backup and DR infrastructure with ultrafast snapshots and recovery, training AI models on private data without exposing it to the public internet and more.
Storing data for any of these use cases privately, yet adjacent and directly connected to hyperscale cloud platforms, gives you full control over networking and data egress costs, something you don’t get when storing data in those clouds. You get the advantages of elastic storage capacity and usage-based billing without being dependent on any single cloud platform, while keeping your data private to ensure security and regulatory compliance.
As we build out new capabilities, it’s hard to overstate the value of user feedback, and last year we formalized the process of collecting it by creating a beta tester program. It offers early access to new features and improvements to existing ones as they are being worked on. Users who have a degree of proficiency in deploying bare metal hardware and who don’t mind taking some time to share feedback with us are welcome to sign up and get a discount on the Metal hardware they use.
Your Own Global Network Edge, With an API
The above are just a handful of highlights from a long list of capability and user-experience enhancements we’ve been making to the Equinix dedicated cloud service. A ton of work is ongoing behind the scenes to expand integrations with the Metal API, improve documentation, produce technical guides for getting specific things done on the platform, and publish technical blogs that share our infrastructure and DevOps knowledge, regardless of what platforms and tools you’re using.
Examples of recent integration work to enhance the experience of using infrastructure-as-code tools to interact with our API include updates to our Ansible collection; to Pulumi, Crossplane and Terraform providers; to our Python, Java and Go SDKs; and new CI/CD GitHub Actions for those wanting to use self-hosted runners on Equinix.
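For a feel of the API that these SDKs and providers wrap, here is a minimal sketch of a device-create call against the Metal API. The token, project ID, plan and OS slugs are placeholder values (availability varies by metro; check the docs), and the request is prepared rather than sent.

```python
"""Minimal sketch of creating a bare metal device via the Equinix Metal API.

Token, project ID, plan and OS slugs below are placeholders; the request
is built and inspected, not actually sent.
"""
import json
from urllib import request

API_BASE = "https://api.equinix.com/metal/v1"


def device_payload(hostname: str, plan: str, metro: str, os_slug: str) -> dict:
    """Build the body for a device-create request."""
    return {
        "hostname": hostname,
        "plan": plan,                # server plan slug, e.g. "c3.small.x86"
        "metro": metro,              # metro code, e.g. "da" for Dallas
        "operating_system": os_slug, # OS slug, e.g. "ubuntu_22_04"
        "billing_cycle": "hourly",
    }


def create_device(token: str, project_id: str, payload: dict) -> request.Request:
    """Prepare the authenticated POST request (not sent here)."""
    return request.Request(
        f"{API_BASE}/projects/{project_id}/devices",
        data=json.dumps(payload).encode(),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )


body = device_payload("ci-runner-01", "c3.small.x86", "da", "ubuntu_22_04")
req = create_device("my-api-token", "my-project-id", body)
print(req.full_url)
```

The Terraform, Pulumi, Crossplane and Ansible integrations mentioned above ultimately drive calls of this shape, so anything you can click through in the console can also live in version-controlled infrastructure code.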
Our startup program, which supports founders with business and technical guidance, co-marketing and, of course, infrastructure credits, received nearly 3,000 applications last year—that’s eight times more applications than it received in the preceding year! More than 2,000 people attended our in-person Propel Your Startup events in 2023.
This level of demand from the startup community is one of the many indicators giving us confidence that there is a substantial and growing need for single-tenant, unopinionated infrastructure that can be bought and provisioned in a cloudy, as-a-service manner at the edge, where it can be interconnected with cloud platforms and network providers and play an integral role in hybrid and multicloud architectures that scale globally and change and adapt on demand.
Ready to kick the tires?
Sign up and get going today, or request a demo to get a tour from an expert.