
How Dedicated Cloud Differs from Public Cloud

What dedicated cloud is for, what you gain and what you give up when you choose to deploy your workload on single-tenant cloud infrastructure.

Steve Martinelli, Director, Developer Relations

A version of this article has been published on The New Stack. This version contains more details about the Equinix dedicated cloud product and its latency, connectivity and cloud-onramp differentiators.

Public cloud is the natural, most logical type of platform for much of the software written today. The immediacy, elasticity and scale of the largest public clouds make them more convenient than any other style of infrastructure for building and deploying applications. But there is a swath of workloads out there for which general-purpose clouds are either suboptimal or not an option. If you have one of those workloads, your infrastructure choices are limited to self-hosting, some version of traditional dedicated hosting services, or dedicated cloud, which offers single-tenant infrastructure with many of the public clouds’ conveniences.

While dedicated cloud services give you a level of architectural control you will not get from public clouds, using them comes with tradeoffs, the biggest one being the level of infrastructure engineering expertise required. But if your team has concluded that a public cloud isn’t a good fit, you probably know that already and have at least some of that expertise on hand.

So, what is a dedicated cloud exactly? Let’s go down the list of the key differences between dedicated clouds and public clouds and look closely at each to explain what dedicated cloud is, what you can gain from it and what you give up when you choose this type of platform. 

Basic Hardware Performance Differences 

Because of their scale and financial might, the largest public cloud providers are usually first to deploy the latest processors from the top vendors as they come out. But raw silicon performance is different from the performance that’s available to users, who get slices of each physical host’s resources in the form of virtual machines. That gap between raw and delivered performance is where dedicated and public clouds diverge most on compute.

The utilization rate of each physical public cloud host continuously ebbs and flows. One customer’s VMs can hog the host’s resources and slow down their neighbors on the same host. This “noisy neighbor” issue doesn’t exist in a dedicated cloud environment. Furthermore, public cloud users share hardware with each other and with the cloud provider’s own software stack (the hypervisor and all the other software the provider needs to deliver its services), so even if no one else is using a host as a customer, you’re always sharing it with at least one other entity.

At the heart of a dedicated cloud solution is dedicated hardware. Each user has full control of and exclusive access to each physical host, down to its memory, NICs and storage media; they are free to use all its resources, configure the hardware as needed and install the best software stack for their application. 

Low-Level Hardware Access 

Having someone else’s software stack sitting on their hardware is a nonstarter for many app makers. These are companies that build operating systems, for example, or virtualization technology, or private cloud solutions for enterprises. They need to test on bare metal hardware, often from multiple vendors: Intel, AMD and some of the Arm server chipmakers.

Dedicated cloud is an ideal fit in such scenarios, offering all the low-level access these developers need to test on a variety of platforms. They can spin up what they need, run their tests and then spin it all down, paying only for the time they use the resources. They can dig into logs to troubleshoot failures and ensure their products are compatible with the platforms they will run on in production. 

Control Versus Simplicity 

Often, whether a public cloud is or isn't a good fit for a particular workload depends on the degree of infrastructure control its owner needs. A public cloud provider controls every layer of the stack below whatever abstraction they present to the customer, be it a VM, a Kubernetes cluster, a serverless function or a storage bucket. Cloud users are happy to hand that control over to the provider–along with the responsibility for managing all the infrastructure underneath. 

Many developers, however, must have more say in how the infrastructure is configured. Their reasons vary. Some need specific hardware configurations for their applications to run optimally. Others must guarantee that sensitive data is stored in specific locations and never transferred out. Yet others require that their packets travel only on specific network routes. 

Cost Factors: Scale and Tolerance for Uncertainty 

On a small scale, public cloud costs aren’t an issue—especially for cloud-native applications. Infrastructure costs for large cloud deployments are a different story. They get unwieldy without a serious investment in optimization. An entire cottage industry of consultants has sprung up, helping companies, for a fee, comb through and rein in their complex and unpredictable cloud bills. 

Cloud costs can run up for many reasons: a mismatch between a workload’s requirements and the configuration of its compute instances; frequent data transfers between availability zones or between different cloud services, leading to unanticipated data egress fees; more API calls than expected to a cloud service billed on a per-call basis; and so on. Generally, the larger the scale of your deployment, the more effort it takes to understand and control your spending.
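To make those cost drivers concrete, here is a back-of-the-envelope model that splits an estimated monthly bill into its major line items. Every rate in it is a hypothetical placeholder, not any provider’s published pricing.

```python
# Back-of-the-envelope model of the cost drivers listed above. All rates
# are hypothetical placeholders, not any provider's actual pricing.

def estimate_monthly_bill(instance_hours, hourly_rate,
                          inter_az_gb, egress_gb, api_calls,
                          inter_az_per_gb=0.01, egress_per_gb=0.09,
                          per_million_calls=0.40):
    """Split an estimated monthly bill into its major line items."""
    compute = instance_hours * hourly_rate
    transfer = inter_az_gb * inter_az_per_gb + egress_gb * egress_per_gb
    api = api_calls / 1_000_000 * per_million_calls
    return {"compute": round(compute, 2),
            "transfer": round(transfer, 2),
            "api": round(api, 2),
            "total": round(compute + transfer + api, 2)}
```

Run with, say, ten always-on instances (roughly 7,200 hours), 50 TB of inter-AZ traffic, 20 TB of egress and half a billion API calls, the data transfer line item alone exceeds the compute line, which is exactly the kind of surprise that makes large bills hard to predict.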

Running applications optimally on a dedicated cloud, too, requires effort and expertise. It costs more to use a bare metal server than a cloud VM, but you get a lot more horsepower. If your infrastructure is tuned correctly, your workload uses the bulk of each host’s capacity and your overall compute cost is lower. Put simply, you need fewer server instances to process the same workload. 
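The consolidation math behind that claim can be sketched in a few lines. The figures below are illustrative assumptions, not quoted prices: a bare metal host costs more per hour than a VM but delivers far more usable throughput, so fewer nodes cover the same workload.

```python
# Hypothetical consolidation math: a bare metal host costs more per hour
# than a VM but delivers more usable throughput, so fewer nodes are
# needed. All figures are illustrative assumptions, not quoted prices.
import math

def fleet_cost(workload_units, units_per_node, hourly_rate, hours=730):
    """Nodes needed for a workload and their monthly cost (~730 h/month)."""
    nodes = math.ceil(workload_units / units_per_node)
    return nodes, nodes * hourly_rate * hours

# A workload needing 400 "units" of throughput:
vm_nodes, vm_cost = fleet_cost(400, units_per_node=10, hourly_rate=0.20)
bm_nodes, bm_cost = fleet_cost(400, units_per_node=50, hourly_rate=0.75)
```

With these made-up numbers, eight bare metal hosts replace forty VMs and come out cheaper overall despite an hourly rate nearly four times higher.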

Dedicated cloud costs are also more predictable, since the user controls their infrastructure configuration and knows upfront what the services are going to cost, including bandwidth and egress. Having the right expertise matters a lot here, since you have the power to design your network using the infrastructure primitives the provider offers.

The primitives and their quality differ from one provider to another. Equinix dedicated cloud, for example, offers a deep set of building blocks that can be used to fine-tune your network architecture exactly to your application’s requirements—and to your budget. You decide where and how data gets transferred on your private network, at the edge and to the public internet. You can reduce bandwidth costs by combining multiple virtual network circuits on a physical link (to a public cloud, a network operator, an enterprise network, etc.) and save on egress by storing the bulk of your data on a private array with access – via low-latency, high-bandwidth private connections – for processing in your dedicated cloud and in any of the major public clouds.
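The egress tradeoff described above is easy to quantify in sketch form: repeatedly reading a dataset out of a public cloud scales with every read, while a private array behind a flat-rate interconnect circuit does not. All rates below are hypothetical.

```python
# Sketch of the egress tradeoff: repeatedly reading a dataset out of a
# public cloud vs. keeping it on a private storage array reached over a
# fixed-price interconnect circuit. All rates are hypothetical.

def public_egress_cost(reads_per_month, dataset_gb, egress_per_gb=0.09):
    """Monthly cost of pulling the full dataset out of a public cloud."""
    return reads_per_month * dataset_gb * egress_per_gb

def private_array_cost(dataset_gb, storage_per_gb=0.03,
                       circuit_monthly=500.0):
    """Monthly cost of private storage plus a flat-rate circuit."""
    return dataset_gb * storage_per_gb + circuit_monthly
```

At, say, ten full reads per month of a 5 TB dataset, the per-gigabyte egress model costs several times the flat private-array figure, and the gap widens with every additional read.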

A Rich Feature Set Versus Premium-Quality Primitives 

Both dedicated and public cloud services give you a lot of agility. You can quickly launch an application in new markets to test demand with relatively little risk. You pay as you go; if the demand isn’t there, you can spin it all down; if usage grows, you can quickly scale up. A more predictable deployment can benefit from discounts on long-term capacity reservations. 

The largest public cloud providers’ vast service portfolios offer more versatility, and they add to their catalogues all the time, giving users a constant stream of new features to try. But all the different services are in different stages of maturity, so your mileage will vary depending on which ones you use.

Dedicated cloud providers generally don’t offer services “up the stack,” serving users who tend to either build their own or adapt open source tools that fit their needs. These providers choose to focus on ensuring they offer the best infrastructure primitives they can while relying on partners to extend the ways in which their platforms can be used. To illustrate, Equinix dedicated cloud provides an API for its bare metal compute, networking and storage services, while nurturing a long list of companies that use the API to provide everything from backup solutions to managed Kubernetes. 

How Cloud-Neutral Can You Get? 

How freely an organization can switch public cloud providers or add another one to the mix depends on a lot of factors. One that can make a cloud especially “sticky” is extensive use of services sitting up the stack, above the infrastructure. If your business depends on one provider’s application that doesn’t have an obvious equivalent in another cloud, the idea of switching is likely a nonstarter. Sure, if there’s a will there’s a way, but it will not be worth the pain. 

If a highly differentiated cloud software service isn’t a critical dependency, however, you have options. Aiming for cloud-neutral architecture is a worthy pursuit, giving your organization more flexibility. Some dedicated cloud providers can help a lot in this pursuit by supporting connectivity to multiple public clouds. 

For example, the cost of transferring a large amount of data out of a public cloud can alone be a blocker for switching platforms. A good way to avoid this problem is to store your data privately, in a dedicated cloud, and enable public cloud applications to access it when necessary. 

Equinix enables this by hosting the endpoints for an unequaled number of networks in its data centers–it’s a global interconnection ecosystem. In fact, most cloud providers (dedicated and otherwise) themselves use Equinix’s interconnection services to enable connectivity to different clouds for their users.

Intercloud connectivity is possible in public clouds as well, usually via site-to-site VPNs or direct-connection services. Because of limited throughput and relatively high latency, a VPN, while quick to deploy, isn't always ideal for cloud-to-cloud data transfer. Direct-connection services like Google Cloud Interconnect, AWS Direct Connect and Azure ExpressRoute are faster and more reliable, and Equinix provides access to more of these “cloud onramps” globally than any other operator, with software-defined connectivity on demand. 

Latency: What's “Good Enough” for Your App? 

Different applications have different latency requirements, and most work just fine with the latencies they get in a public cloud. Developers with more latency-sensitive workloads, however (think market data feeds, digital ad auctions, media streaming, real-time multiplayer gaming, augmented reality and so on), must think carefully about the physical location of their data and compute—especially if deployed globally.

The big public cloud providers build most of their hyperscale data center clusters away from densely populated metros, in areas where the massive amounts of power and real estate these campuses require are plentiful. They extend their networks by deploying many small points of presence within the metros, using them for things like content caching and DNS resolution to speed up their services. While the services their customers use benefit from this infrastructure under the hood, it isn’t directly accessible to customers the way the availability zones hosted on the hyperscale campuses are. You can call a serverless function in some of these locations, for example, but you can’t spin up a cloud VM or launch a storage volume or a database instance.

In recent years, traditional cloud providers have made some efforts to address select latency-sensitive use cases, introducing services that run on infrastructure inside colocation facilities in densely populated areas–including many Equinix sites–and solutions customers can deploy in their own data centers. Those offerings tend to be either limited to specific applications and industry verticals or dependent on customers providing data center space, doing a lot of configuration and integration work and sometimes even deploying the cloud stacks on their own hardware.

Depending on the provider, dedicated cloud can be a good solution for reducing latency by deploying your workloads physically close to users. A provider that fits the bill has data centers where a critical mass of your users is found and makes it easy to access the networks they are connected to. Equinix provides dedicated cloud infrastructure in 31 major global metros, along with high-bandwidth, low-latency connectivity to all the key local network operators and internet exchanges.  

If you’re transporting traffic globally, Equinix’s high-capacity backbone connects its data centers directly (no third-party transit) and provides materially lower latency on many international routes than any of the largest cloud providers’ networks do. For example, average latency on the Equinix network is lower by 25 to 50 milliseconds on routes connecting São Paulo, one of South America’s largest markets, to major European metros, such as Berlin, Frankfurt, London, Milan, Paris and Warsaw. On routes connecting Hong Kong, one of Asia’s dominant business hubs, to top-tier North American markets, such as Chicago and Northern Virginia, average latency is lower by more than 25 milliseconds, and on routes between Hong Kong and elsewhere in Asia, like Seoul or Osaka, by 36 milliseconds.

Ultimately, dedicated cloud is about keeping control and giving yourself options. You can quickly deploy different combinations of resources, interconnecting dedicated infrastructure with public cloud services, and keep fine tuning and refining as you go. You get full control of your data and your architecture—with the freedom to change your mind. 

The tradeoff is that you must be ready to roll up your sleeves and manage operating systems, deploy storage servers, tinker with traffic routing and do whatever else you need to do to get your architecture just right. But again, if you already know that you need more knobs than you can turn using a typical public cloud provider, you are probably ready anyway. While this extra effort is overkill for most applications, many software-centric businesses looking to increase the competitive advantage of their core revenue-generating applications choose to place them in a dedicated cloud.

Published on 08 October 2024