
The Tools for Controlling Cloud Cost and Their Limits

While there are best practices for keeping cloud costs under control, they only get you so far. But there is a way to go further.

Hrittik Roy, Software Engineer
Yevgeniy Sverdlik

In early 2015, Dropbox for the first time in its history started running fully on its own infrastructure. One year prior, its leadership had decided to move off AWS and onto an in-house platform because its cloud hosting costs had gotten too high. The story sent waves across the IT world, which by that point had taken it for granted that public cloud was the sensible infrastructure strategy for any enterprise and would soon be home to all workloads. In its IPO filing three years later, Dropbox disclosed that the move had saved it $74.6 million over two years.

Another three years later, Andreessen Horowitz analysts used Dropbox’s story to illustrate a trend they had been observing. While using a public cloud was a no-brainer for a company starting out, they wrote in their now-famous blog post, cloud infrastructure costs put an increasingly heavy drag on profit margins and market capitalization as a business reached a certain scale.

The third big public story that fueled “cloud repatriation” in the zeitgeist was 37Signals, whose co-owner and Ruby on Rails creator David Heinemeier Hansson wrote extensively about moving the company’s products, Basecamp and HEY, mostly off the cloud. The products had matured beyond the point where the flexibility of running workloads in a public cloud was worth the high cost.

There were other repatriation stories in between, but these have been the big three driving the narrative in recent years. One interesting detail from the 37Signals story was that, as DHH claimed, the company’s use of public cloud had been “incredibly optimized, with long service commitments, scrupulous right-sizing and monitoring.” Still, the $3.2 million annual cloud budget was unsustainable for the business.

Optimizing infrastructure costs while meeting workload demands is a challenge. In this article, you'll learn about the common strategies for lowering cloud hosting costs. You'll also learn about dedicated cloud, which can offer higher performance at a lower cost while retaining the flexibility of cloud.

Making Sense of Cloud Hosting Costs

One reason behind elevated cloud costs is the complexity of cloud providers’ feature sets and pricing models. To effectively control costs, you need to know exactly what you are being charged for and how the costs can accumulate, which is a challenge all its own.

Factors Contributing to Your Large Cloud Bill

  • Overprovisioning: One of the key purposes of the cloud is to deliver sufficient resources on demand, catering to both customer needs and internal team requirements. This often involves overprovisioning, which can lead to substantial cost overruns.
  • Mismatch of instance types: Choosing the wrong instance types or storage classes, or misconfiguring specialized services such as GPUs, can cause performance bottlenecks or higher bills.
  • Data egress fees: Moving data between availability zones and different systems can be unpredictably expensive, especially for high-traffic applications.
  • Licensing fees: Software licensing fees can contribute to high costs if you can’t bring your own licenses (BYOL).
  • Pay-as-you-go model: This model, where charges are based on usage, is what makes the cloud adaptable and appealing, as it offers a low entry point for building a service that caters to users across different locations. However, it demands vigilant cost oversight.
  • Labor costs: You need to take into account the people required to secure and oversee all your infrastructure. For instance, as you redesign your architecture to achieve greater cost savings or establish dependable services through multicloud or hybrid-cloud setups, a dedicated team becomes essential.
  • Storage: Without a good archiving and retention strategy, the cost of maintaining your structured and especially unstructured data can escalate significantly. Moreover, transferring data to another provider to take advantage of discounts for committed cloud usage can introduce additional costs, but the three largest cloud providers (AWS, Azure and Google Cloud) recently did away with these outbound data transfer charges if you’re leaving their respective platforms completely.
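
To see how these factors compound, here’s a rough cost model in Python. Every rate in it is a hypothetical placeholder for illustration, not any provider’s actual pricing:

```python
# A rough monthly cost model illustrating how the factors above add up.
# All rates are hypothetical placeholders, not any provider's actual pricing.

def monthly_cost(instance_hours, hourly_rate, egress_gb, egress_rate_per_gb,
                 storage_gb, storage_rate_per_gb, license_fees=0.0):
    """Sum the major line items of a simplified cloud bill."""
    compute = instance_hours * hourly_rate
    egress = egress_gb * egress_rate_per_gb
    storage = storage_gb * storage_rate_per_gb
    return compute + egress + storage + license_fees

# Example: 10 always-on instances for a 730-hour month.
bill = monthly_cost(
    instance_hours=10 * 730, hourly_rate=0.10,    # $0.10/hour (assumed)
    egress_gb=5_000, egress_rate_per_gb=0.09,     # $0.09/GB (assumed)
    storage_gb=2_000, storage_rate_per_gb=0.023,  # $0.023/GB-month (assumed)
    license_fees=500.0,
)
print(f"${bill:,.2f}")
```

Note that in this example nearly a third of the bill comes from egress and licensing, line items that are easy to overlook when comparing advertised compute prices.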

Two Common Misconceptions About Cloud Hosting Costs

Various incorrect assumptions about cloud hosting can also lead to increased costs. One of them is that cloud hosting is always cheaper than traditional on-premises infrastructure. This is not always true. Although the advertised price tag of cloud services may seem smaller, you also need to consider all the other factors we listed above when calculating the total cost of ownership. In some cases, on-premises solutions may actually be more cost effective, especially for workloads with consistent and predictable usage patterns.

Another common misconception is the belief that cloud providers automatically optimize costs for you. While cloud providers do offer tools and recommendations, the ultimate responsibility for cost optimization rests with the user, and becomes even more critical as you scale. Without proactive monitoring, budgets and resource tagging, costs can easily spiral out of control. 

Strategies for Minimizing Cloud Hosting Costs

Once you have a better understanding of what’s causing elevated costs, you can focus on strategies to help you reduce them. You can optimize your instances to make better use of available resources and adjust those resources based on demand and the application in question.

Rightsizing and Resource Allocation

Aligning resource allocation and planning for optimal capacity is one of the best things you can do to save money. This process starts with using monitoring and profiling tools (e.g., profiling CPU and memory usage) to help accurately identify resource needs. For example, Tryg saved a substantial amount without experiencing any performance issues thanks to dynamic rightsizing.

By aligning your resources and planning effectively, you gain more detailed awareness and control and can engage in capacity planning at the team level. This involves assigning individual teams their own designated cost centers.
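
As a sketch of how monitoring data can drive rightsizing decisions, the following Python function flags an instance for downsizing when sustained peak utilization leaves ample headroom. The percentile and thresholds are illustrative assumptions, not a standard:

```python
# A minimal rightsizing sketch: given utilization samples from monitoring,
# recommend a smaller instance when sustained peak usage leaves ample
# headroom. Thresholds are illustrative assumptions.

def recommend_rightsizing(cpu_samples, mem_samples, headroom=0.2):
    """Suggest a downsize when 95th-percentile CPU and memory utilization
    would both fit in half the instance with headroom to spare."""
    def p95(samples):
        ordered = sorted(samples)
        return ordered[int(0.95 * (len(ordered) - 1))]

    cpu_p95, mem_p95 = p95(cpu_samples), p95(mem_samples)
    if cpu_p95 < (0.5 - headroom) and mem_p95 < (0.5 - headroom):
        return "downsize"  # half the capacity still leaves 20% headroom
    if cpu_p95 > 0.9 or mem_p95 > 0.9:
        return "upsize"    # sustained peaks near saturation
    return "keep"

# Utilization fractions (0.0-1.0) sampled over time, e.g. every 5 minutes.
print(recommend_rightsizing([0.12, 0.18, 0.25, 0.22], [0.20, 0.24, 0.28, 0.26]))
```

Using a high percentile rather than the average is the important design choice here: sizing to the average would look even cheaper but risks starving the workload during peaks.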

Autoscaling and Managing Peak Demand

Workload demand isn't always constant, and managing demand fluctuation is a reason companies turn to the cloud. That's why autoscaling is a vital technique for managing costs, particularly in environments with variable workloads. Autoscaling ensures optimal resource provisioning during peak demand while avoiding overprovisioning during off-peak periods.

The pay-as-you-go model lets your autoscaling policies add resources based on metrics like CPU utilization or incoming traffic, ensuring that you pay for resources only when needed. However, implementing autoscaling can be challenging, as there are a lot of configurations to get right, including setting up autoscaling groups, defining triggers and fine-tuning scaling policies.
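
The core of a target-tracking policy can be sketched in a few lines: scale the group so the average metric returns to its target. This mirrors the calculation commonly used by cloud autoscalers, though real policies add cooldowns, warm-up handling and multiple metrics:

```python
import math

# A sketch of the target-tracking calculation many autoscalers use:
# resize the group so the average metric returns to its target value.

def desired_capacity(current_capacity, current_metric, target_metric,
                     min_size=1, max_size=20):
    """Return the instance count that would bring the average metric
    (e.g. CPU utilization) back to the target, clamped to group bounds."""
    desired = math.ceil(current_capacity * (current_metric / target_metric))
    return max(min_size, min(max_size, desired))

# 4 instances averaging 80% CPU against a 50% target -> scale out.
print(desired_capacity(4, 80.0, 50.0))   # 7
# 10 instances averaging 15% CPU against a 50% target -> scale in.
print(desired_capacity(10, 15.0, 50.0))  # 3
```

Rounding up rather than down biases the policy toward availability over cost, which is why scale-in is usually also dampened by cooldown periods in practice.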

Using Reserved and Spot Instances

By appropriately sizing and configuring autoscaling, you gain a better understanding of your usage patterns. That allows you to take advantage of pricing models that can reduce your costs, such as reserved instances (RIs) and spot instances.

RIs enable you to make long-term infrastructure commitments, securing discounts that can vary between 5 and 50 percent, depending on the length of the term. Spot instances are ideal for testing purposes as they use idle capacity and hence come at a massive discount. (AWS, for example, advertises spot EC2 instances that are up to 90 percent cheaper than on-demand ones.)
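
A back-of-the-envelope comparison makes the trade-off concrete. The hourly rate and discount levels below are assumptions for illustration, not quoted prices:

```python
# A sketch comparing on-demand, reserved and spot pricing for a steady
# workload. The hourly rate and discounts are illustrative assumptions.

HOURS_PER_MONTH = 730

def monthly_compute_cost(instances, on_demand_rate, discount=0.0):
    """Monthly cost at an hourly rate reduced by a fractional discount."""
    return instances * HOURS_PER_MONTH * on_demand_rate * (1 - discount)

on_demand = monthly_compute_cost(8, 0.20)                # no commitment
reserved = monthly_compute_cost(8, 0.20, discount=0.40)  # e.g. 40% off for a term
spot = monthly_compute_cost(8, 0.20, discount=0.90)      # deep discount, interruptible

print(f"on-demand ${on_demand:,.0f} | reserved ${reserved:,.0f} | spot ${spot:,.0f}")
```

The catch, of course, is that the reserved figure only materializes if the workload actually runs for the full term, and the spot figure only if it tolerates interruption.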

Refactoring Applications

If an application was lifted and shifted to the cloud from a traditional on-prem environment, rearchitecting it to fit the cloud model can result in both cost savings and a better functioning and more versatile application. This could involve implementing containerization and microservices, adopting serverless technology and optimizing data storage methods.

This strategy requires a deep understanding of both the application and cloud services, so keep in mind the cost of developer resources necessary to implement it.

Using Cost Monitoring and Optimization Tools

Cost monitoring is a big industry, with a lot of players offering to help customers save money. Their tools enable in-depth analysis of cost data and anomaly detection and recommend optimization opportunities.

To ensure ongoing monitoring, you can establish a customized dashboard, configure alert systems and craft scripts for automating cost-saving actions that get triggered based on input from these tools.
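
As an example, a simple anomaly check of the kind these tools run might flag any day whose spend jumps well above the recent average. The three-sigma threshold here is an assumed choice:

```python
import statistics

# A minimal cost anomaly-detection sketch: flag any day whose spend
# exceeds the period's mean by more than three standard deviations.
# The three-sigma threshold is an assumed, tunable choice.

def spend_anomalies(daily_spend, sigma=3.0):
    """Return (day_index, amount) pairs that deviate sharply from the mean."""
    mean = statistics.mean(daily_spend)
    stdev = statistics.stdev(daily_spend)
    return [(i, s) for i, s in enumerate(daily_spend)
            if s > mean + sigma * stdev]

# Two weeks of daily spend with one runaway day (e.g. a forgotten GPU job).
spend = [410, 395, 402, 420, 398, 405, 415, 400, 412, 2600, 408, 399, 407, 411]
print(spend_anomalies(spend))  # [(9, 2600)]
```

A script like this, wired to an alerting channel, is the kind of automation the tools above either provide out of the box or let you build on top of their cost data APIs.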

Examples of popular cost monitoring and optimization tools include AWS Cost Explorer and Microsoft Cost Management.

Cloud Cost Optimization Only Goes So Far

Although the strategies outlined here are useful, they all have their limits as to how much savings you can achieve—as illustrated by 37Signals’ example. As time passes, the returns on these optimizations begin to diminish. Additional optimizations may demand more human resources and benchmarking, which could outweigh the cost-saving benefits. After the initial low-hanging-fruit cost saving measures are taken, delving deeper into further optimizations yields smaller cost savings while demanding more effort and resources.

Additionally, cloud providers can and do change their pricing models and services offered as they see fit. This means that your careful optimization can be affected when a service you rely on is discontinued or pricing gets changed. These are things you don’t have direct control over. 

Some organizations use a multicloud strategy as a way to hedge against unexpected changes by any single provider. This approach has its own limitations. Using multiple clouds means building abstractions that work across the different cloud platforms. Not only is building those abstractions costly, but they also often prevent you from using features offered by only one of the underlying providers.

Is There a Better Way?

There is an alternative style of infrastructure that allows users to keep the on-demand and pay-as-you-go convenience of cloud at a lower cost and with greater control: dedicated cloud.

Unlike traditional cloud services, a dedicated cloud—like Equinix’s—offers bare-metal compute instances. While automated and provisioned on demand, they do not rely on virtualization, leaving more of the underlying hardware resources for your workload. This not only offers better performance but results in a more efficient use of the computing capacity you pay for.

A wide selection of server configurations, networking options and storage enables users to match infrastructure resources to their application needs. You can keep your bandwidth and data transfer costs low and predictable by aggregating bandwidth on dedicated links, paying transit providers the same rates that the public cloud providers pay and storing content in the same data centers where you peer with ISPs, eyeball networks and the largest internet exchanges.

The number of global locations where Equinix’s dedicated cloud is available is on a par with the major public cloud providers’, so you don’t lose scale and reach in exchange for control and savings.

Finally, dedicated cloud doesn’t mean giving up all the higher-level tools and features public clouds offer. Equinix users can connect to any cloud onramp privately and use cloud services as needed together with their dedicated cloud infrastructure in a hybrid cloud manner.

Get a quick understanding of what’s possible with dedicated cloud by exploring our use cases.

Published on 20 March 2024


