Designing a Cloud Network That Stretches to the Edge
What to know when connecting the core to the edge and the edge to end users.
Augmenting networks with edge computing nodes has become an indispensable component of many teams’ infrastructure strategies. For engineers building such networks, the traditional elements of network design—things like bandwidth, latency, caching, load balancing and the push-pull between cost and reliability—all apply but with some edge-specific nuance. This article is an overview of those key considerations.
As an example, we will use a scenario where a significant two-region cloud deployment (say, one core node on the US East Coast and another on the West Coast) is augmented with numerous smaller metro locations at the edge. There’s a variety of possible edge topologies, but this example serves as a common reference point. Sometimes referred to as the “service provider edge,” it’s a kind of edge Equinix is intimately familiar with, because Equinix IBX data centers host this kind of infrastructure in many parts of the world.
Designing Cloud Networks with Edge Nodes
The key areas of consideration for designing cloud networks that extend to the edge are:
- Bandwidth requirements: Different applications’ bandwidth needs vary, and considering them upfront helps prevent issues down the line.
- Latency requirements: Latency is a crucial factor that’s closely tied to geographic distances. The need to reduce latency is often the primary reason companies deploy edge infrastructure.
- Caching: Caching, or storing copies of data closer to end users, is how edge infrastructure is often leveraged to meet bandwidth and latency requirements.
- Load balancing: Load balancing is especially important in edge networking, used to prevent any single edge device or node from becoming a bottleneck. As your environment grows, load balancing keeps the workload evenly distributed while you scale the capacity of individual edge locations.
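As a concrete illustration of the caching consideration above, here is a minimal sketch of a TTL-based edge cache in Python. The names (`EdgeCache`, `fetch_from_core`) are hypothetical, not any particular product's API; the point is simply that a local copy with an expiry avoids repeat round trips to the core region.

```python
import time

class EdgeCache:
    """Minimal TTL cache: serve local copies, fall back to the core on miss."""

    def __init__(self, fetch_from_core, ttl_seconds=60):
        self.fetch_from_core = fetch_from_core  # callable that hits the core region
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]  # cache hit: no round trip to the core
        value = self.fetch_from_core(key)  # cache miss: fetch and store locally
        self.store[key] = (value, time.time() + self.ttl)
        return value

# Usage: count how often the core is actually contacted
calls = []
cache = EdgeCache(lambda k: calls.append(k) or f"data:{k}", ttl_seconds=60)
cache.get("video/intro")
cache.get("video/intro")  # second call is served from the edge
```

Within the TTL window, repeated requests for the same content never leave the edge location, which is exactly how caching converts a latency problem into a (smaller) freshness problem.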
A Two-Leg Journey
In an environment that combines a cloud core with edge locations, network design decisions can generally be split into two categories: connecting the edge to the core and connecting the edge to end users. Let's take a closer look at each.
Connecting the Edge to the Cloud
First, consider the relationship between reliability and cost of the connections between the cloud and the edge. One advantage of the public cloud is its accessibility over the public internet, which is relatively low-cost. However, while internet connections are easy to access and to scale in bandwidth, they are less reliable than private ones.
Using public internet connections also raises security concerns. They make data vulnerable to interception and common threats such as person-in-the-middle attacks. Virtual Private Networks can be used to mitigate security and privacy risks, but they do not address the reliability factor.
If reliability is critical for your application, dedicated connections from edge devices to the public cloud core should be considered. They come at a higher cost but offer greater reliability. Cloud providers offer dedicated connections with varying bandwidth and generally lower latency than the public internet. These private connections are often available in the regional data centers that are ideal for hosting edge nodes.
Determining whether the public internet will be sufficient (possibly through a VPN connection) requires understanding your application and workloads. With cloud networking, you can easily test an internet connection between your edge location and the public cloud to see whether it will be necessary to budget for the higher-cost private links. (One convenient way to test an architecture is by using an on-demand Equinix Metal server in one of your edge metros, where it can be easily provided with internet access and/or a private connection to your cloud provider.)
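One simple way to start characterizing an internet path before committing to private links is to measure connection setup time from the edge location toward your cloud endpoint. A rough Python sketch, using only the standard library; the host and port are whatever you choose to test against, and TCP handshake time is only a coarse proxy for round-trip latency:

```python
import socket
import time

def tcp_connect_latency_ms(host, port=443, samples=5):
    """Measure median TCP handshake time to a host, a rough proxy for RTT."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # A full three-way handshake completes before create_connection returns
        with socket.create_connection((host, port), timeout=3):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]  # median is less noisy than the mean
```

Running this from a candidate edge metro against your cloud region over a few hours gives a baseline; if the measured latency and its variance already meet your application's requirements, the higher-cost private link may not be necessary.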
How much redundancy your connections should have is another question to answer here. A redundant connection is highly available but it also costs twice as much as a single one, so this decision also calls for some careful cost-benefit analysis.
With redundant connections in play, load balancing can be used to distribute the load between them. Load balancing software and devices can be used to automatically route traffic to a healthy connection when there is an outage.
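The failover behavior described above can be sketched in a few lines. The connection names and the health-check callable here are hypothetical stand-ins for whatever monitoring your load balancer actually performs:

```python
def pick_connection(connections, health_check):
    """Return the first healthy connection, preferring the primary."""
    for conn in connections:  # ordered: primary first, backup second
        if health_check(conn):
            return conn
    raise RuntimeError("no healthy connection available")

# Usage: the primary link is down, so traffic shifts to the backup
links = ["private-primary", "private-backup"]
healthy = {"private-primary": False, "private-backup": True}
assert pick_connection(links, lambda c: healthy[c]) == "private-backup"
```

Real load balancers also spread traffic across healthy links rather than just failing over, but the core idea is the same: route decisions follow health state, not static configuration.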
Connecting the Edge to the Users
Providing users with a connection to the edge that has sufficient performance while being secure and reliable is also essential. These connections often involve local internet service providers (ISPs) and peering at internet exchanges.
When selecting local ISPs, factors beyond latency and bandwidth should be considered—such as reliability, support and scalability. Service Level Agreements should align with application requirements.
Peering at internet exchanges can be a way to improve performance, reduce costs and increase redundancy. Factors like traffic patterns, costs and geographic location should be taken into account here.
If the use case is streaming media, for example, latency and bandwidth of the user’s connection greatly affect their experience. Put simply, they need a reliable connection to ensure smooth media playback. Streaming technologies are often designed with latency and bandwidth in mind and are typically accessed over standard internet connections.
If the use case is a data-collecting system (such as a server in a lab, a healthcare facility, a manufacturing plant or a retail branch), it typically stores data locally while also periodically sending data to the cloud for analysis. Users connect to it through a local network or the internet.
Latency and bandwidth are concerns here, too, especially when data volumes are high, so establishing acceptable minimums for latency and bandwidth is crucial. Storage on the edge device holds data for later transmission, while end-user devices may also have application space for data preloading or aggregation. These scenarios can be managed by software at the edge and on end-user devices.
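The store-and-forward pattern described above, where an edge device holds data locally and periodically ships batches to the cloud, can be sketched as follows. The `upload` callable stands in for whatever transport you actually use; this is a simplified illustration, not a production queue:

```python
from collections import deque

class EdgeBuffer:
    """Store readings locally, flush them to the cloud in batches (sketch)."""

    def __init__(self, upload, batch_size=100):
        self.upload = upload          # callable that sends one batch to the cloud
        self.batch_size = batch_size
        self.pending = deque()        # local storage holding unsent data

    def record(self, reading):
        self.pending.append(reading)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            batch = list(self.pending)
            self.upload(batch)        # if this raises, data stays queued for retry
            self.pending.clear()      # only cleared after a successful upload

# Usage: batches of two readings reach the "cloud"
uploads = []
buf = EdgeBuffer(uploads.append, batch_size=2)
buf.record({"sensor": 1})
buf.record({"sensor": 2})  # hits batch_size, triggers a flush
```

Note the ordering in `flush`: local data is cleared only after the upload succeeds, so a failed transmission leaves everything buffered at the edge for the next attempt.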
It's important to note that minimum connection requirements are often specified in agreements between equipment or software vendors and end users. Falling below these thresholds may result in an unreliable user experience and lack of support.
Equipping the Edge
The proliferation of edge infrastructure has made edge routers an ever more important piece of networking technology. Deployed as either dedicated hardware or software running on commodity servers located at the edge, these routers not only provide familiar network services but also offer additional security features like encryption and firewall capabilities.
Versatility is a key characteristic of edge routers, as they can support various types of network connections. Whether it's cellular, satellite, MPLS, or consumer broadband, edge routers can typically accommodate these different connectivity options, enabling standardized deployments irrespective of specific site requirements.
Synchronization, Data Protection and Security
Two additional considerations that should not be overlooked when designing a cloud network that extends to the edge are keeping data synchronized and keeping it protected and secure.
Staying in Sync
Keeping data synchronized between the cloud and edge infrastructure is crucial, regardless of the direction of data flow.
In the streaming-media example, where content delivery networks are often used, data is regularly updated and pushed to a CDN. Adequate bandwidth is essential to ensuring these updates are timely.
Data also often needs to be sent back to the cloud for analysis and storage. Here, recovery objectives come into play. To ensure data availability for recovery operations, edge data must be transmitted back to the cloud based on recovery point objectives, which, again, requires sufficient bandwidth.
An edge device may have its own storage or rely on another edge device for local data copies, which are then transmitted back to the cloud at a set interval. This approach facilitates smooth recovery to a new edge device or the cloud.
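A quick back-of-the-envelope check ties bandwidth and recovery objectives together: given an uplink's bandwidth, can a given backlog of edge data reach the cloud within the recovery point objective? A simplified sketch that ignores protocol overhead and contention, using decimal units (1 GB = 8,000 megabits):

```python
def sync_fits_rpo(backlog_gb, uplink_mbps, rpo_minutes):
    """Can a backlog of edge data be shipped to the cloud within the RPO window?

    Simplified: ignores protocol overhead, retries and competing traffic.
    """
    transfer_seconds = (backlog_gb * 8000) / uplink_mbps  # GB -> megabits
    return transfer_seconds <= rpo_minutes * 60

# 2 GB over a 100 Mbps uplink takes 160 s, well inside a 15-minute RPO
assert sync_fits_rpo(backlog_gb=2, uplink_mbps=100, rpo_minutes=15)
```

If the check fails, the options are the ones discussed throughout this article: a bigger (possibly private) link, a longer RPO, or less data shipped per window.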
Securing the Edge
Edge devices are often located in less secure environments, such as open areas in medical settings or utility rooms in retail stores. While physical access control is important, it may not always be feasible. This is why most edge routers include security features. (Another benefit of synchronization is that it helps ensure that data is sent back to a secure location to meet recovery objectives.)
Adopting an "assume breach" security methodology can help mitigate risks and prepare for potential attacks by malicious actors. By incorporating appropriate design measures, security can be enhanced and vulnerabilities reduced.
Embracing edge networking while maintaining a robust cloud network requires careful consideration. Understanding the bandwidth and latency requirements of your applications, ensuring data synchronization and protection and being prepared for potential security breaches at the edge are all vital components to work through.
Each scenario will have its own unique aspects. Streaming media may require low latency and a lot of bandwidth, while data-collecting devices may prioritize local storage and timely transmission to the cloud. The key is to understand your application's specific needs, plan your network design accordingly and prepare for the possibility of security breaches at the edge.
As the digital landscape evolves, our strategies for designing efficient, secure and reliable networks must also evolve. The convergence of edge networking and cloud deployment offers organizations new opportunities to optimize their networks for their specific requirements.