Even as applications move from traditional data centers to the cloud, server load balancing continues to be a core element of IT infrastructure. Whether servers are real or virtual, permanent or ephemeral, there is always a need to intelligently distribute workloads across those multiple servers.
But there remains a chronic gap in the ability to reliably distribute workloads across multiple clouds, multiple data centers and hybrid infrastructures. The result is poorly distributed workloads and degraded application performance that could be avoided if workloads were better managed globally. In short, there is a need for better global server load balancing (GSLB).
The Cloud and Load Balancing
Also referred to as application-delivery controllers (ADCs), load balancers are widely deployed in data centers. Their function is to distribute workloads to back-end servers, thereby ensuring optimum use of aggregate server capacity and better application performance.
Providers including Citrix, F5, Kemp Technologies and Radware occupy the traditional load-balancer market. Their hardware ADCs have been the go-to solutions for infrastructure and operations teams for some time. Recently, software-based ADCs from these vendors and software-only solutions such as HAProxy, Nginx and Amazon ELB have emerged as enterprises have moved applications to the cloud.
Organizations can implement multi-data-center, multi-cloud GSLB using one of two basic approaches. The first is to use a traditional managed-DNS provider for basic traffic management. It has the advantage of being easy to implement, low in cost and reliable, requiring no capital outlay. Unfortunately, it offers only minimal traffic-management capabilities such as round-robin DNS and geo-routing. These approaches fail to prevent maldistribution of workloads because they use fixed, static rules rather than basing traffic routing on the real-time workloads and capacity at each data center. For example, geo-routing can only ensure that users (and their workloads) are sent to the geographically closest data center. It cannot account for uneven distribution of users geographically, local demand spikes or server outages in a data center.
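The limitation can be sketched in a few lines of Python (the site names and load figures are illustrative, not real measurements):

```python
from itertools import cycle

# Hypothetical data centers with current load as a fraction of capacity.
datacenters = {"us-east": 0.95, "eu-west": 0.30, "ap-south": 0.40}

# Round-robin DNS hands out each site's address in turn, blind to load:
rr = cycle(datacenters)
picks = [next(rr) for _ in range(6)]
# us-east keeps receiving new sessions even at 95 percent utilization.
print(picks)

# A load-aware choice would instead steer sessions to the least-loaded site:
best = min(datacenters, key=datacenters.get)
print(best)  # 'eu-west'
```

Geo-routing fails the same way: it substitutes "nearest" for "next in rotation," but still never consults the load figure.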
Many ADC vendors offer their own purpose-built DNS appliances that have a tighter integration with their load balancers to address these limitations. This is the second basic approach. These appliances can make traffic-management decisions on the basis of actual use levels at each data center by receiving real-time load and capacity information from the local load balancers.
This benefit, however, is overshadowed by tradeoffs that many enterprises find unpalatable:
- These are typically high-performance network appliances with a high purchase price. Because they must be widely deployed, redundantly configured and defended from attack, the overall solution incurs both high capex and high opex.
- The performance of a DNS hosted at a single data center is inadequate to meet the needs of a global user base, but the cost and complexity of deploying a globally ubiquitous DNS are prohibitive for most enterprises.
- Attacks on DNS, particularly DDoS, are widespread and difficult to mitigate. A self-hosted DNS becomes a single point of failure for the enterprise’s Internet-facing services, and deploying and defending it becomes an added operational and cost burden.
- DNS is a mission-critical service that requires specialized skills to operate correctly with 100 percent availability. Few enterprises are equipped for this task.
Consequently, most enterprises that have deployed data center load balancers aren’t using the GSLB functions available from their load-balancer vendor, and those that have deployed GSLB functions are open to replacing them with a better solution. A superior approach is a cloud-based, managed GSLB solution that uses real-time telemetry from load balancers to make intelligent traffic-management decisions.
GSLB as a Service
GSLB is best delivered as a cloud-based managed service. The core attributes and advantages of such an approach are as follows:
- Active. An effective GSLB solution needs to do more than direct workloads away from points of presence that are overloaded. It should prevent overload conditions from happening in the first place. Doing so requires the capability to detect the onset of overload conditions and shift traffic appropriately, whether those conditions are due to demand spikes, loss of capacity or both.
- Open interface for ingesting real-time telemetry. Most companies using the cloud run a hybrid architecture (RightScale, 2017), with some servers in private data centers and some in the cloud. Because enterprises deploying hybrid infrastructures often use a mix of ADC types, both commercial and open source, the GSLB solution needs an open interface for collecting real-time data from disparate ADCs.
- Cloud-based as a service. As noted above, basic managed DNS doesn’t offer great traffic management but is attractive from the perspective of being globally available, well performing and well managed. A cloud-based GSLB solution must retain those attributes while offering true real-time GSLB capabilities.
- Lower costs all around. By definition, cloud-based GSLB as a service reduces capex, as there’s no need to purchase hardware or software appliances. Running one’s own authoritative DNS requires a global deployment for high performance, and that deployment must be designed for redundancy, protected from attack, maintained and staffed 24x7. A managed GSLB solution is thus likely to have lower capital and operating expenditures alike.
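To make the open telemetry interface above concrete, here is a hypothetical sketch of the payload an ADC-side agent might push to a cloud GSLB service. The endpoint and field names are assumptions for illustration, not any vendor's actual API:

```python
import json
import time

def build_telemetry(site, active, capacity):
    """Build a JSON telemetry report for one data center.
    Field names are illustrative, not a real GSLB API schema."""
    return json.dumps({
        "site": site,
        "timestamp": int(time.time()),
        "active_connections": active,
        "capacity": capacity,
        "utilization": round(active / capacity, 3),
    })

payload = build_telemetry("us-east", 900, 1000)
# An agent would periodically POST this to the GSLB service, e.g.
# https://gslb.example.com/v1/telemetry (hypothetical endpoint).
print(payload)
```

Because the payload is plain JSON over HTTP, any ADC, commercial or open source, could feed the same interface with a small agent or script.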
Living the Dream
It’s now possible to enjoy the best of both worlds: a globally performing, reliable managed DNS service and advanced traffic-management capabilities that were previously available only with proprietary ADC solutions. This combined offering provides new opportunities for enterprises to prevent maldistribution of application workloads and deliver better overall application performance as well as a better, more consistent end-user experience.
About the Author
Jonathan Lewis brings to NS1 over 25 years of IT-industry experience comprising product management, product marketing, customer service and systems engineering. Jonathan has played key roles contributing to the success of several industry-leading companies including Nortel, Arbor Networks, and SSH Communications Security (SSH1V). He holds BS and MS degrees from McGill University, an MBA from Bentley College and CISSP certification.