How many times have you sat in a restaurant and said, “The service is a bit slow in here”? Chances are you either walked out and decided not to eat there again, or you advised friends not to go near the place. Blame the people who run the restaurant: they probably didn’t consider the number of customers that would turn up and didn’t staff up accordingly. Let’s not be too hard on them, though: walk into the restaurant on another day and it might be milling with staff staring at empty tables. Maybe you just caught them on a bad night.
Your IT department can feel like that restaurant sometimes. Your customers—in this case the business executives, external customers, internal employees and others—all want their applications to work fast and flawlessly. And another customer—the CFO—also wants to avoid picking up an unnecessarily big bill at the end.
It all comes down to capacity planning: being able to reliably predict how much IT capacity is needed on the basis of past application performance so you can organize your infrastructure accordingly. Get it right and you deliver an exceptional end-user experience, reduce cost and minimize risk. Get it wrong and you risk losing business to application downtime, paying high capital expenditure (capex) for an under-utilized infrastructure, and never being sure how effectively your infrastructure is delivering on service levels.
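What does that prediction look like in practice? The arithmetic doesn’t have to be exotic. Here’s a minimal sketch, purely illustrative and not drawn from any particular tool, that fits a straight-line trend to past monthly peak utilization and estimates how long you have before a server crosses a capacity ceiling. The history figures and the 80 percent ceiling are assumptions for the example.

```python
# Minimal sketch (not from the article): project when a server's peak
# utilization will cross a capacity ceiling, using a simple linear trend
# fitted to past monthly measurements. All figures below are illustrative.
from statistics import mean

def months_until_threshold(history, threshold=80.0):
    """Fit a straight-line trend to monthly peak utilization (%) and
    estimate how many months remain before it crosses `threshold`."""
    xs = range(len(history))
    x_bar, y_bar = mean(xs), mean(history)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / \
            sum((x - x_bar) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or falling demand: no projected breach
    return max(0.0, (threshold - history[-1]) / slope)

# Hypothetical peak CPU utilization (%) over the last six months
print(months_until_threshold([48, 52, 55, 61, 64, 70]))  # roughly 2.3 months to 80%
```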
Planning capacity is a big challenge. Your budgets are as flat as the atmosphere in that understaffed restaurant, while the demand for IT services has skyrocketed, driven by the march of mobile services, cloud and the consumerization of IT. Moreover, complexity is everywhere: a blend of physical, virtual, cloud and mainframe environments all must be optimized to deliver business-critical applications.
First solution to the problem? Overprovision. Do what the restaurant manager would do, and build up your resources (in this case your infrastructure) to meet the forecast increase in demand for IT services. You probably won’t be in business for long if you do, though. If the demand doesn’t materialize the way you think it will, you are left with idle servers. Overprovisioned data centers not only call for higher capex, they also drive up operational costs (opex) through maintenance, upgrades, power and licensing. And at the back of your mind is the knowledge that the average utilization of a virtualized server can be as low as 20 percent.
Application performance management (APM) has most of the answers. APM captures your transaction performance data from the likely problem sources: applications, end users and the infrastructure. It then employs integrated end-user experience information to help prioritize problem resolution and manage service-level agreements. But you still need a way to cost-effectively address the capacity issue without increasing risk to the business.
Unified Predictive Capacity Planning
The real answer lies in blending the power of APM with the control of capacity management. By applying both APM and capacity management in a unified predictive capacity-planning solution, you can more reliably forecast future capacity needs. A predictive reliability solution allows you to mitigate risks, help ensure quality of service and right-size your application delivery environment while optimizing costs.
APM works in two ways. First, when a problem occurs, it alerts IT operations to an incident—like a server running at more than 80 percent utilization. You recognize the need to take action and reach into the server workload data. You then use the capacity-management component to run “what if?” scenarios for the affected server and entire application delivery chain, solving the system problem that caused the APM alert.
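To make that first scenario concrete, here’s a minimal sketch of the pattern, purely illustrative and not a product API. The host names and numbers are made up; the idea is simply to flag hosts breaching the 80 percent alert and project what utilization would look like under a candidate change.

```python
# Minimal sketch (illustrative only, not a product API): flag hosts breaching
# an 80 percent utilization alert, then run a simple "what if?" projection on
# whether adding capacity or shedding workload would clear the alert.
ALERT_THRESHOLD = 80.0  # percent utilization that triggers the alert

def what_if(util_pct, extra_capacity_pct=0.0, shed_workload_pct=0.0):
    """Project utilization after adding capacity and/or moving load off the host."""
    load = util_pct * (1 - shed_workload_pct / 100)   # remaining load, in "old capacity" units
    capacity = 100 + extra_capacity_pct               # new capacity relative to the old 100%
    return load / capacity * 100

hosts = {"app-01": 91.0, "app-02": 63.0, "db-01": 84.0}  # hypothetical workload data

for host, util in hosts.items():
    if util > ALERT_THRESHOLD:
        projected = what_if(util, extra_capacity_pct=25)  # scenario: add 25% more capacity
        print(f"{host}: {util:.0f}% now, {projected:.0f}% after the change")
```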
Take the real-world case of a multinational food and beverage company. Its existing testing process failed to uncover bottlenecks in its large-scale SAP environment. Testing was also a very expensive process, and the company was under pressure to cut costs and improve service delivery. This food and beverage company deployed a predictive reliability solution to deliver an early-warning system during the design, test and production phases. This meant the development teams could discuss the data, redesign and avoid problems that could have occurred in production. As a result, the company saved significant time and money by catching design problems early in the life cycle.
The second scenario for predictive capacity planning involves right-sizing your environment for future growth. Using APM performance data from your production environment, the solution enables you to conduct scenario analyses simulating different load patterns. This information allows you to optimize your production infrastructure with the right system configurations based on the planned workload.
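A rough illustration of that kind of scenario analysis might look like the sketch below. The per-server throughput, headroom target and load patterns are all assumptions for the example, not figures from the solution itself.

```python
# Minimal sketch (assumed figures, not the solution's model): size a server
# farm for several planned load scenarios while keeping enough headroom that
# peaks stay below alerting thresholds.
import math

PER_SERVER_TPS = 400        # transactions/sec one server sustains flat out (assumed)
TARGET_UTILIZATION = 0.70   # run servers at 70% so peaks have headroom

def servers_needed(peak_tps):
    """Servers required to carry `peak_tps` without exceeding the target utilization."""
    return math.ceil(peak_tps / (PER_SERVER_TPS * TARGET_UTILIZATION))

# Hypothetical load patterns informed by APM production data
scenarios = {"current peak": 1800, "holiday promotion": 3200, "mobile launch": 5000}
for name, tps in scenarios.items():
    print(f"{name}: {tps} tps -> {servers_needed(tps)} servers")
```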
That’s how a leading financial-services organization uses predictive capacity planning. The company was challenged to adapt to changes in consumer expectations—always there, always on, always with me—and it also needed to lower the cost to build and operate applications. Model-based performance testing gave the firm confidence in what would happen for any given change to its application environments. Costs went down, performance went up and confidence in the supporting infrastructure soared.
Guessing how capacity is allocated and consumed is never going to be the answer. Predictive capacity planning lets you reliably right-size your infrastructure on the basis of real performance trends.
No conjecture, speculation or guesstimates.
About the Authors
Jeremy Rossbach (@jeremyrossbach) is Principal Product Marketing Manager for Capacity Management Solutions at CA Technologies. Jeremy has been in the IT industry for more than 15 years in both the public and private sectors, managing data centers for startups, health care, financial and federal system integrators. His roles as data center administrator, engineer, architect and manager have given him unique insights into the challenges and goals of today’s IT consumer. Jeremy earned a B.S. in Marketing from the University of Baltimore.
Jason Meserve (@jmeserve) has been working in high tech for over 15 years. He is currently Principal Product Marketing Manager at CA Technologies, focusing on service assurance solutions such as application performance management. Before CA Technologies, Jason worked as a journalist for Network World for 10 years, creating articles, features, blogs, videos and podcasts. He has also held marketing and editorial positions at Constant Contact and Application Development Trends.