Industry Perspective is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.
This week, Industry Perspective asks Andrew Mallaband about best practices for driving optimum IT economics and how data center professionals can deploy the right amount of IT resources for their projects. Andrew is Managing Director for VMTurbo. He leads the company’s global focus on the service-provider market, as well as heading sales and operations for Northern Europe. Andrew has spent the last 15 years working to improve the productivity of IT operations, including leadership roles at Logical Networks (acquired by Datatec), where he ran the Network and Systems Management group; SMARTS (acquired by EMC), which pioneered automated root-cause analysis; and Opalis (acquired by Microsoft), the market leader in IT process automation. Andrew is employee number seven at VMTurbo; he has worked closely with the engineering team and pilot customers to bring products to market, and has aided early customer acquisitions in the U.S., Europe and Asia.
Industry Perspective: How does applying economic principles to manage IT resources help organizations allocate resources and meet overall business goals in a cost-efficient manner?
Andrew Mallaband: Today many organizations host a large percentage of their IT applications on virtualized infrastructure. Virtualization introduces the concept of resource sharing so that pools of compute and storage assets can be shared by applications, thereby turning IT infrastructure into a true utility. It also introduces the capability to dynamically adjust the allocation of physical and virtual resources through software controls.
The big challenge facing organizations today is how to fully exploit these software controls to maintain equilibrium between the supply of shared compute and storage resources and application/workload demand, so that application performance is assured while maximizing the ROI from infrastructure and data center facilities. We refer to this as the intelligent workload management problem.
Using a management solution that employs abstraction and analytics based on economic principles of supply and demand provides a holistic way to address this intelligent workload management problem by determining what ongoing resource-allocation actions should be taken to maintain the equilibrium. We refer to this approach as economic scheduling of IT resources. Customers who have adopted this approach are typically realizing savings of 20–30% in ongoing infrastructure costs, cutting operational expenses associated with problem and incident management by more than 50%, and reducing the risk of application outages and brownouts, which are very costly to the business.
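The core idea can be illustrated with a minimal sketch (not VMTurbo's actual algorithm): hosts "sell" capacity at a price that rises steeply as their utilization approaches 100%, and each workload "buys" from the host offering the lowest price. The host names, capacities and demand figures below are illustrative.

```python
def price(used, capacity):
    """Price of one unit of capacity; rises toward infinity as the host fills up."""
    utilization = used / capacity
    return 1.0 / (1.0 - utilization) if utilization < 1.0 else float("inf")

def place(workloads, hosts):
    """Greedily place each workload on the host that would charge the lowest price."""
    placement = {}
    for name, demand in workloads.items():
        # Compare the price each host would charge after taking on this workload.
        best = min(hosts, key=lambda h: price(hosts[h]["used"] + demand,
                                              hosts[h]["cap"]))
        hosts[best]["used"] += demand
        placement[name] = best
    return placement

hosts = {"esx-01": {"used": 20, "cap": 100},
         "esx-02": {"used": 60, "cap": 100}}
workloads = {"app-vm": 30, "db-vm": 25}
print(place(workloads, hosts))  # both workloads land on the less-utilized host
```

Because price grows nonlinearly with utilization, hot hosts become expensive before they are actually saturated, which is what keeps the system near equilibrium rather than reacting only after a shortage.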
IP: How does the CIO (or another data center professional) determine the optimum level of IT economics?
AM: The beauty of using an economic scheduling algorithm is that market principles of supply and demand automatically determine what actions need to be taken to maintain the equilibrium between the supply of resources and application/workload demand. This means that customers deploying such an approach can realize immediate results—literally within 20 minutes of deployment, without any configuration.
Organizations that want to be more sophisticated in their use of economic scheduling can define simple policies to set the equilibrium point at different levels in different parts of their infrastructure. As an example, an organization may be less concerned about performance in test and development and may be happy to run this environment hotter than production. It is also possible to set policies to prioritize which workloads get access to compute and storage resources in the event of infrastructure shortfalls, such as a major outage. This means the applications that are most important to the business can continue to run in the event of a disaster.
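Such policies can be sketched in a few lines; this is a hedged illustration, not a real product configuration. The environment names, utilization targets and priority scheme are assumptions: test/dev is allowed to run hotter than production, and during a shortfall the highest-priority workloads are kept first.

```python
# Illustrative per-environment equilibrium targets.
POLICIES = {
    "production": {"target_utilization": 0.70},
    "test-dev":   {"target_utilization": 0.90},  # allowed to run hotter
}

def needs_rebalance(env, utilization):
    """True when an environment has exceeded its policy's equilibrium point."""
    return utilization > POLICIES[env]["target_utilization"]

def shed_workloads(workloads, capacity):
    """During a shortfall, keep the most critical workloads that still fit."""
    kept = []
    for wl in sorted(workloads, key=lambda w: w["priority"]):  # 1 = most critical
        if wl["demand"] <= capacity:
            kept.append(wl["name"])
            capacity -= wl["demand"]
    return kept

print(needs_rebalance("test-dev", 0.85))    # False: within the hotter target
print(shed_workloads([{"name": "billing", "priority": 1, "demand": 40},
                      {"name": "reports", "priority": 3, "demand": 50},
                      {"name": "crm",     "priority": 2, "demand": 30}], 80))
```

The same 85% utilization that is acceptable in test/dev would trigger rebalancing in production, and in a disaster scenario the lower-priority reporting workload is the one that loses access to the remaining capacity.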
IP: How does an organization implement this optimum level of IT economics?
AM: Actions recommended through economic scheduling can be executed automatically through software or manually by operations staff. Certain actions, such as adding server or storage infrastructure capacity to meet increased demand, might involve human intervention to physically add resources, but new hardware architectures like Cisco UCS and NetApp’s Cluster Mode enable full automation of even these types of actions, since the profile of physical resources can be dynamically adjusted through software capabilities inherent in these technologies.
IP: What is the role of intelligent workload management in this process?
AM: IT costs can quickly escalate as you scale out virtual and cloud infrastructure. At the same time, owing to the complexities of exploiting shared compute and storage resources, application/workload performance issues are common and often grow in proportion to the scale of the infrastructure. Maintaining the equilibrium between performance and efficiency is imperative to ensuring that the IT department can continue to provide excellent service to the business at the lowest possible cost.
IP: Can you offer a few examples of intelligent workload management in action?
AM: Here are three great examples of how effective intelligent workload management is for customers:
- Global Investment & Retail Bank, in the first year of deployment, achieved a net savings of $6 million in server infrastructure and software licenses by increasing infrastructure utilization by 10%.
- European Telco experienced about 30 incidents per day attributed to issues with workload performance. Each one resulted in one to two hours of triage and remediation by IT operations, with an estimated cost in excess of $800,000 per annum. Using economic scheduling, it reduced this bill by more than 50%.
- US Insurance Company deferred $2 million of an investment in an infrastructure refresh by more accurately assessing the requirements of the project.
IP: What can CIOs/data center professionals do to enable optimal IT economics?
AM: Here are three core best practices to follow:
- Put in place the capability to continually measure workload consumption of the holistic set of compute and storage resources. Simultaneously, the solution should have the intelligence and automation to execute ongoing workload placement and resource allocation actions to meet performance and efficiency goals, without the need for human intervention (decision making) and its associated limitations.
- Exploit the same approach to plan for the future so that the impact of planned changes in workload demand and infrastructure investments can be accurately simulated.
- Set aggressive KPIs for your delivery teams so they are focused on exploiting the solution to reduce IT delivery costs through improvements in infrastructure utilization/efficiency and reductions in problems/incidents.
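The second practice above, simulating planned changes in demand, can be sketched as a simple what-if calculation. This is a hedged illustration under assumed figures (standard host size, growth factor, utilization target), not a real planning tool: scale current demand by projected growth and report how many additional hosts are needed to stay under the target utilization.

```python
import math

def hosts_needed(current_demand, growth_factor, host_capacity, target_util):
    """Total hosts required so projected demand stays under the target utilization."""
    projected = current_demand * growth_factor
    usable_per_host = host_capacity * target_util  # headroom kept for performance
    return math.ceil(projected / usable_per_host)

# Illustrative scenario: 10 hosts today, 50% demand growth expected.
current_hosts = 10
extra = hosts_needed(current_demand=560, growth_factor=1.5,
                     host_capacity=100, target_util=0.70) - current_hosts
print(extra)  # -> 2 additional hosts for this scenario
```

Running the same calculation against measured, per-workload demand rather than a single aggregate number is what lets an organization defer or right-size an infrastructure refresh, as in the insurance-company example above.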
IP: What should CIOs/data center professionals not do to maintain optimal IT economics?
AM: Solving the intelligent workload management problem should not be treated as an IT monitoring project. Today many organizations are deploying IT monitoring solutions so they can measure the utilization of IT resources. Although this provides line of sight to the state of IT, what is missing from this picture is the ability to drive optimal resource-allocation decisions. This is fundamental to an organization’s ability to manage IT economics—that is, sweating IT assets while meeting service-level objectives.
Today many IT delivery organizations overprovision IT infrastructure in an attempt to assure workload performance. This is a highly inefficient approach that reduces the competitiveness of internal IT.
IP: What is the outlook for intelligent workload management? Will CIOs and data center professionals be able to further increase their capabilities in this area?
AM: Solutions that solve the intelligent workload management problem will eventually be used by organizations to fully automate data center delivery processes and operations management. This includes the planning, deployment and ongoing real-time optimization of workload performance and infrastructure efficiency.
Virtualization is a key enabler for economic scheduling of IT resources because it introduces the ability to dynamically adjust resource allocation and workload placement through software controls. In this domain alone, the penetration of economic scheduling is still relatively low: between 1% and 2%. The potential for adoption is very significant, on the basis of the business value that can be realized by any organization that has more than just a few hundred virtualized workloads.
As more vendors of server and storage infrastructure implement software controls to dynamically adjust resource allocation in the physical infrastructure, new opportunities will arise to expand the scope of economic scheduling. We believe it can play a role in driving greater value from IT assets in virtualized and cloud environments.
Software-defined networking and hybrid cloud also create new opportunities to expand the scope of how economic scheduling can be employed because these technologies remove many of the networking constraints that limit where specific workloads can run in internal data centers or external service-provider clouds. Using economic scheduling increases the ROI that IT departments can realize from their internal IT infrastructure and external services by determining which workloads should run where and when to meet efficiency and service-level objectives.