We are currently witnessing a transformation in the way organizations purchase and deliver IT. The commoditization of computer hardware, the use of virtualization and the availability of cloud and colocation services have opened up a number of options for companies to reduce IT costs. Before taking advantage of these options, however, businesses must determine which option represents the best financial move. They need to consider the costs and expertise that have already been sunk into existing infrastructure, particularly data centers. Essentially, organizations need a clear overview of all their IT assets and an understanding of the exact cost of any proposed IT project to arrive at the best decision.
Cost Is Still an Afterthought
Regardless of how it sources its IT services, any self-respecting organization should be able to match capacity with requirements. The problem is that in many cases, data center capacity far exceeds the organization’s needs, with little to no understanding of cost or of how this unused capacity affects the business. As a result, the organization is spending resources on assets that provide no value, and data center operators who want to attract further investment from the CFO will find that much harder to do while sitting on a financial black hole.
For instance, in the effort to reduce the costs of running and operating IT, many organizations view the cloud or colocation as alternatives to in-house IT services. If the organization is still running its own data center, however, this approach might not always provide the presumed savings. Quite simply, the organization can find itself paying not only for the cloud or colocation space, but also for its existing infrastructure, which will now be increasingly underutilized and so provide even less value. To use an analogy: if you already owned a house, you wouldn’t rent an apartment, split your family and belongings between the two, and keep paying the full bills for both while the house lay half empty. Moreover, if the two costs (one a capital cost dominated by the mortgage, the other an operating cost dominated by the rent) cannot be compared on a “unit costing” basis, accurately determining which is the better financial option becomes very difficult.
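The mechanics of that “unit costing” comparison can be sketched in a few lines: amortize the capital-dominated cost over its lifetime, add the monthly operating cost, and divide by the units actually in use so that both options are expressed in the same per-unit terms. All figures, names and the per-VM unit below are hypothetical assumptions for illustration, not real pricing.

```python
# A minimal sketch of unit costing, assuming hypothetical figures.
# Capital-dominated (in-house) and operating-dominated (cloud) costs
# are normalized to a comparable monthly cost per unit (here, per VM).

def in_house_unit_cost(capex, amortization_months, monthly_opex, units_in_use):
    """Monthly cost per unit for owned infrastructure.

    Capex is spread evenly over its amortization period and added to the
    monthly operating cost; the total is divided by the units actually in
    use, so low utilization drives the unit cost up.
    """
    monthly_capex = capex / amortization_months
    return (monthly_capex + monthly_opex) / units_in_use

def cloud_unit_cost(monthly_fee_per_unit):
    """Cloud pricing is already quoted as a per-unit operating cost."""
    return monthly_fee_per_unit

# Hypothetical example: a $1.2M build amortized over 10 years (120 months),
# $30k/month to run, with only 400 VMs' worth of capacity actually in use.
owned = in_house_unit_cost(1_200_000, 120, 30_000, 400)   # -> 100.0
cloud = cloud_unit_cost(95.0)
print(f"in-house: ${owned:.2f}/VM-month, cloud: ${cloud:.2f}/VM-month")
```

Note how the in-house figure is dominated by the denominator: with the same sunk and operating costs, filling more of the existing capacity is what drives the unit cost down.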
Data center operators are currently in a Catch-22 situation. Demands for greater cost savings in IT make outsourcing and the public cloud an attractive proposition. Yet the more services that are removed, and the more IT infrastructure is reduced, the more costly in-house services become, and any historic investment business case for in-house data centers becomes compromised. The ultimate end game of this situation could be the complete winding down of in-house data centers, with all IT services instead placed in the cloud or with a colocation provider. This outcome would be unacceptable to the vast majority of organizations for many reasons, not least that it means relinquishing control of critical services and data.
When making any decision, the ultimate consideration should be the cost to the business: after all, CFOs will only support investment if they can see a clear return. Unless the organization makes a choice resulting in a lower overall total cost of ownership (TCO) than the status quo, data center owners are creating a false economy: increasing costs to the business overall without any benefit to the bottom line. Instead, the organization must add up every potential cost arising from its decision, whether it appears on the power bill or in the infrastructure budget.
With the cloud and colocation, CIOs have the opportunity to break the “service monoculture” that existed traditionally with a single-cost/single-SLA solution. Instead they can begin to target services into appropriate cost and service-level buckets, delivering to the business much more value for each invested and operational dollar. The most important thing, however, is to recognize how much of the available capacity is actually being utilized versus sitting empty and still being paid for, then identify a rationalization that drives up the use of capacity that already has sunk and/or ongoing contractual costs associated with it.
Energy Efficiency Is Not Financial Efficiency
For this process to happen, organizations need to understand just what the different options cost them. Although power usage effectiveness (PUE) and other efficiency-focused metrics have been touted as the best way to measure a data center’s value, ultimately the TCO and unit costing should be at the forefront of decisions—especially because they are more likely to make sense to the CFO. Once they have their data center TCO, organizations should then make sure that they have the TCO breakdown for each IT service. With this information, CFOs can then begin to understand what will happen to IT costs as data center usage changes, and they can support decisions that include a real view of the financial impact for any given scenario.
So why aren’t organizations doing this right now? Understanding the true costs of data centers, then normalizing and comparing those costs in a highly complex environment, involves a great many variables and substantial human resources. Too often this complexity means that simplifying assumptions are made and “fudge factors” added to reach an expected result, one that is often incorrect or at best suboptimal. It doesn’t have to be this way, however. It’s now possible to quickly and accurately predict the operational and financial impact of any data center decision, whether that means updating cooling systems or removing servers. Eventually CIOs will need to decide on the future of their data center infrastructure. To gain a complete understanding of future investment, businesses should take great pains to understand their data center costs so that they can ensure that whatever investments are made in the future will be money well spent.
About the Author
Zahl Limbuwala is CEO and cofounder of Romonet and is passionate about the data center and ICT industry. Starting his career as a chartered electrical engineer, he then moved into IT systems engineering, network engineering and later software development. Zahl is a chartered engineer, chartered IT professional and Fellow of the BCS. He has 20 years of experience in companies such as Digital Island, Real Networks, Cable & Wireless and many engineering firms writing control and automation software for production-line automation and robotics.