Organizations of all sizes are looking at how their data center facilities can support their IT requirements for greater flexibility and responsiveness using cloud computing and virtualization. But implementing these technologies and strategies in existing data center environments is easier said than done. Data center managers must take their data center, which can be up to 15 years old, and rebuild it using the latest deployment principles.
One option is to take a more modular approach to data center design. The earliest adopters of this strategy, around five years ago, were Amazon and Google, and their deployments remain the usual reference for data center best practices in current deployment methodology. For companies evaluating where modular data center assets can help their own strategies, there are both lessons to be learned and pitfalls to avoid.
The growth of cloud computing is turning traditional thinking about data center design on its head. When looking at a data center refresh project, the first step is to consolidate and take the opportunity to deploy new IT equipment. Today’s Intel-based servers require much less electricity for the same amount of CPU power than older servers, so investing in new hardware will reduce your equipment power bill and cut your server cooling costs. Virtualizing these servers at the same time can remove up to 60 percent of the physical machines in the data center, with an equivalent reduction in power and cooling costs.
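The consolidation arithmetic above can be sketched with a simple calculation. The 60 percent reduction figure is the article's; the fleet size and per-server wattage below are illustrative assumptions, not measured values.

```python
# Rough estimate of the power saved by virtualizing a server fleet.
# Assumed inputs for illustration only:
OLD_SERVER_WATTS = 400            # assumed average draw of a legacy server
PHYSICAL_SERVERS = 200            # assumed starting fleet size
VIRTUALIZATION_REDUCTION = 0.60   # the article's "up to 60 percent" figure

# Servers remaining after virtualization removes 60 percent of the fleet.
remaining = round(PHYSICAL_SERVERS * (1 - VIRTUALIZATION_REDUCTION))

# IT power load before and after, in kilowatts.
old_load_kw = PHYSICAL_SERVERS * OLD_SERVER_WATTS / 1000
new_load_kw = remaining * OLD_SERVER_WATTS / 1000

print(remaining)                    # 80 physical servers left
print(old_load_kw - new_load_kw)    # 48.0 kW of IT load removed
```

Because cooling demand tracks IT load, a comparable fraction of cooling cost drops away as well, before counting the per-server efficiency gain of newer hardware.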
Alongside these steps, there are also opportunities to reduce power consumption at the network level. Investing in new networking equipment that supports Fibre Channel over Ethernet can help consolidate your LAN and SAN onto a single infrastructure, cutting server cabling costs by as much as 80 percent and halving support costs.
How you choose to deploy this equipment matters, and this is where some cloud and virtualization deployments fail to realize their full potential for energy efficiency. If you carry out deployments in the traditional way, 68 percent of the cold air will bypass the IT equipment, according to research by ASHRAE. This means you are unnecessarily paying double for cooling to achieve the required result.
Taking a modular design approach can solve this problem. Preconfigured cabinets that are implemented and approved by the manufacturers themselves are designed for the best possible PUE and need minimal maintenance. Products in this category range from a single cabinet preconfigured for a specific switch or server deployment, through preconfigured cabling (from simple patch cables to complete end-to-end data center patching), to complete preconfigured switch, server and storage assets loaded with a virtualization platform. Examples range from a single cabinet such as the NetApp FlexPod up to nine-cabinet 6000 Virtual Machine stacks from VCE; these can even be loaded into shipping containers for full “data center in a box” installations.
Managers should therefore look for the appropriately sized preconfigured solution that best suits their requirements, which means filling cabinets and packing as much density as possible into the available space. This approach follows hot aisle/cold aisle principles but also involves containment. You can add doors and a roof to either the hot aisle for “hot aisle containment” or the cold aisle for “cold aisle containment,” or use chimney doors to create a vertical exhaust system that contains the hot air. It does not matter which form of containment you choose: independent testing of all three solutions under the same IT load found that, when properly configured, each increases efficiency by 23 percent, reducing cooling expenditure.
To get the best out of virtualization, you will need to provision spare capacity. If you have reduced 200 servers down to 20, deploying some spare capacity will allow for dynamic provisioning. It also helps the data center team plan ahead without adding resources that consume power and cooling yet deliver no business results.
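One way to reason about how much spare capacity to deploy is to hold back a fixed headroom fraction. The 200-to-20 consolidation is the article's example; the 25 percent headroom figure and the `hosts_needed` helper are illustrative assumptions.

```python
import math

def hosts_needed(base_hosts: int, headroom: float) -> int:
    """Hosts to deploy so that `headroom` of total capacity stays free
    for dynamic provisioning (e.g. 0.25 keeps a quarter spare)."""
    return math.ceil(base_hosts / (1 - headroom))

# 20 hosts carry the steady-state load; keep 25 percent of capacity spare.
print(hosts_needed(20, 0.25))  # 27 hosts: 20 for the load, 7 in reserve
```

The spare hosts still draw power, so the headroom fraction is a deliberate trade between provisioning agility and the article's warning about resources that consume power and cooling without delivering business results.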
A Cooler Data Center Than You Think?
Cloud computing implementations are being sold on their flexibility and ability to scale up and down rapidly according to business demand. But the reality from the physical data center side is that power and cooling resources will be consumed whether the equipment is working hard or not.
The solution to this paradox is to use cooling that is closely coupled with the IT equipment. Inline cooling systems, from the likes of Stulz, are deployed within the cabinet rows and work with any of the containment regimes that might be in place.
The biggest variable here is virtualization. The virtual machine manager that the customer has in place moves the virtual machine load to the most appropriate resource in real time, according to business rules and the demand for services from other virtual machines. The inline cooling units monitor the changing load and adjust the airflow to suit. The network equipment can respond to computing load in the same way using the IEEE 802.3az Energy-Efficient Ethernet standard, which lets links drop into a low-power idle state. This varies the switches’ power draw in line with IT demand, reducing the overall power requirements of the switches themselves.
Older cooling systems relied on fixed-torque motors to run their fans, while the Stulz units use variable-torque motors. In the old systems, reducing the load by 20 percent drops the power consumption by the same amount—20 percent. With variable-torque motors the fan speed itself can be reduced, and because fan power scales with roughly the cube of fan speed, a 20 percent drop in the IT load cuts the cooling system’s power requirement by about 50 percent.
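The roughly 50 percent figure follows from the standard fan affinity law (power proportional to the cube of fan speed); the exact behaviour of any particular Stulz unit is not specified in the article, so this is a sketch of the underlying physics, not of the product.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law: power drawn at a given fraction of full speed
    is that fraction cubed."""
    return speed_fraction ** 3

# Airflow reduced by 20 percent, i.e. fans running at 80 percent speed:
power_at_80 = fan_power_fraction(0.80)
print(round(1 - power_at_80, 3))  # 0.488 -> roughly a 50 percent saving
```

By contrast, a fixed-speed fan modulating output some other way gives at best a linear relationship, which is why the variable-speed approach pulls so far ahead at partial load.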
Making the Move to Cloud—A Physical IT Perspective
To make the move to deploying cloud within an existing data center, the first step should be to create some space, and then use modular deployments to create a virtualized environment that will support any private cloud installation. The business can then install and run its application environments in parallel while any testing and configuration work is carried out.
This is a good test of whether support staff can maintain the new way of delivering services. Virtualization products are tried and tested, and have been running commercially for over five years, so if you are happy with the installation you can turn off the old systems and run entirely on the new module. Once running on the new module, you can decommission the old application servers and free up more space in the data center.
The Uptime Institute found from a survey of its user base that if your business is growing at seven percent per annum, your data center will double its switching, server and storage capacity within 10 years. If your business is growing at 10 percent per annum, that doubling occurs within seven years. This growth information can therefore help with long-term data center planning.
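Those survey figures match standard compound-growth arithmetic: at an annual growth rate r, capacity doubles in ln(2)/ln(1+r) years. The survey numbers are the article's; the formula below is the general rule, not something taken from the Uptime Institute study.

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for capacity to double at a compound annual growth rate,
    e.g. annual_growth=0.07 for 7 percent per annum."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time_years(0.07), 1))  # ~10.2 years at 7 percent
print(round(doubling_time_years(0.10), 1))  # ~7.3 years at 10 percent
```

Running the same function against your organization's own growth rate gives a first-order horizon for when today's switching, server and storage capacity will need to double.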
Data center managers are realizing that they must move on from the traditional IT delivery model to keep up with the pace of change within their organizations. Running behind the curve, routinely building bespoke, piecemeal solutions on 90-day deployment cycles, is no longer seen as good enough. The new breed of data center manager anticipates business requirements and builds skills in mapping those needs to service-level agreement frameworks and management delivery options. IT staff can no longer compete with external providers if they remain too specialized; it is better to be a generalist who can manage the delivery of services using modular, preconfigured data center units deployed ahead of business demand rather than to meet it.
About the Author
David Palmer-Stevens is Systems Integrator Manager EMEA at Panduit.