Today, many organizations are looking for ways to do more with less, reducing IT budgets or curtailing the incidental costs associated with data center expansions. In a rapidly changing market environment, data center managers must focus on creating efficient operating environments to extend the life of existing data centers. Efficiency in data centers can be attained in numerous ways, including increasing compute densities, creating cold aisle containment systems and making more effective use of outside air, but the key component over time is an easily understood metric that gauges just how efficient the data center is and how much improvement has been achieved.
Power usage effectiveness, or simply “PUE,” is one of the simplest and most effective metrics for measuring data center energy efficiency. It is calculated by dividing the total power consumed by the data center facility by the power consumed by the IT equipment. The resulting ratio gives the effective power overhead for a unit of IT load. For example, a PUE value of 2.0 means that for every watt used to power IT equipment, an additional watt is required to deliver that power and keep the equipment cool. Increasing pressure is being exerted on data center managers to take measures to reduce PUE.
The PUE metric was introduced by The Green Grid, an association of IT professionals focused on increasing the energy efficiency of data centers. To effectively manage and monitor energy efficiency in the data center, metrics that measure the impact of changes are essential. The Green Grid introduced two primary metrics, PUE and DCE (data center efficiency); the latter was subsequently renamed DCiE (data center infrastructure efficiency). Both metrics measure the same two parameters: the total power to the data center and the IT equipment power.
A PUE value of 1 represents the optimal level of data center efficiency. In practical terms, a PUE value of 1 means that all power going into the data center is being used to power IT equipment. Anything above 1 means there is data center overhead required to support the IT load. Data center infrastructure efficiency (DCiE) is the reciprocal of PUE, calculated as a percentage by dividing the total power of the IT equipment by the total power into the data center and multiplying by 100%. A PUE value of 3.0 equates to a DCiE value of 33%, meaning that the IT equipment consumes 33% of the facility’s power.
Let us take a look at the way power is consumed in a data center.
Figure 1. Power usage in the data center
In an ideal case, all the power entering the data center should be used to operate the IT load (servers, storage and network). If we consider that all the power entering the data center is consumed for operating it, then the resultant PUE should ideally be 1. Realistically, however, some of this power is diverted to support cooling, lighting and other support infrastructure. Some of the remaining power is consumed due to losses in the power system, and the rest then goes to service the IT load.
Let us take an example to see how PUE is calculated. Consider that the power entering the data center (measured at the utility meter) is 100 kW and the power consumed by the IT load (measured at the output of the UPS) is 50 kW. PUE is then calculated as follows:

PUE = Total facility power ÷ IT equipment power = 100 kW ÷ 50 kW = 2.0
A PUE value of 2.0 is typical for a data center. What does this mean? For every watt consumed by the IT equipment, a second watt is consumed as overhead. It is important to note that since we are paying for every watt of power entering the data center, every watt of overhead represents an additional cost. Reducing this overhead will reduce the overall operating costs for the data center.
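The two metrics discussed so far can be sketched as a pair of small functions; the numbers below reuse the worked example (100 kW at the utility meter, 50 kW at the UPS output):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideally 1.0)."""
    return total_facility_kw / it_load_kw

def dcie_percent(total_facility_kw: float, it_load_kw: float) -> float:
    """DCiE is the reciprocal of PUE, expressed as a percentage."""
    return it_load_kw / total_facility_kw * 100.0

# The example above: 100 kW entering the facility, 50 kW reaching the IT load.
print(pue(100, 50))           # 2.0
print(dcie_percent(100, 50))  # 50.0
```

Note that a facility drawing 150 kW for the same 50 kW IT load would score a PUE of 3.0 and a DCiE of roughly 33%, matching the earlier illustration.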
The two ways in which we can bring about a change and improve data center energy efficiency are:
- Reducing the power going to the support infrastructure
- Reducing losses in the power system
This way, more of the power entering the data center makes it to the IT load, improving data center energy efficiency and reducing the PUE.
Are There Drawbacks to Using PUE as a Measurement of Data Center Efficiency?
Data center managers are under immense pressure to reduce costs and match the reported PUE with that of other companies. Unfortunately, this is not always the right approach and can have a negative impact. If data center managers focus only on reducing PUE, they may inadvertently use more energy and increase data center costs.
For example, consider a captive data center with an input power of 100 kW, 50 kW of which is being used to power IT equipment. As previously illustrated, this would give us an initial PUE value of 2.0.
Suppose the organization now decides to virtualize some servers. In fact, it is so successful with virtualization that it is able to reduce the power to the IT equipment by 25 kW and the overall power to the data center by the same amount. With the same compute capacity, the PUE may go up as data center utilization goes down, but the change will still yield greater savings on the overall power cost.
So PUE should not be the only focus for saving power.
Example of Virtualized/Unvirtualized Data Centers
Here’s an example using power-pricing data from Maharashtra state in India.
Before virtualization (PUE = 2.0):
Annual energy utilization = 100 kW × 8,760 hrs/yr = 876,000 kWh
Annual electricity cost = 876,000 kWh × Rs. 3.10/kWh* = Rs. 2,715,600

After virtualization, assuming the PUE rises to 2.1 due to reduced capacity utilization:
Annual energy utilization = (25 kW × 2.1 = 52.5 kW) × 8,760 hrs/yr = 459,900 kWh
Annual electricity cost = 459,900 kWh × Rs. 3.10/kWh* = Rs. 1,425,690

*Base tariff for HT I – Industries – Mahadiscom; commercial consumers pay more than twice these rates.
The above example shows substantial savings in spite of the increased PUE, demonstrating that IT load management can deliver better results than PUE optimization alone.
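The before-and-after arithmetic can be reproduced with a short script; the tariff and hours-per-year figures are the ones used in the example above:

```python
TARIFF_RS_PER_KWH = 3.10   # Mahadiscom HT I base tariff used in the example
HOURS_PER_YEAR = 8760

def annual_cost_rs(it_load_kw: float, pue: float) -> float:
    """Annual electricity cost: total facility kW x hours x tariff."""
    total_facility_kw = it_load_kw * pue
    return total_facility_kw * HOURS_PER_YEAR * TARIFF_RS_PER_KWH

before = annual_cost_rs(50, 2.0)   # 100 kW total  -> Rs. 2,715,600/yr
after = annual_cost_rs(25, 2.1)    # 52.5 kW total -> Rs. 1,425,690/yr
print(before, after, before - after)
```

Despite the worse PUE (2.1 versus 2.0), the annual bill drops by nearly Rs. 1.29 million because the absolute power draw fell by almost half.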
Considering that both data centers (both before and after virtualization) are able to perform the same amount of work, we can see from the above calculation that the virtualized data center is noticeably more energy efficient. In fact, the virtualized data center can be made even more energy efficient if the support infrastructure is now reduced to match the reduced IT load.
PUE becomes a meaningless number if we do not know how to use it to measure the outcome of changes in the data center. Knowing that virtualization will eventually increase the PUE of the data center, should we avoid it? No. In fact, when we examine the PUE of our data center over a period of time, we should also take into account when the virtualization actually took place. We must track any changes that may have taken place in the IT infrastructure or IT load in addition to tracking our PUE, so that we are able to correlate the changes to the PUE value.
There are many other factors that may affect PUE. Redundancy, for example, will increase PUE; there will always be tradeoffs between availability and energy efficiency. Data center equipment—from cooling units to UPSs to server power supplies—runs more efficiently when it is heavily loaded.
The bottom line is that PUE, while an important piece of the energy efficiency puzzle, is just that—one piece of the energy efficiency puzzle. PUE constitutes only one component of a comprehensive energy management program, which must consider both sides of the coin: IT and facilities.
What Else Needs to Be Measured Along With PUE?
PUE is best used for tracking the impact of changes made to the data center infrastructure. Let us revisit Figure 1.
Although it is important for an organization to reduce losses in the power system and the power used for the support infrastructure, it is also apparent that the bulk of the power consumption in the data center goes to the IT load itself. If the organization can reduce the IT load, it will reduce the overall power required for the data center.
As a matter of fact, reducing the IT load will have a compounding effect, as it will also reduce the losses in the power system and the power required for the support infrastructure. This can be termed a cascading effect. Let us see how this works.
For example, if we assume that one watt of power can be saved at the IT load, it will reduce losses in the server power supply (AC to DC conversion), reduce losses in the power distribution (PDU transformers, losses in the wiring itself), reduce power losses in the UPS, reduce the amount of cooling required and, finally, reduce power losses in the building transformer and switchgear. The end result of the cascade effect is that saving one watt at the IT load may actually result in two or more watts of overall energy savings.
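The cascade effect can be made concrete with illustrative numbers. The stage efficiencies below are assumptions chosen for the sketch (real values vary widely by facility), but they show how per-stage losses compound multiplicatively:

```python
# Assumed efficiencies for each stage upstream of the IT load (illustrative only).
SERVER_PSU_EFFICIENCY = 0.90   # AC-to-DC conversion at the server
PDU_EFFICIENCY = 0.97          # PDU transformers and wiring
UPS_EFFICIENCY = 0.92          # UPS conversion losses
TRANSFORMER_EFFICIENCY = 0.98  # building transformer and switchgear
COOLING_OVERHEAD = 0.6         # assumed watts of cooling per watt of heat

def facility_watts_per_it_watt() -> float:
    """Facility watts drawn for each watt delivered to the IT load."""
    delivered = 1.0
    # Power passes through every stage, so the losses multiply.
    for eff in (SERVER_PSU_EFFICIENCY, PDU_EFFICIENCY,
                UPS_EFFICIENCY, TRANSFORMER_EFFICIENCY):
        delivered /= eff
    # Nearly all of that power ends up as heat that must be cooled.
    return delivered * (1.0 + COOLING_OVERHEAD)

print(facility_watts_per_it_watt())  # roughly 2.0
```

With these assumed figures, saving one watt at the IT load saves about two watts at the utility meter, consistent with the claim above.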
Powering the IT load forms a major chunk of the overall electricity cost in a data center; hence, for any energy efficiency initiative to be successful, an organization should first look at reducing the IT load.
Reducing the IT Load
There are a number of ways to reduce the IT load in the data center. These include the following:
- Decommission or repurpose servers that are no longer in use
- Power down servers when not in use
- Enable power management
- Replace inefficient servers
- Virtualize or consolidate servers
Decommission or Repurpose Servers
Data center managers always struggle with how to identify unused or lightly used “ghost” servers. One way of identifying a ghost server is to use CPU utilization as a measure of whether or not a server is being actively used. But this may not hold true every time. A server may appear to be busy when it is actually only performing secondary or tertiary processing not related directly to the primary services of the server.
For example, the primary service of an e-mail server is to provide e-mail. In addition, this server may also provide monitoring services, backup services, antivirus services and so on, but those are secondary or tertiary services. If the e-mail server stops being accessed for e-mail, the monitoring, backup and antivirus services may no longer be necessary, but the server may still continue to provide them. So from a CPU-utilization standpoint, the unused server may appear busy even though it is only performing secondary or tertiary processing. Hence, CPU utilization alone is an ineffective measure.
Another way of determining whether a server is actively being used is server compute efficiency (ScE). The ScE metric measures CPU usage, disk and network I/O, incoming session-based connection requests, and interactive logins to determine if the server is providing primary services. The ScE metric can provide data center managers with the ability to determine which servers are being actively used for primary services relative to ghost servers, which may be good candidates for virtualization or consolidation.
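The idea behind an ScE-style measure can be sketched as follows. The field names and scoring rule here are illustrative assumptions, not the actual definition of the ScE metric; the point is that a server only counts as "working" in windows where it shows primary-service signals (inbound sessions, interactive logins), not mere CPU busyness:

```python
from dataclasses import dataclass

@dataclass
class ServerSample:
    # One observation window of activity signals (fields are assumptions
    # for illustration; the real ScE metric defines its own inputs).
    cpu_busy: bool
    disk_or_net_io: bool
    inbound_sessions: bool      # session-based connection requests
    interactive_logins: bool

def sce_percent(samples: list) -> float:
    """Percentage of observation windows showing primary-service work.

    A window counts only if inbound sessions or interactive logins occur
    alongside activity; CPU busyness alone (e.g. backup or antivirus
    scans) does not count.
    """
    if not samples:
        return 0.0
    primary = sum(1 for s in samples
                  if (s.inbound_sessions or s.interactive_logins)
                  and (s.cpu_busy or s.disk_or_net_io))
    return primary / len(samples) * 100.0

# A server that is busy but never serves requests scores 0: a ghost candidate.
ghost = [ServerSample(True, True, False, False)] * 10
print(sce_percent(ghost))  # 0.0
```

A server scoring near zero over a long observation period is a strong candidate for decommissioning, repurposing or consolidation.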
Power Down Servers When Not in Use
Although the majority of servers in data centers may be utilized around the clock, some servers may only be used during certain parts of the day or week. These servers should be turned off when not in use to save power.
Enable Power Management
To reduce server power usage, data center managers should employ demand-based switching (DBS), which can attain significant savings in the data center.
Replace Inefficient Servers
Once a server is purchased, its price is considered a “sunk cost.” What is often not taken into account are the ongoing operational costs of running the server, including power, cooling, software licensing and so on. A new multi-core server may replace as many as 15 single-core servers, saving as much as 93% of the power usage. In addition to the power savings, software licensing and other maintenance costs can also be considerably reduced. Additional savings include a reduction in data center cooling costs and the potential to reclaim valuable rack space.
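The 15-to-1 consolidation claim is easy to sanity-check. The per-server wattages below are assumptions chosen to show how a figure near 93% can arise, not measurements from the text:

```python
OLD_SERVER_W = 300   # assumed draw of one legacy single-core server
OLD_COUNT = 15       # servers replaced, per the claim above
NEW_SERVER_W = 315   # assumed draw of one replacement multi-core server

old_total_w = OLD_SERVER_W * OLD_COUNT              # 4500 W before
savings_pct = (old_total_w - NEW_SERVER_W) / old_total_w * 100
print(round(savings_pct))  # 93
```

Even if the replacement server draws slightly more than any single legacy machine, retiring fifteen of them leaves only a small fraction of the original load.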
Virtualize or Consolidate Servers

There are various compelling reasons for virtualizing servers. From a business continuity viewpoint, virtual machines can be isolated from the physical system to augment system availability in the event of failures. In addition, parallel virtual environments enable an easier transition to a backup facility. From an energy efficiency viewpoint, virtualization provides numerous opportunities for energy savings.
Virtual machines provide granular control over workloads and can be moved among active servers as demand changes. Overall, virtualization can increase server CPU usage by 40–60%. As CPU usage increases, the energy efficiency of the server power supply also increases.
The power usage effectiveness (PUE) metric provides valuable information for measuring data center energy efficiency. But it represents only one component in a comprehensive energy management program. Although data center managers are under tremendous pressure to reduce PUE, doing so without a full understanding of power usage in the data center might actually be detrimental.
Data center managers must consider other metrics such as energy usage at the IT device level and server compute efficiency to effect sustained reductions in energy usage.