Virtualization, big data and the cloud are major drivers changing the rules for flexibility and elasticity in data centers. Data center growth is escalating, costs are rising, and there is massive demand for data that must move quickly and efficiently.
Demand driven by ubiquitous access to content from consumers and businesses is stretching resources. Performance of systems in the data center is a top priority, and capacity, cost and power are major considerations. Server memory plays a vital role in meeting these goals, yet how to choose the right memory configuration to achieve the desired results is not well understood. Memory is often overlooked as a way to improve performance; the immediate reaction is to add more servers rather than first analyzing how best to use the underutilized servers already in place.
Balancing low power with capacity and performance requires an understanding of the role that server memory plays. Memory has evolved into one of the most important components of the data center. Server processors (often underutilized) can process multiple threads across many cores to maximize the utilization of a single server, but without sufficient or optimally configured memory, performance degrades and servers never reach their full potential. Rather than automatically adding more servers to improve performance, consider that additional memory will often address the issue while reducing complexity and costs.
The following are considerations and recommendations to identify how additional server memory can help data centers efficiently improve overall performance and how to ensure new memory eases resource allocation without business disruption.
- First, identify the role and goals for a given server or servers in the data center. Prioritize the importance of better performance and speed, reduced power consumption, or increased capacity. While not mutually exclusive, the prioritization of these factors will dictate the optimal memory choices. For example, if lower power is the goal, then low-voltage memory is the right choice. Additionally, different module types, such as quad-rank or dual-rank designs, can save power over single-rank designs. Minimizing memory power consumption can save 5 to 10 percent or more of a server’s total power draw, a figure that multiplies across a data center.
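To put that 5 to 10 percent figure in context, a back-of-the-envelope calculation can be run for your own fleet. In the sketch below, every number (server wattage, fleet size, electricity price) is an illustrative assumption, not a measurement:

```python
# Rough estimate of fleet-wide savings from lower-power memory.
# All figures below are illustrative assumptions, not measured values.

SERVER_POWER_W = 400        # assumed average draw per server
MEMORY_SAVINGS_PCT = 0.07   # assumed 7% of total draw saved (middle of 5-10%)
SERVER_COUNT = 1_000        # assumed fleet size
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.12         # assumed electricity price in USD

saved_watts_per_server = SERVER_POWER_W * MEMORY_SAVINGS_PCT
fleet_saved_kwh = saved_watts_per_server * SERVER_COUNT * HOURS_PER_YEAR / 1000
annual_savings_usd = fleet_saved_kwh * COST_PER_KWH

print(f"Per-server savings: {saved_watts_per_server:.0f} W")
print(f"Fleet savings: {fleet_saved_kwh:,.0f} kWh/year (${annual_savings_usd:,.0f})")
```

Even a modest per-server wattage reduction compounds into a meaningful annual energy bill difference once multiplied by fleet size and hours of operation.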
- Check your processor’s capability to determine the maximum memory speed it can support. Both Intel Xeon and AMD Opteron processors dictate how fast the memory can operate. Mixing and matching memory speeds is never a good idea; the memory controller will typically clock every module down to the speed of the slowest one. Additionally, check the memory support rules for the server platform you are configuring to achieve maximum performance: some configurations maximize total installed capacity, but force the memory to run at a lower speed.
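On Linux, `dmidecode -t memory` reports both a module's rated speed and the speed the platform actually configured. The sketch below parses a hypothetical excerpt of that output (the sample text and speeds are invented for illustration; real output varies by system) to flag modules that have been clocked down:

```python
import re

# Hypothetical excerpt of `dmidecode -t memory` output; real systems vary.
# "Speed" is the module's rated speed, "Configured Memory Speed" is what the
# platform actually runs it at.
SAMPLE = """\
Memory Device
\tSpeed: 3200 MT/s
\tConfigured Memory Speed: 2933 MT/s
Memory Device
\tSpeed: 3200 MT/s
\tConfigured Memory Speed: 3200 MT/s
"""

def downclocked_modules(dmi_text: str) -> list:
    """Return (rated, configured) pairs for modules running below rated speed."""
    rated = [int(m) for m in re.findall(r"^\tSpeed: (\d+)", dmi_text, re.M)]
    conf = [int(m) for m in
            re.findall(r"^\tConfigured Memory Speed: (\d+)", dmi_text, re.M)]
    return [(r, c) for r, c in zip(rated, conf) if c < r]

print(downclocked_modules(SAMPLE))  # flags the module running at 2933 MT/s
```

A non-empty result is a hint that a population rule or processor limit is holding the memory below its rated speed.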
- Although memory is considered a commodity today, with industry standards in place, that doesn’t mean every memory module will be supported by every server. There can be compatibility issues between components on the memory module and your server. Fundamentally, there are no differences between branded servers and white-box servers; there may, however, be subtle differences in motherboard design or system height that require a specific memory type. An IBM server, for example, may have height restrictions and require memory with a very low-profile (VLP) design. Or an HP ProLiant server might have compatibility issues with specific register components or DRAM brands. It is very important to select memory that is guaranteed to be compatible with the specific server system.
- Ensure that new memory is installed correctly and follows the server’s channel architecture guidelines. For example, IT has become accustomed to installing memory in pairs. When triple- and quad-channel servers were introduced, many assumed they could continue to install in pairs, not realizing that incorrectly populating the memory channels compromises potential performance. Memory problems typically manifest as system lockups, blue screens or error-correction logs indicating bad chips or incompatibility. Memory performance, however, is not so easy to diagnose. To fully understand whether the memory is performing as desired, IT must benchmark or closely monitor the reported speed and bandwidth of the memory to determine if the full potential of the memory subsystem is being reached.
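A crude way to sanity-check memory throughput is to time a large buffer copy. The sketch below is only a rough probe, not a substitute for proper tools such as the STREAM benchmark or vendor utilities, and the buffer size and iteration count are arbitrary assumptions:

```python
import time

def copy_bandwidth_gbs(size_bytes: int = 128 * 1024 * 1024,
                       iterations: int = 3) -> float:
    """Rough memory-bandwidth probe: time repeated copies of a large buffer."""
    src = bytearray(size_bytes)
    start = time.perf_counter()
    for _ in range(iterations):
        dst = bytes(src)  # forces a full read and write of the buffer
    elapsed = time.perf_counter() - start
    # Each copy reads and writes size_bytes, so 2 * size moved per iteration.
    return (2 * size_bytes * iterations) / elapsed / 1e9

print(f"Approximate copy bandwidth: {copy_bandwidth_gbs():.1f} GB/s")
```

Comparing this number before and after a memory change (or against a correctly populated reference server) can reveal whether a channel-population mistake is quietly costing bandwidth.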
- Analyst firm Aberdeen reports that the share of applications running in virtualized environments has passed 50 percent. In these environments each server runs multiple applications, and server utilization has typically climbed from around 10 percent to 80 percent, making memory support even more critical. Overcommitted memory in a VMware virtualized environment causes VM latency: when ESX memory ballooning and transparent page sharing (TPS) fail to recover enough memory, virtual machines may resort to swapping to disk to make up the shortfall. Solid-state drives (SSDs) can significantly increase throughput, specifically for applications that require high input/output operations per second (IOPS). Storage, however, is no substitute for the enormous bandwidth and performance that physical memory provides.
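Spotting overcommit starts with simple arithmetic: total memory granted to VMs versus physical RAM in the host. The VM sizes and host capacity below are hypothetical examples, not recommendations:

```python
# Flag memory overcommit on a host: memory granted to VMs vs. physical RAM.
# The VM allocations and host capacity below are hypothetical examples.
vm_memory_gb = [32, 32, 64, 48, 48]  # assumed per-VM memory allocations
host_physical_gb = 192               # assumed installed server memory

granted = sum(vm_memory_gb)
overcommit_ratio = granted / host_physical_gb

print(f"Granted {granted} GB on a {host_physical_gb} GB host "
      f"(overcommit ratio {overcommit_ratio:.2f})")
if overcommit_ratio > 1.0:
    print("Overcommitted: expect ballooning, TPS pressure, or swapping under load.")
```

A ratio above 1.0 does not guarantee trouble, since ballooning and page sharing reclaim memory, but it marks the hosts where adding physical memory is most likely to remove latency.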
- Choosing the cheapest solution may not be the wisest choice to meet long-term goals. For example, data center managers, when evaluating new memory, may see that 8GB DIMMs are fairly inexpensive, and purchasing 16 of them for the server can achieve their capacity goals of 128GB. The other option would be to choose eight 16GB DIMMs instead, which may be more costly at the outset but will provide savings over the long term on energy consumption (fewer DIMMs drawing less power) and provide headroom (open sockets) to expand memory in the future.
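The 8GB-versus-16GB trade-off above can be laid out side by side. In this sketch the prices, per-DIMM wattages and socket count are illustrative assumptions, not quotes:

```python
# Compare two ways to reach 128 GB in a 16-socket server.
# Prices and per-DIMM power draw are illustrative assumptions, not quotes.
def config_summary(dimm_gb, count, price_each, watts_each, sockets=16):
    return {
        "capacity_gb": dimm_gb * count,
        "cost_usd": price_each * count,
        "power_w": watts_each * count,
        "free_sockets": sockets - count,
    }

sixteen_x8 = config_summary(8, 16, price_each=40, watts_each=4)
eight_x16 = config_summary(16, 8, price_each=95, watts_each=5)

print("16 x 8GB: ", sixteen_x8)
print(" 8 x 16GB:", eight_x16)
```

Both configurations hit the 128GB target; the higher-density option costs more up front but draws less total power and leaves half the sockets open for future expansion, which is exactly the long-term trade-off described above.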
In summary, memory is a core element of better performance, higher utilization and power management in today’s demanding data centers. Understanding the impact of your server memory choices will make a big difference in whether those server goals are achieved.
About the Author
Mike Mohney is Sr. Technology Manager for Kingston Technology Company, Inc.