Justifying New Projects in a Changing Data Center Environment

October 3, 2012

IT organizations are desperate to make their current data centers more flexible and higher performing before resigning themselves to building out new facilities.

The capacity (power, space, compute and network) and utilization/load profiles allocated for a facility when it was designed are almost certainly out of date within its first two years of operation. As a result, data center professionals are truly challenged to stay within their capacity constraints when estimating the impact of new projects. Given the rate of change in enterprise IT, this represents a significant problem, as it directly affects a business’s ability to compete effectively through strategic technology adoption and deployment.

New technology solutions such as the cloud, mobility and “big data” are changing the playing field for many companies—and, in turn, forcing them to modernize and evolve their infrastructures. These next-wave enterprise solution technologies are vastly more resource intensive than the solutions for which their data centers were originally planned. Moreover, although demand for these services is increasing exponentially and additional pressure is being placed on the infrastructure to deliver these services, IT budgets remain static and the data center itself understaffed.

The critical challenge facing IT is to deliver new, state-of-the-art technology solutions that position an enterprise for growth within the rigid, inflexible environment of the data center. How can IT effectively modernize the facility and increase its overall capacity without affecting availability and without new capital expenditures?

Growth in the data center

Virtualization Is Key, but Not the Key

Highly efficient data centers have implemented significant virtualization initiatives across their server, storage and network environments. According to a July survey of 5,000 data center professionals, 85 percent of respondents ranked virtualization as their number-one overall data center initiative, regardless of data center size. Virtualization is now considered the table-stakes technology for increasing overall data center capability and capacity and for making facilities easier to manage.

In addition, according to the same survey:

  • In highly efficient data centers, 48 percent of all servers are virtualized, compared with 27 percent in less advanced environments.
  • 93 percent of these facilities use virtualized storage, versus 21 percent of less advanced environments.
  • Heavily virtualized data centers also achieve significantly higher staff productivity, managing 8.2 virtual machines (VMs) per server compared with 4.5 VMs per server in less virtualized data centers.

Through asset consolidation and more-effective capacity utilization, virtualization has proven to be an extremely effective method for increasing data center efficiency. Well-executed virtualization enables organizations to better use existing hardware, while supporting business growth and reducing overall server, storage, space, power and cooling demands. But virtualization alone is only part of the solution.
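
To make the consolidation math concrete, here is a minimal back-of-the-envelope sketch in Python. The 8.2 and 4.5 VMs-per-server densities are the survey figures cited above; the 1,000-workload count and the 400 W per-server draw are purely illustrative assumptions.

```python
# Back-of-the-envelope sketch: how VM density translates into physical servers
# and power. The 8.2 and 4.5 VMs-per-server figures come from the survey cited
# above; the workload count and per-server wattage are illustrative assumptions.
import math

WORKLOADS = 1000          # assumed number of VMs to host (hypothetical)
WATTS_PER_SERVER = 400    # assumed average draw per physical server (hypothetical)

def servers_needed(vms: int, vms_per_server: float) -> int:
    """Physical servers required at a given consolidation density."""
    return math.ceil(vms / vms_per_server)

for label, density in [("heavily virtualized", 8.2), ("less virtualized", 4.5)]:
    servers = servers_needed(WORKLOADS, density)
    print(f"{label:22s}: {servers:4d} servers, ~{servers * WATTS_PER_SERVER / 1000:.1f} kW")
```

At these assumed numbers, the denser environment needs roughly 100 fewer physical servers, which is exactly the kind of power, space and cooling headroom the article describes virtualization reclaiming.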

Private Cloud Is the Path for Data Center Growth

Survey respondents recognized the burgeoning importance of private clouds, which overtook virtualization as the top strategy for supporting data center growth. More than 75 percent of all participants identified private cloud as their primary growth strategy, with a plurality of responses coming from organizations running the largest data centers (250 to 500 racks).

Cloud infrastructure appears to have become the ideal architecture for deploying data center applications; Gartner estimates that cloud-services revenue will reach $148 billion by 2014.

Cloud architectures appear to solve three fundamental challenges facing today’s data centers:

  • Delivering elastic services to accommodate a wide range of demand fluctuations
  • Automatically provisioning and maintaining a common pool of compute, network and storage resources rather than the current static provisioning model
  • Extending limited capacity headroom

Cloud architectures for the data center promise a dramatically simpler and more agile IT infrastructure, one that reduces both the complexity and the cost of deploying and managing power, applications, servers, storage and networking.

The broader capacity savings in the form of power efficiency are also significant. The Open Data Center Alliance predicts that the overall power savings from cloud adoption could reach approximately 45 GW by 2015, enough to power as many as 15 million homes.

Because a cloud infrastructure is inherently elastic, allocating capacity and resources to meet unpredictable, fluctuating demand, data centers can achieve significant performance and cost optimization while reducing the risk of service interruptions. More important, cloud deployments give IT greater agility to embrace new strategic initiatives while maintaining existing business-critical systems.
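
The contrast between static and pooled provisioning can be sketched in a few lines of Python. The ResourcePool class and all of the core counts below are illustrative assumptions, not any particular platform's API; production private-cloud schedulers are far more elaborate, but the principle of drawing on shared headroom and returning it when demand drops is the same.

```python
# Minimal sketch of the "common pool" idea behind cloud provisioning, contrasted
# with static per-application allocation. All numbers and the ResourcePool class
# are illustrative assumptions, not a real scheduler.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    total_cores: int
    allocated: int = 0

    def provision(self, cores: int) -> bool:
        """Grant cores only if headroom remains; release() returns them later."""
        if self.allocated + cores > self.total_cores:
            return False
        self.allocated += cores
        return True

    def release(self, cores: int) -> None:
        self.allocated = max(0, self.allocated - cores)

# Static model: each app is sized for its peak, so capacity sits reserved even when idle.
peaks = {"web": 64, "analytics": 128, "batch": 96}
static_cores = sum(peaks.values())  # 288 cores held permanently

# Elastic model: apps draw from one shared pool sized for typical combined load
# (assumed here to be well below the sum of peaks) and return capacity afterward.
pool = ResourcePool(total_cores=160)
pool.provision(64)    # web scales up for a traffic spike
pool.provision(48)    # analytics runs a midday job
pool.release(64)      # web scales back down overnight
print(f"static reservation: {static_cores} cores, shared pool in use: {pool.allocated}/160")
```

In the static model, 288 cores stay reserved for peaks that rarely coincide; in the pooled model, a smaller shared pool absorbs the same spikes because capacity is handed back when demand falls.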

Decision Time

Whether the decision involves cloud or virtualized deployments, data center executives require deep visibility into their existing infrastructure and assets to assess the impact of new systems and solutions effectively.

Until just three years ago, IT personnel had to calculate performance and consumption metrics manually, a time-consuming, personnel-intensive and error-prone process. Fortunately, new technologies are now available that help data center executives easily construct "what if" scenarios with smart visualizations, showing the impact of each decision on resource consumption before implementation.

These tools highlight each scenario’s impact not only on the wallet, but also on data center capacity. Whether it’s adding or refreshing hardware, consolidating assets, increasing server memory, or moving applications to VMs or to the cloud, executives can create multiple scenarios that model different loads, applications and even system configurations for any of these projects.
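
A rough sketch of what such scenario modeling looks like under the hood follows. The Scenario structure, the asset figures and the headroom limits are illustrative assumptions rather than any particular vendor's data model; the point is simply that each candidate project is scored against remaining power and space headroom, and its cost, before anything is purchased or racked.

```python
# Hedged sketch of "what if" capacity modeling: each scenario is checked against
# the facility's remaining power and space headroom. All figures are assumed.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    added_kw: float        # measured (not nameplate) power the change would add
    added_racks: float
    capex: float

POWER_HEADROOM_KW = 120.0   # assumed remaining UPS capacity
RACK_HEADROOM = 14.0        # assumed remaining rack positions

scenarios = [
    Scenario("refresh 40 legacy servers", added_kw=-18.0, added_racks=-2.0, capex=260_000),
    Scenario("add new blade enclosure",  added_kw=35.0,  added_racks=1.0,  capex=180_000),
    Scenario("migrate 300 VMs to cloud", added_kw=-42.0, added_racks=-6.0, capex=40_000),
]

for s in scenarios:
    fits = s.added_kw <= POWER_HEADROOM_KW and s.added_racks <= RACK_HEADROOM
    print(f"{s.name:28s} power {s.added_kw:+6.1f} kW  racks {s.added_racks:+4.1f}  "
          f"capex ${s.capex:>9,.0f}  fits headroom: {fits}")
```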

One of the critical differentiators separating many of the vendors offering these tools is the accuracy of the data collected from the facility’s assets. It’s important to ensure that real asset data is being used to generate the metrics, not just nameplate values supplied by the vendors.
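
A tiny example shows why measured data matters; the nameplate rating and the metered samples below are assumed values.

```python
# Nameplate ratings overstate typical draw, so capacity models built on them
# strand usable headroom. The sample readings are illustrative assumptions.
nameplate_watts = 750                     # vendor faceplate rating for one server
samples_watts = [312, 298, 401, 355, 330] # assumed metered readings over a week

measured_avg = sum(samples_watts) / len(samples_watts)
overstatement = nameplate_watts / measured_avg
print(f"nameplate: {nameplate_watts} W, measured average: {measured_avg:.0f} W "
      f"({overstatement:.1f}x overstatement)")
```

Built on nameplate figures alone, a capacity model in this example would overstate actual draw by more than a factor of two and strand headroom that real workloads could use.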

By tracking historical data on all assets, analyzing their performance and using actual system data, what-if planning can create an accurate financial picture for each scenario. So whether the next project is to move physical servers to the cloud, add new hardware or deploy new applications on virtual machines, data center executives can use realistic scenarios and real asset data to substantiate the business case for their decision.

About the Author

David Appelbaum is vice president of marketing at Sentilla Corporation (www.sentilla.com), headquartered in Redwood City, Calif. Before joining Sentilla, David worked in software-marketing roles at Borland, Oracle, Autonomy, Salesforce.com, BigFix, and Act-On. He is available for questions and comments at david@sentilla.com.

Leading article photo by clayirving

6 Comments

  1. Tom McKeown, October 11, 2012 at 8:15 pm

    Virtualization has been a win, and has plenty of potential for more gains in the future. Virtualized storage is still deployed in only "21 percent of less advanced environments." Perhaps as important as the tangible, immediate energy savings, complementary human efficiency also grows: the survey found that more-experienced datacenters run 8.2 virtual machines (VMs) per physical server, compared with a ratio of 4.5 "for less virtualized data centers."
    Virtualization structured as “private cloud” is the strong new trend the survey unearthed: “more than 75 percent of all participants highlighted private cloud as their primary data center growth strategy”. The article puts it well:

    Cloud architectures appear to solve three fundamental challenges facing today's data centers:
    - Delivering elastic services to accommodate a wide range of demand fluctuations
    - Automatically provisioning and maintaining a common pool of compute, network and storage resources rather than the current static provisioning model
    - Extending limited capacity headroom

    All these results bring us back to the persistent themes: how do we measure performance in meaningful business terms? What decisions can we make, or projects can we undertake, to improve operations? Over the next couple of weeks, we'll look at a few of the best ideas in platform-as-a-service (PaaS) and other cloud approaches, end-user-oriented application performance management (APM), and the coordination of technical opportunities and business goals. http://bit.ly/QfkM9G
