Understanding the performance of your IT infrastructure is a difficult task. In fact, IT professionals often feel forced to wear detective hats just to understand the most basic performance issues in their systems. Like detectives, IT teams rarely have a holistic, full-picture view of data center performance problems and must instead piece together disparate clues that help tell the story. Each level of the infrastructure stack provides its own set of clues, and without a view of the overall data center environment, IT sleuths must infer where those clues are leading as they go. As data center infrastructures grow more complex, the clues become harder to read, and reliable performance becomes harder to achieve.
One factor that contributes to data center complexity is that although the infrastructures and equipment become dated, they don’t disappear. Enterprises with lean budgets and limited staff continue to operate around these legacy systems, layering on new equipment and attempting to integrate the two into a cohesive system. Combine this situation with higher-than-ever customer expectations for availability and performance, and it’s clear why IT professionals are challenged when seeking holistic visibility. To put it simply, a gap exists between the performance information that is necessary and what is available, and this gap causes frustration for the affected teams.
Clues From the Past: What Your Legacy Equipment Is Saying
In the 1970s and early 1980s, mainframes were the earliest “clouds.” Only the largest enterprises adopted these systems; the cost and expertise required to implement them put the technology out of reach for smaller companies.
Because the mainframe was a proprietary “closed system,” however, companies with the expertise had all the data required to ensure the highest levels of performance and availability, making the investment worthwhile for those who could afford it. The next evolution of technology infrastructure was the advent of the client/server model. These were the classic “open systems,” however, and they lacked the fine-grained management capabilities of the mainframe. This change ushered in enterprise systems management (ESM) solutions, which offered some semblance of capacity and configuration management but lacked visibility into overall system performance.
Fast-forward to today, when virtualization is the norm in data centers everywhere. The ESM and network performance monitoring (NPM) technologies of previous decades have become largely irrelevant, as each modern layer of the stack comes from a different vendor and ships with its own built-in, layer-specific management tools, creating a heterogeneous environment. Despite these built-in tools, the visibility gap has grown: the tools fail to communicate effectively with one another, and the heterogeneity discourages cooperation among system components.
How the Past Affects Today’s Data Center Performance
No matter what data center IT professionals find themselves monitoring, chances are the evolution of the IT application infrastructure has left performance management limited and challenging, with device-specific tools monitoring isolated parts and pieces of the overall infrastructure. These disparate, siloed systems often produce equally siloed, incommunicative IT teams, a problem that continues to widen the performance gap and requires a team of detectives to uncover and decipher the cause of performance issues. The challenge is only compounded by growing customer expectations and the continued investments businesses make in new technology to meet that demand. With IT staff numbers stagnating and budgets growing at a snail’s pace, data centers need a new way to close the gap between the information needed and the information available.
So, how can IT teams ensure that they see everything happening in their environments, from the storage array up the stack, and understand the performance of the entire system? Further, how can these teams get the insights they need in real time, so that performance issues are identified and mitigated early? This level of insight requires performance-management solutions that monitor the end-to-end system, including the legacy technology that sits in the data center alongside modern deployments.
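To make the idea concrete, here is a minimal sketch of what cross-layer monitoring might look like. The layer names, latency budgets, and the poll_latency_ms() collector stub are purely hypothetical illustrations, not any specific vendor’s API:

    # Toy sketch of end-to-end latency monitoring across the stack.
    # Layer names, budgets, and sample values are all illustrative.

    LATENCY_BUDGET_MS = {"storage": 5.0, "network": 2.0,
                         "hypervisor": 3.0, "application": 40.0}

    def poll_latency_ms(layer: str) -> float:
        """Stand-in for a real per-layer collector (agent, probe, or API)."""
        samples = {"storage": 4.1, "network": 1.2,
                   "hypervisor": 2.8, "application": 41.5}
        return samples[layer]

    def check_end_to_end() -> None:
        total = 0.0
        for layer, budget in LATENCY_BUDGET_MS.items():
            observed = poll_latency_ms(layer)
            total += observed
            if observed > budget:
                print(f"WARN: {layer} latency {observed:.1f} ms "
                      f"exceeds its {budget:.0f} ms budget")
        e2e_budget = sum(LATENCY_BUDGET_MS.values())
        status = "within" if total <= e2e_budget else "over"
        print(f"End-to-end latency: {total:.1f} ms, {status} the "
              f"{e2e_budget:.0f} ms budget")

    check_end_to_end()

The point is not the code itself but the vantage point: each layer is checked against its own budget and the sum against an end-to-end budget, so a breach can be spotted and localized early rather than reconstructed after the fact.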
Gaining Guaranteed Performance Through System-Wide Visibility
It’s no secret that organizations are moving more data than ever before, and the volume will only keep growing each year. This is the era of big data, and IT teams are only beginning to manage the influx and keep the sheer volume from degrading system performance. Enterprises that have handled data for a long time, particularly those that chose to augment legacy systems rather than replace them, are especially challenged as they try to process today’s data on yesterday’s systems.
In the past, companies relied on service-level agreements (SLAs) in the storage or server tier that spelled out a vendor’s commitment to performance at that layer of the stack. Like the legacy systems in the data center, this approach to the SLA is no longer adequate. Performance guarantees need to encompass the entire IT environment, because any one component of the infrastructure can undermine the business’s ability to deliver to its customers and can harm the company’s reputation. Per-layer guarantees also compound: the back-of-the-envelope calculation below shows how several strong layer-level SLAs can still add up to a weaker end-to-end commitment.
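A minimal sketch of that arithmetic, assuming four layers and an illustrative 99.9 percent availability figure that is not drawn from any particular vendor’s SLA:

    # Four independent layers, each with its own 99.9% availability SLA,
    # still compound into a weaker end-to-end availability figure.

    per_layer_availability = 0.999
    layers = 4  # e.g., storage, network, hypervisor, application
    hours_per_year = 24 * 365

    end_to_end = per_layer_availability ** layers

    print(f"End-to-end availability: {end_to_end:.2%}")  # ~99.60%
    print(f"Downtime allowed by one layer SLA: "
          f"{hours_per_year * (1 - per_layer_availability):.1f} h/yr")  # ~8.8 h
    print(f"Implied end-to-end downtime: "
          f"{hours_per_year * (1 - end_to_end):.1f} h/yr")  # ~35.0 h

Each layer can honor its own SLA while the system as a whole quietly delivers roughly four times the downtime any single guarantee implies, which is why the guarantee has to span the stack.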
The key to providing guaranteed performance starts with understanding why the system exists as it does today. IT teams aren’t working from a clean slate or building what they want from the ground up, but that doesn’t mean the existing infrastructure is a lost cause; this is where the inner detective comes into play, as teams piece together various IT components into a coherent whole. Second, teams need to keep shifting their focus toward the end user, which includes both the company’s employees and its customers. Delivering performance that meets the needs of these two constituencies is what makes a business successful. Third, teams should seek performance-monitoring solutions that provide a vendor-agnostic view of the IT application infrastructure. This approach lets the various infrastructure components, each provided by a different vendor, work in harmony under a holistic understanding of operations; often, this level of visibility is best supported by bringing in outside expertise. Finally, teams need to insist on SLAs that incorporate the entire infrastructure stack, not just one component. Each piece of the system is intricately connected, so isolating components under separate SLAs no longer makes sense.
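What a vendor-agnostic view means in practice is normalization: translating each tool’s vendor-specific output into one common record. The sketch below is purely illustrative; the two payload shapes and field names are invented, not real vendor formats.

    # Normalizing metrics from heterogeneous, vendor-specific tools
    # into one common record so a single view can span the stack.

    from dataclasses import dataclass

    @dataclass
    class Metric:
        layer: str   # storage, network, compute, application
        name: str    # e.g., "latency_ms"
        value: float

    def from_storage_tool(payload: dict) -> Metric:
        # Hypothetical array tool reporting latency in microseconds.
        return Metric("storage", "latency_ms", payload["lun_latency_us"] / 1000)

    def from_network_tool(payload: dict) -> Metric:
        # Hypothetical switch tool reporting round-trip time as "1.2ms".
        return Metric("network", "latency_ms", float(payload["rtt"].rstrip("ms")))

    # One normalized stream, regardless of which vendor produced the data.
    for metric in [from_storage_tool({"lun_latency_us": 4100}),
                   from_network_tool({"rtt": "1.2ms"})]:
        print(f"{metric.layer:>8}: {metric.name} = {metric.value}")

Once every layer speaks the same schema, the holistic questions raised above, and the stack-wide SLAs that follow, become answerable with data rather than detective work.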
Detectives in popular culture have fascinating jobs, cobbling together various clues to track down criminals and solve cases. While we may appreciate the appeal of detectives in literature or on television, IT professionals should be able to shed that component of their jobs and focus on the mission-critical aspects of their work. Performance management no longer has to be the stuff of detective novels.
About the Author
John Gentry is the vice president of marketing and alliances at Virtual Instruments. He has more than 18 years of experience in marketing, sales, and sales engineering, and he has established his expertise in the open systems and storage ecosystems. Before Virtual Instruments, Gentry served at QLogic as senior director of InfiniBand sales and solutions consulting, where he was responsible for all go-to-market activities related to the QLogic InfiniBand product portfolio.