Today, enterprises rely on applications for nearly all business-critical functions, including revenue-generating customer interactions and productivity-enhancing employee processes. Poor application performance can be extremely costly, and with so much riding on fast, reliable enterprise applications, IT teams must be able to find and fix performance problems expeditiously.
Network-centric performance monitoring (NPM) and traditional application performance monitoring (APM) are two techniques that share the same goal: maintaining and restoring application performance. But they work in very different ways. While vendors from both ends of the spectrum are attempting to expand their product portfolios’ reach, the bottom line is that each method on its own often fails to correlate data from the network with data from the applications running over it. That correlation is the key to proactively finding and fixing performance problems and to reducing mean time to repair (MTTR) when a problem does occur.
In this article, we’ll examine how and why an increasing number of enterprise applications are becoming mission-critical, creating demand for solutions that ensure strong application performance. We’ll explain how today’s siloed NPM and legacy APM approaches work, and why each one alone is insufficient. Finally, we’ll examine a modern approach that bridges the gap, correlating data from the network and applications to ensure fast, reliable enterprise applications.
Mission: Everything’s Critical
The number of applications considered mission-critical in today’s business environments is at an all-time high. Put simply, mission-critical applications are those deemed essential to the core function of an organization, where downtime means lost revenue and lost employee productivity. If any such application fails for any length of time, the impact on business viability is real.
In recent years, many organizations have expanded their definition of mission-critical to cover a much broader array of applications. Traditionally, mission-critical applications have included ERP, finance, HR/payroll and similar systems. Today, many businesses depend on the Internet to sell to customers and work with partners, and web applications and portals are increasingly viewed as just as essential.
But it’s not just downtime for mission-critical applications that organizations have to worry about. Degradations in application speed, even by mere fractions of a second, can be just as troublesome. Today’s end users expect every website and application they interact with to be as fast and reliable as Google, a phenomenon known as the “Google Effect” that applies to both customer-facing and employee-facing applications. For the average user, a 0.1-second response feels instantaneous, similar to what they experience with a Google search. As response times grow, interactions slow and dissatisfaction rises. Amazon, for example, has calculated that a page-load slowdown of just one second could cost it $1.6 billion in sales each year. Google itself found that slowing search response times by just four-tenths of a second would reduce the number of searches by eight million per day, a sizable amount. And an IBM study found that the average cost of downtime for an SAP system ranges from $535,780 to $838,110 per hour.
To manage application performance, many IT teams have adopted either an NPM or a traditional APM tool. Although both provide important information, neither is sufficient on its own to fully address modern application-performance challenges.
Why NPM Alone Isn’t Enough
NPM works by providing base-level reporting on network operations such as delay, packet loss and throughput. This data is then attributed to applications in an effort to infer application performance. Different performance characteristics are acceptable for different types of applications, depending on the application’s nature. For example, streaming video or voice can tolerate some unreliability, such as brief moments of static, but it needs very low latency so that lags don’t occur.
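To make that concrete, here is a minimal sketch in Python of the kind of base-level probing an NPM tool automates at scale. It derives delay and loss figures from repeated TCP connection attempts rather than the packet-level instrumentation real products use, and the target host, port and probe count are illustrative assumptions.

    # Minimal sketch: derive delay and loss from repeated TCP probes.
    # Real NPM products gather richer packet-level data; host/port here
    # are illustrative assumptions.
    import socket
    import statistics
    import time

    def probe(host, port=443, attempts=10, timeout=2.0):
        rtts, failures = [], 0
        for _ in range(attempts):
            start = time.perf_counter()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append((time.perf_counter() - start) * 1000.0)  # ms
            except OSError:
                failures += 1  # count a failed connect as a lost probe
        return {
            "avg_delay_ms": statistics.mean(rtts) if rtts else None,
            "loss_pct": 100.0 * failures / attempts,
        }

    print(probe("example.com"))

Note the shape of the output: raw network metrics with no inherent notion of the application, user or business transaction behind them.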
There are several challenges with NPM solutions. First, many NPM solutions manually identify applications and end users as discrete IP addresses on the network. Given the number of end users and transactions, and the frequency of change in most IT environments, this manual approach comes with scaling and accuracy issues. In addition, treating each application flow as a disparate “conversation” without broader context makes it much harder to identify business operations and the trends of their success and failure. Without correlated trend data on business transactions in context, it is nearly impossible for IT teams to proactively identify and fix performance bottlenecks, such as when one server turns into a hotspot for all the applications running through it. This leaves IT teams stuck in reactive mode.
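To see why manual, IP-based identification scales poorly, consider this deliberately naive sketch; the addresses and application names are hypothetical.

    # Hypothetical, hand-maintained IP-to-application table of the kind
    # manual NPM identification implies.
    APP_BY_IP = {
        "10.0.1.15": "erp-frontend",
        "10.0.1.16": "erp-frontend",
        "10.0.2.40": "payroll-db",
    }

    def identify(ip):
        # Any address introduced by autoscaling, DHCP churn or a migration
        # falls through to "unknown" until someone edits the table.
        return APP_BY_IP.get(ip, "unknown")

    print(identify("10.0.1.15"))  # erp-frontend
    print(identify("10.0.3.77"))  # unknown: a blind spot

Every autoscaling event, address renewal or server migration silently invalidates entries like these, which is exactly the accuracy problem described above.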
Second, NPM metrics are concentrated on the first tier of the data center, meaning that performance problems stemming from back-end database interactions, the cloud or third-party services beyond the firewall are not reflected. This leaves IT teams with huge blind spots across the complete end-to-end application-delivery chain. Just because the lights in the data center are green does not mean that end users are having a satisfactory experience.
Why Traditional APM Is Incomplete
As we’ve discussed, a well-performing network does not guarantee strong application performance. NPM approaches have traditionally lacked insight into what really matters—the end-user experience. But traditional APM solutions that focus solely on application performance are also insufficient, because they neglect to assess how the underlying infrastructure affects the application.
Put another way, solutions that focus on the performance of an individual application may measure metrics such as end-to-end transaction time, but they offer no way to troubleshoot why a transaction is stalling in a particular network or infrastructure tier. This leads to excessive guesswork, finger-pointing and longer MTTR, at a time when every second of a slowdown translates directly into lost revenue and profitability.
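The limitation is easy to see in miniature. The sketch below times a transaction end to end, which is roughly what a traditional APM measurement reduces to; the URL is an illustrative assumption.

    # Minimal sketch: one end-to-end transaction time, with no breakdown
    # showing whether DNS, the network path or a back-end tier is slow.
    import time
    import urllib.request

    start = time.perf_counter()
    with urllib.request.urlopen("https://example.com/", timeout=10) as resp:
        resp.read()
    elapsed = time.perf_counter() - start
    print(f"end-to-end transaction time: {elapsed:.3f}s")
    # If this number spikes, nothing here says where the delay occurred.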
To address the full scope of performance problems wherever they may lie, IT teams must be able to first discern an end-user performance drop-off, then trace back and fix the root issue across the complete application-delivery chain, as quickly as possible. The range of issues to be addressed spans the end user’s browser, the Internet and the cloud, the data center, and, ultimately, the individual line of application code.
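As a rough sketch of what tracing across that chain involves, the following Python breaks a single web transaction into stages: name resolution, TCP connect, TLS handshake and time to first byte. The host and path are illustrative assumptions, and real tools extend this decomposition through back-end tiers all the way down to application code.

    # Staged timing across part of the delivery chain. Host and path
    # are illustrative assumptions.
    import socket
    import ssl
    import time

    host, path = "example.com", "/"

    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4]
    t1 = time.perf_counter()  # name resolved

    sock = socket.create_connection(addr[:2], timeout=10)
    t2 = time.perf_counter()  # TCP connected

    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    t3 = time.perf_counter()  # TLS handshake done

    tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(1)
    t4 = time.perf_counter()  # first byte received
    tls.close()

    for label, a, b in [("dns", t0, t1), ("connect", t1, t2),
                        ("tls", t2, t3), ("ttfb", t3, t4)]:
        print(f"{label:8s} {(b - a) * 1000:7.1f} ms")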
Conclusion: Combining NPM and APM in a New, Holistic Approach
Today, demands for faster, always-available applications are constantly increasing. To meet this need, the question is no longer, “Shall we adopt NPM or APM?” Rather, it should be, “How can we combine the best of both approaches?”
An approach known as “application-aware network monitoring” combines an end-user-focused view of applications with deep-dive diagnostics to overcome the shortcomings of each individual approach. It monitors all applications 24x7 to detect anomalies, aggregates application-performance trend data and precisely traces the root cause of performance problems across an extremely wide range of variables. In short, by enabling faster, more accurate troubleshooting and a lower MTTR, application-aware network monitoring helps IT teams be more effective and proactive in delivering application performance in an increasingly mission-critical world.
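As a simplified sketch of the continuous, trend-based detection described above, the following Python flags response times that drift far outside a rolling baseline. The sample data, window size and three-sigma threshold are illustrative assumptions.

    # Minimal sketch: flag samples far outside a rolling baseline.
    import statistics
    from collections import deque

    def detect_anomalies(samples, window=20, sigmas=3.0):
        recent = deque(maxlen=window)
        for i, value in enumerate(samples):
            if len(recent) == window:
                mean = statistics.mean(recent)
                stdev = statistics.pstdev(recent)
                if stdev and abs(value - mean) > sigmas * stdev:
                    yield i, value
            recent.append(value)

    # Mostly steady response times (ms) with one injected slowdown.
    times = [120, 118, 125, 122, 119] * 5 + [480, 121, 123]
    for idx, ms in detect_anomalies(times):
        print(f"anomaly at sample {idx}: {ms} ms")

Real platforms apply far more sophisticated baselining across thousands of metrics, but the principle is the same: learn normal behavior, then surface deviations before end users start complaining.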
About the Author
Matthew Zanderigo is a product marketing manager with Compuware’s Application Performance Management (APM) business unit. He specializes in market intelligence, business development and market research. Matthew can be reached at firstname.lastname@example.org.