The New World of Application Performance Management

March 23, 2012

Application performance management spending grows while overall IT spending slows

Application performance management (APM) is the IT discipline focused on monitoring and managing the speed and availability of software applications. In its most recent Magic Quadrant for APM, Gartner notes that $2 billion was spent on APM licenses in 2011, a 15 percent year-over-year increase, roughly five times the 2.9 percent growth in overall worldwide IT spending for the same year.

One of the drivers for the increased interest in APM has been a greater awareness of the direct impact that application performance has on revenue generation. Consider that conversion rates increase 74 percent when page load times decrease from eight to two seconds.

Similarly, AOL recently researched the revenue impact of download delays by comparing differences in the number of web page views per visit, on the basis of whether visitors had a fast or slow experience. Visitors in the top 10 percent of fastest page load times viewed an average of 7.5 pages per visit. In the bottom 10 percent of page load speeds, pages per visit dropped to about five. The lesson is clear: the slower the end-user experience is, the fewer pages your visitors will view, and the less revenue you are apt to generate. Over the past several years, however, it has become increasingly difficult to manage the performance (particularly speed) of websites and applications.

This article will explore the reasons for this and demonstrate why a new approach to APM is needed—one that is based on an understanding of the true end-user experience, covers the entire application delivery chain and offers deep-dive diagnostics to help quickly pinpoint the source of performance problems, both within and beyond the data center.

Modern Websites and Applications Are Increasingly Complex

Modern websites and applications are increasingly multi-sourced: numerous elements, assembled from many origins, come together for the first time in the end user's browser. Consider an online retail application that combines functions served from the retailer's own data center with third-party services beyond the firewall, such as a shopping cart, preference engine and ad networks. Today, the average website connects to more than eight hosts before the page is ultimately served to the end user. Extensive third-party functions can enable a richer online customer experience, but they also introduce performance risk, because any one component can slow the entire website or application. Serving a website or application today means making sure all of these pieces come together in a way that yields the best possible service to the end user.
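As a rough illustration of how quickly a single page fans out across hosts, the following is a minimal Python sketch (the URL is hypothetical). It only counts hosts referenced directly in src/href attributes of the markup, so hosts pulled in dynamically by scripts or by a real browser will be missed, but it shows how to get a first estimate of a page's third-party dependencies.

```python
# Minimal sketch: count the distinct hosts a page references in its HTML.
# The URL below is hypothetical; real pages also pull in hosts at runtime
# via JavaScript, which this static scan will not see.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class HostCollector(HTMLParser):
    """Collects hostnames from src/href attributes in the page markup."""

    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host:
                    self.hosts.add(host)


def referenced_hosts(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    collector = HostCollector()
    collector.feed(html)
    # Include the origin host itself alongside any third-party hosts.
    collector.hosts.add(urlparse(url).netloc)
    return collector.hosts


if __name__ == "__main__":
    hosts = referenced_hosts("https://www.example-retailer.com/")  # hypothetical URL
    print(f"{len(hosts)} distinct hosts referenced:")
    for host in sorted(hosts):
        print(" ", host)
```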

In addition to third-party services, there are many other performance-affecting elements standing between your data center and your end users around the world. These include cloud service providers, regional ISPs, local ISPs, content delivery networks and even browsers and devices. Taken together, all of these elements constitute the web application delivery chain, and poor performance anywhere in the chain will degrade the cumulative end-user experience and reflect poorly on you, the website or application owner, regardless of the actual cause.

End-User Experience: The New Metric for IT

With so much going on beyond your own data center, the reality of modern web applications is that even if your tools inside the firewall indicate that everything is running okay, that’s no guarantee your end users are happy. You can no longer just manage the elements and application components inside your firewall because this gives you only partial coverage and leaves you with significant blind spots. Many aspects of the end-user experience will not be inferable from data collection points within the data center. The point at which the end user accesses a composite application is the only place where true application performance can be understood.

Today it is increasingly critical to assess the end-user experience as part of an overall performance management strategy that covers every performance-impacting element, from the end user's browser all the way back to the multiple tiers of the data center and everything in between. This is the key to identifying and fixing any weak links. Some businesses believe they cannot manage the performance of the entire application delivery chain because many of its components sit outside the firewall and beyond their direct control. But if you manage application performance from the end user's point of view and take in the entire web application delivery chain, you are actually in a stronger position of control.

Deep-Dive Diagnostics

Managing application performance from the end-user perspective becomes even more powerful when combined with advanced diagnostics spanning the "Last Mile" (the end user's browser) all the way back to the "First Mile" (the data center, including code-level and infrastructure issues). The result is First Mile-to-Last Mile coverage and visibility that supports the full complexity of Web 2.0 interdependencies and enables organizations to quickly identify and resolve problems inside the data center.

Managing application performance in this way has never been more important. Business leaders want more application functionality, faster. Dependencies on third-party services have grown, blind spots are multiplying, there is less time to test, and the impact of application problems in production is more painful than ever. Deep-dive diagnostics allow organizations to pinpoint the source of problems precisely, without confusion or delay.

Conclusion

Today, managing application performance is less about the objective availability of individual application infrastructure components and more about understanding the ultimate subjective outcome: the end-user experience. To deliver a strong application experience to an end user, many pieces and parts must all come together and work well.

The good news is that today organizations can ensure well-performing websites and applications through on-demand, realistic end-user experience monitoring. Two complementary approaches are necessary to properly assess the end-user experience. Synthetic monitoring uses geographically dispersed agents to collect performance data from scheduled tests that simulate the way users interact with your websites and applications. Real-user monitoring complements synthetic monitoring by showing how the conditions observed synthetically affect your actual end users (e.g., user satisfaction and conversion rates).
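To make the synthetic side concrete, here is a minimal sketch of a scheduled check, assuming a hypothetical target URL and a two-second threshold. A real synthetic monitoring agent would run from multiple geographic locations, drive a full browser, and report results to a central service; this sketch only times a single HTTP fetch on a fixed schedule.

```python
# Minimal sketch of a synthetic monitoring check. The URL, threshold and
# interval below are assumptions for illustration, not a product's defaults.
import time
from urllib.request import urlopen

TARGET_URL = "https://www.example-retailer.com/"   # hypothetical URL
SLOW_THRESHOLD_SECONDS = 2.0                       # flag responses slower than this
CHECK_INTERVAL_SECONDS = 300                       # run every five minutes


def run_check(url):
    """Fetch the page once and return (elapsed_seconds, bytes_received)."""
    start = time.monotonic()
    body = urlopen(url, timeout=30).read()
    elapsed = time.monotonic() - start
    return elapsed, len(body)


def main():
    while True:
        try:
            elapsed, size = run_check(TARGET_URL)
            status = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "OK"
            print(f"{status}: {TARGET_URL} loaded {size} bytes in {elapsed:.2f}s")
        except Exception as exc:                   # availability failure
            print(f"DOWN: {TARGET_URL} failed with {exc}")
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```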

Organizations with advanced APM capabilities use a combination of these approaches to achieve broad end-user visibility and deep-dive diagnostic capabilities across the complete application delivery chain, from the end user’s browser all the way back to the multiple tiers of the data center. This is the most comprehensive approach to finding, prioritizing and fixing performance problems across the entire web application delivery chain.

About the Author

Kieran Taylor is Director of Product Marketing for Compuware’s APM Business Unit.
