Application performance management (APM) is defined as the monitoring and management of the performance (speed, availability and reliability) of software applications, whether they are internal facing (e.g., employee-productivity tools) or customer facing (e.g., e-commerce transactions). APM strives to detect and diagnose application problems to maintain an expected level of service for users. But is this definition sufficient in 2016?
End-user demands for exceptional performance have become increasingly stringent in recent years. Unreliable, sluggish applications can severely hamper worker productivity as well as annoy customers, threatening brand and revenue. According to Nielsen Norman Group, even an extra second or two of delay in load time can create an unpleasant user experience, causing a transaction-oriented site to lose sales.
The growing complexity of Internet assets has added to the challenge of managing application performance. With more layers of complexity standing between the data center and end users, you can no longer assume end users are having a great experience just because your servers are up and running. Organizations must closely examine, and in many cases widen, the scope of their APM strategies to consistently satisfy the end users on the other side of this complexity. Here are five important considerations.
1. Accurate performance measurements depend on getting as geographically close to end users as possible.
If you have remote offices, even if your headquarters-based data center systems are running great, a poorly performing regional ISP can degrade your remote users’ experiences. Or, maybe you have customers in a faraway region who are accessing your website content via a content-delivery network (CDN). If the CDN goes awry, the online customer’s experience suffers.
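A minimal sketch of this idea: collect response-time samples from probes in several regions and flag any region whose experience lags well behind the global baseline, which points to a regional ISP or CDN issue rather than a data-center problem. The region names, sample values and thresholds below are purely illustrative, not from any particular monitoring product.

```python
from statistics import median

def flag_degraded_regions(samples, factor=2.0, floor_ms=200.0):
    """Given per-region response-time samples (ms), flag regions whose
    median latency exceeds `factor` times the global median (and an
    absolute floor), suggesting a regional ISP or CDN issue rather
    than a data-center problem."""
    all_ms = [ms for region in samples.values() for ms in region]
    baseline = median(all_ms)
    return sorted(
        region for region, ms in samples.items()
        if median(ms) > max(factor * baseline, floor_ms)
    )

# Hypothetical probe data: headquarters regions look fine,
# but the APAC CDN edge is slow.
samples = {
    "us-east":  [110, 120, 115, 105],
    "eu-west":  [140, 150, 145, 135],
    "ap-south": [900, 950, 870, 910],  # degraded CDN edge
}
print(flag_degraded_regions(samples))  # → ['ap-south']
```

The key point is that no single probe location can produce this picture; only measurements taken close to each user population reveal which users are actually suffering.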
2. Consider the pros and cons of in-house and cloud-based APM deployments.
In-house and cloud-based APM deployments each come with their own pros and cons. In-house deployments are secure and scalable, and since they typically sit at a central part of the organization’s network, connectivity to the apps being monitored is fairly consistent and reliable. These deployments, however, can be expensive and require significant IT oversight. Cloud-based deployments offer ease of deployment and cost advantages, but they can be less secure; connectivity can be intermittent, and available bandwidth is not guaranteed as an organization’s application portfolio grows.
Today a third option is emerging: on-premises application-monitoring tools that can be deployed quickly, easily and cost effectively. These solutions are ideal for smaller companies with limited IT resources and/or those with remote offices. Determining the right approach requires businesses to consider their whole picture and unique needs.
3. Reactive app monitoring is dead.
The days of learning about an application problem from a disgruntled customer call or an annoyed employee are over. Today’s forward-thinking organizations are applying advanced analytics to a wealth of performance data in order to identify growing hot spots and address them proactively, versus waiting for a problem to occur. For example, an organization may see that at a certain level of CPU utilization, performance for a specific application running on that server begins to drop off.
The benefits of this type of approach are twofold. First, organizations can stave off problems before they occur. Second, this approach delivers a new kind of IT operational excellence, where the goal is not to just deliver “good enough” performance using the least amount of resources, but to tweak and apply systems in a manner that best supports the end-user experience. The most advanced APM deployments can tailor the end-user experience to the value of the customer, for example, ensuring the fastest page-load and transaction times for customers who spend the most.
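The CPU-utilization example above can be sketched in a few lines: bucket historical (CPU utilization, response time) samples and find the lowest utilization band where latency departs from baseline. The data, bucket size and degradation factor here are illustrative assumptions, not any vendor's actual algorithm.

```python
from statistics import median

def find_cpu_knee(samples, bucket=10, degrade_factor=1.5):
    """samples: list of (cpu_percent, response_ms) pairs.
    Returns the lowest CPU bucket (e.g., 70 for 70-79%) where median
    response time exceeds `degrade_factor` times the median of the
    lowest bucket -- the point where performance begins to drop off."""
    buckets = {}
    for cpu, ms in samples:
        buckets.setdefault(int(cpu // bucket) * bucket, []).append(ms)
    ordered = sorted(buckets)
    baseline = median(buckets[ordered[0]])
    for b in ordered[1:]:
        if median(buckets[b]) > degrade_factor * baseline:
            return b
    return None

# Hypothetical history: latency is flat until ~70% CPU, then degrades.
samples = [(20, 100), (25, 105), (45, 110), (50, 115),
           (72, 180), (75, 190), (85, 400), (88, 420)]
print(find_cpu_knee(samples))  # → 70
```

Once such a knee is identified, an alert on sustained CPU above it fires before users feel the slowdown, which is the essence of the proactive approach described above.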
4. Avoid information overload.
Given so much IT complexity, an APM system must be able to not just produce data, but analyze and make sense of it. Otherwise, IT teams may find themselves drowning in data and “noise,” but with no real actionable information. One example in the data center is virtualized infrastructures. Although virtualization has many benefits in resource utilization, flexibility and agility, it can lead to server sprawl and, in turn, data sprawl, making it extremely hard to manually pinpoint the source of a particular application’s problems.
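One simple way to separate signal from noise is statistical anomaly detection: suppress data points that fall within normal variation and surface only genuine outliers. The sketch below uses a trailing-window z-score; the window size, threshold and latency series are hypothetical choices for illustration.

```python
from statistics import mean, stdev

def actionable_anomalies(series, window=5, z=3.0):
    """Flag indices where a metric jumps more than `z` standard
    deviations above its trailing window -- surfacing genuine
    anomalies instead of alerting on every data point."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and (series[i] - mu) / sigma > z:
            flagged.append(i)
    return flagged

# Hypothetical latency series (ms): steady around 100, one real spike.
latency = [100, 102, 99, 101, 100, 103, 101, 350, 102, 100]
print(actionable_anomalies(latency))  # → [7]
```

Ordinary jitter (99 to 103 ms) produces no alerts at all; only the spike to 350 ms is flagged, which is the difference between raw data and actionable information.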
5. Be careful whom you let into your house—and monitor them closely.
This rule applies to platform resources outside of the data center, like cloud-service providers and CDNs as well as third-party services and elements that augment websites and applications, such as social-media plug-ins and marketing tags. The advanced analytics described above can pinpoint when a performance issue is the result of external-service degradation, yielding several benefits: First, no time is wasted “war-rooming” when a problem is external and beyond an organization’s direct control. Second (and in a similar vein), organizations can know right away when it’s time to enact a contingency plan, such as removing the service or enlisting a backup, to minimize end-user impact. And third, organizations have the data they need to enforce SLAs.
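Enforcing an SLA on an external service comes down to comparing measured availability and latency against the contracted targets. A hedged sketch, assuming a simple probe log and hypothetical targets of 99.9% availability and a 95th-percentile response time under 800 ms:

```python
def check_sla(measurements, availability_target=0.999, p95_target_ms=800.0):
    """measurements: list of (ok: bool, response_ms: float) probes of a
    third-party service (CDN, marketing tag, plug-in). Returns a
    compliance report an organization could use to enforce its SLA."""
    availability = sum(1 for ok, _ in measurements if ok) / len(measurements)
    ranked = sorted(ms for ok, ms in measurements if ok)
    # Nearest-rank 95th percentile of successful responses.
    p95 = ranked[max(0, -(-95 * len(ranked) // 100) - 1)]
    return {
        "availability_met": availability >= availability_target,
        "latency_met": p95 <= p95_target_ms,
        "availability": availability,
        "p95_ms": p95,
    }

# Hypothetical probe log: one failed probe out of 20, fast otherwise.
probes = [(True, 120)] * 19 + [(False, 0)]
report = check_sla(probes)
print(report["availability_met"], report["latency_met"])  # → False True
```

Here the service is fast but not available enough, so the data supports an SLA claim even though no user ever complained about speed: exactly the evidence described above.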
Increased IT complexity—the cloud, CDNs, external DNS and other third-party services—is the result of our industry wanting to deliver stronger end-user experiences. Ironically, these same technologies introduce more points of failure and can make managing application performance much more challenging. As organizations reevaluate their APM processes, many approaches and options are available. But one thing that’s not optional is their need to consistently satisfy end users. This fact compels organizations to rethink their APM strategies and make them much more extensive. Those that fail to do so can surely expect to hear about it from their customers on Twitter.
About the Author
Dennis Callaghan, Director of Industry Innovation at Catchpoint Systems, provides market insights to accelerate the company’s product vision, while expanding its influencer program with industry analysts, thought leaders and early adopters. Dennis was previously an analyst at 451 Research for 10 years, where he was recognized as an expert in IT performance monitoring and management. He has also held numerous editorial positions at leading IT trade publications, including eWeek.