Website performance monitoring is the process of ensuring that end users enjoy a smooth and painless experience with websites, web applications and web services whenever they interact with them.
With more than 20 percent of all consumer spending in the United States occurring online, companies cannot afford to lose sales opportunities because of poorly performing or malfunctioning websites. Indeed, slow-loading websites cost retailers approximately $72.6 billion per year, one study found. Nine out of 10 U.S. shoppers say they would leave a site if it didn’t load in a reasonable amount of time, 57 percent would leave to make a purchase from another retailer, and 21 percent would never return, according to a recent survey.
Web performance monitoring tools help prevent such misfortunes with features that let network administrators view, gauge, manage and fine-tune the health of their online properties. Specifically, these tools enable administrators to gather metrics on key performance indicators, such as website speed, content load speeds, website uptime, connection times and latency, to isolate and address issues before they affect customers.
There are two main types of performance monitoring technologies: synthetic monitoring, which proactively seeks out current or short-term website issues, and real user monitoring (RUM), a passive or reactive approach for gathering information to understand long-term trends.
Website performance monitoring is rapidly becoming an indispensable tool for improving online customer experience and minimizing the risk that poorly performing sites lead to lost sales. This article offers an in-depth examination of how website performance monitoring works, how organizations can implement it and the best practices for getting started.
Website performance is all about speeds and feeds. How well are web pages loading after a user clicks on a link or types in a URL? How is page size affecting usability of the site, and by association, user satisfaction? Do pages pop up faster than a user can blink? Or do they appear glitchy, fail to load images or crash altogether? And how do the web pages perform on different devices, such as desktop PCs or smartphones?
When evaluating how customers might experience a website, network administrators try to answer many questions like these. Many industry experts say a website should ideally load in less than three seconds. This target likely exists because nearly half (47 percent) of U.S. consumers say that’s how long their patience will last before they decide to abandon a slow-loading website.
It is difficult to know whether companies are meeting those targets, as conclusions from related studies vary widely. Some indicate average page load times are slightly above three seconds. Others put average times at around 10 seconds for desktop PCs and as high as 22 seconds for mobile devices.
What is known is that most brands are still striving to improve those numbers, and many also try to track and influence how customers subjectively perceive the performance of their site. In many ways, how fast a website feels to a user can have an even greater impact on their brand experiences than the measurable reality. Some brands address perception gaps with design tricks, such as displaying loading spinners while users wait for an entire page to emerge or providing jokes and tips to entertain them during the loading interlude. These approaches can help pass the time for customers and keep them around longer.
Google’s Core Web Vitals, a subset of its Web Vitals, have become a significant way of ranking how well web pages are delivering quality user experiences. Metrics in Core Web Vitals look at key performance indicators, such as connection speed, content loading speeds and page speeds, interactivity, bottlenecks and visual stability. Google continually updates Core Web Vitals to consider more granular details influencing site performance. The three core metrics are: Largest Contentful Paint (LCP), which measures the time it takes the largest content element, often a hero image, to render after a user tries to load a page; First Input Delay (FID), which measures the time between a user’s first interaction with a site and when the browser is able to respond to that interaction; and Cumulative Layout Shift (CLS), which measures visual stability by quantifying how much page content shifts unexpectedly during loading.
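To make these metrics concrete, here is a minimal sketch, assuming a modern browser, of how LCP, FID and CLS can be observed with the standard PerformanceObserver API; production code would more likely use Google’s open-source web-vitals library, which handles the many edge cases:

```typescript
// Minimal sketch: observing Core Web Vitals with the standard
// PerformanceObserver browser API. The logging is illustrative.

// LCP: candidates fire as progressively larger elements render; the last
// entry emitted before the user first interacts is the page's LCP.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate (ms):', entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// FID: the delay between the user's first interaction and the moment the
// browser could begin processing it.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0] as PerformanceEventTiming;
  console.log('FID (ms):', first.processingStart - first.startTime);
}).observe({ type: 'first-input', buffered: true });

// CLS: a running sum of unexpected layout-shift scores (shifts within
// 500 ms of user input are excluded via the hadRecentInput flag).
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}
let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('CLS so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });
```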
With website performance monitoring, organizations can proactively and reactively run tests to measure how well their web properties are meeting Core Web Vitals criteria. Uptime monitoring, for example, can determine the ratio of uptime, directly corresponding to user experience and satisfaction. If a website isn’t quite up to snuff, some solutions will even provide relevant best practices and insights for improving user experiences.
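For a sense of scale, an uptime ratio is simply available time divided by total time: a site that meets a 99.9 percent uptime target can be unavailable for no more than about 43 minutes over a 30-day month (43,200 minutes × 0.001 ≈ 43 minutes).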
Website performance monitoring tools make administrators’ lives easier by gathering meaningful data on website performance, creating visuals on dashboards and providing notifications when a metric is not up to par.
While a major advantage of website performance monitoring is that it allows organizations to head off problems, it also provides numerous business and operational benefits.
Many technical and design issues create challenges that some, but not all, website performance monitoring tools attempt to solve.
Website performance monitoring has many business and operational advantages but can be challenging in light of complex architectures and a lack of intelligent alerting.
Synthetic monitoring (also known as synthetic testing or active monitoring) simulates user transactions by relying on behavioral scripts that emulate user flows and measure availability, functionality and performance for critical endpoints or across the entire user journey. Because this technique stages and directs an artificial user experience, it’s classified as active monitoring, whereas real user monitoring is considered passive monitoring. In practice, synthetic monitoring works like this: administrators (likely teams responsible for SLA uptime) define a set of checkpoints and select performance metrics. A robot client then follows the predefined user journey through the app, simulating transactions that mimic human behavior and sending back information on a page’s availability (did the URL respond?), functionality (did all the buttons work?) and performance (how fast did page resources load?). Typically, teams set up alerts to notify them of outages for critical service endpoints, which can trigger their incident response.
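As an illustration, here is a minimal sketch of a synthetic availability-and-latency check in TypeScript; the endpoint, interval and budget are hypothetical, and a real robot client would also script multi-step journeys with a headless browser and run from multiple regions:

```typescript
// Minimal synthetic check: a robot client that probes one endpoint on a
// schedule and flags availability or latency problems. Node 18+ (built-in
// fetch) is assumed; the URL and budget below are illustrative.
const CHECK_URL = 'https://example.com/checkout';
const LATENCY_BUDGET_MS = 3000; // the commonly cited three-second target

// Stand-in for a real notification integration (email, Slack, PagerDuty).
function notify(message: string): void {
  console.error(`[synthetic-monitor] ${message}`);
}

async function runCheck(): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(CHECK_URL);
    const elapsedMs = Date.now() - start;

    // Availability: did the URL respond with a success status?
    if (!res.ok) {
      notify(`Availability check failed: HTTP ${res.status}`);
    }
    // Performance: did the response arrive within the latency budget?
    if (elapsedMs > LATENCY_BUDGET_MS) {
      notify(`Latency budget exceeded: ${elapsedMs} ms`);
    }
  } catch (err) {
    // Network failure, DNS error, timeout, etc.
    notify(`Endpoint unreachable: ${String(err)}`);
  }
}

// Probe once a minute, starting immediately.
runCheck();
setInterval(runCheck, 60_000);
```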
In a nutshell, synthetic monitoring allows organizations to proactively verify availability, functionality and performance before real users encounter a problem.
Synthetic monitoring simulates real user transactions, measuring availability, functionality and performance.
Real user monitoring (RUM), also known as end user monitoring or end user experience monitoring, is a method used to measure the end user experience in application performance management. It provides visibility into user experience with websites or web apps by passively collecting and analyzing timing, error and dimensional information on end users in real time. RUM helps developers understand how their code affects page performance and the overall user experience.
RUM offers a comprehensive view into the customer experience, as opposed to simple uptime/downtime monitoring, which only measures availability. For example, an e-commerce website’s home page might be available, but the page might be delivering content or images slowly. It might also be experiencing delays when processing a user’s click or keystroke, resulting in site or shopping cart abandonment. RUM captures the customer’s experience after the web server has sent the initial HTML document. This information is valuable because 80 to 90 percent of end user wait time is spent on the client side, in the web browser, rather than on the back end.
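To show what passive collection looks like in practice, here is a minimal RUM sketch in TypeScript, assuming a modern browser; the /rum-collect endpoint is hypothetical, and real RUM agents also capture errors, interactions and dimensional metadata such as browser and geography:

```typescript
// Minimal RUM sketch: passively capture real-user timings for this page
// load and beacon them to a collector. The collector path is hypothetical.
window.addEventListener('load', () => {
  // loadEventEnd is only finalized after the load handlers return, so
  // defer the read by one tick.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      'navigation'
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const payload = {
      url: location.href,
      // Time to first byte: the back-end share of the user's wait.
      ttfbMs: nav.responseStart - nav.startTime,
      // Time until the DOM finished loading and parsing.
      domCompleteMs: nav.domComplete,
      // Total load time as the user experienced it.
      loadMs: nav.loadEventEnd - nav.startTime,
    };

    // sendBeacon queues the data without blocking the UI thread and
    // survives the user navigating away.
    navigator.sendBeacon('/rum-collect', JSON.stringify(payload));
  }, 0);
});
```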
RUM is an increasingly important tool for understanding and optimizing user experience, as well as alerting administrators to issues and achieving business objectives.
As with most technical disciplines, the best way to get started with website performance monitoring is to first determine your goals. What are you hoping to accomplish? What opportunities would you like to explore with your web properties? What performance challenges have you faced, and how have they been affecting customer experience? And what kind of budget do you have to upgrade your current situation?
Once you have solid answers to these questions, it’s time to evaluate the various website performance tools and shortlist those that meet your specific requirements. You’ll likely want to prioritize recently built tools over legacy products that might struggle to keep up with the complexity of modern web architectures. Similarly, look at solutions that integrate the latest real-time, intelligent alerting capabilities. It will also be important to evaluate offerings that have been designed or updated to help organizations meet or exceed Google’s Core Web Vitals standards. Finally, look for platforms that offer multiple types of web performance monitoring, such as synthetic monitoring and RUM. Most organizations want to blend these capabilities, so you are better off choosing a platform that offers several of the more useful ones.
Customers today expect companies to do everything they can to deliver consistently superior experiences across every physical and digital touchpoint. That’s because so many businesses are online now that customers have the power to be more selective than ever about the brands they patronize. As part of that, if a website’s design or performance issues lead to unnecessarily unpleasant experiences, customers will most likely take their business elsewhere — quite possibly for good.
As such, organizations can never take website performance monitoring lightly. Website architectures and user expectations are evolving too quickly for that, and even one misstep can lead to significantly higher costs, shrinking revenues and lost customer trust. It is vital, therefore, to find a website performance monitoring solution that will not only meet your immediate needs but also be flexible enough to serve future requirements. Your business could truly depend on it.