We all know that faster is better. Research and results clearly indicate that faster experiences with fewer errors result in increased usage, conversion, and revenue. With the desire to improve business metrics in mind, organizations often seek immediate improvements in customer experience across digital properties. However, without proper planning and coordination, these attempts consistently fail. In this post we will discuss the Performance Maturity Curve and how organizations can use it to systematically improve their performance and business in a sustainable way.
Optimizing web performance is a journey that puts user and customer experience first. It starts with recognizing the need to go beyond basic uptime, leverages Digital Experience Monitoring to establish benchmarks for page performance and end-user experience, and ultimately incorporates performance measurements into the software development release cycle. Progressing on this journey requires increasing levels of maturity, both technical and organizational, with each phase building on the one before. For example:
Much like Maslow's Hierarchy of Needs, companies must adopt the technical and organizational practices of one phase before attempting the next.
Let’s take a closer look at the performance maturity curve.
You cannot improve what you aren’t measuring. You can’t resolve what you don’t know about. The focus of the first phase is for organizations to gain basic visibility into availability and site stability as a catch-all for performance. In this phase, achieving SLAs for business services is the top priority, and customer experience is managed reactively, often after support tickets have been submitted and escalated.
Often, IT or SRE teams are strapped for resources, both people and time. IT and engineering teams everywhere are undergoing massive cloud migrations and modernizations, and simply maintaining SLAs for uptime may be seen as good enough. Also, digital businesses overhauling their infrastructure and backend services may not be aware of how simple benchmarking and measuring end-user experience can be. This approach mirrors many organizations that are early in the process of modernizing their architecture – teams are in silos and don’t have the time or ability to look at the bigger picture of user experience. While individual pages and APIs may be measured for general uptime and performance, the complete picture of customer happiness is missed.
What to do:
A user’s experience is more nuanced than “Is a service available or not?” Phase two of the maturity curve starts with the understanding that, for the business, a slow service is just as bad as an unavailable one. Teams need to grow from a reactive, uptime-based approach to a user-experience-focused approach using richer, industry-researched measurements for page performance like Google’s Core Web Vitals. Since web vitals are now standard in modern Real User Monitoring and synthetic monitoring solutions, businesses can measure user experience alongside uptime and alert on poorly performing pages.
What to do:
Let’s explore these in depth.
Teams establish baseline measurements on user experience across critical pages or services. Instead of traditional “page load time,” Google’s Core Web Vitals provide three measurements to help accurately identify how an end-user experiences a page: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. These three metrics help quantify how quickly a page displays content, page load responsiveness to user interactivity, and a page’s visual stability. Having more precise, quantifiable measurements for user experience helps teams quantify, trend, and move toward optimizing digital experience with their service level objectives (SLOs).
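A minimal sketch of how a team might rate collected samples against Google's published Core Web Vitals thresholds (good/poor boundaries: LCP 2.5s/4.0s, FID 100ms/300ms, CLS 0.1/0.25); the `rate` helper and the dictionary layout are illustrative, not part of any particular monitoring product:

```python
# Classify Core Web Vitals samples against Google's published thresholds.
# (good, poor) boundaries: LCP in seconds, FID in milliseconds, CLS unitless.
THRESHOLDS = {
    "lcp": (2.5, 4.0),   # Largest Contentful Paint (seconds)
    "fid": (100, 300),   # First Input Delay (milliseconds)
    "cls": (0.1, 0.25),  # Cumulative Layout Shift (score)
}

def rate(metric: str, value: float) -> str:
    """Return 'good', 'needs-improvement', or 'poor' for one sample."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs-improvement"
    return "poor"

print(rate("lcp", 2.1))  # good
print(rate("fid", 180))  # needs-improvement
print(rate("cls", 0.3))  # poor
```

Ratings like these can feed directly into SLOs, e.g. “75% of page views rate ‘good’ on LCP.”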
While individual pages (e.g., a home page or product page) are critical, business transactions like the user login flow, user authentication, or checkout process often have several technological dependencies. Modern pages rely on multiple APIs and third parties to fetch data or move a user through a service. Monitoring business transactions enables teams to detect availability or performance issues that arise as data flows through a transaction, which cannot be seen when monitoring the individual parts. Leverage synthetic monitoring to simulate user behavior and measure page performance and functionality.
Example of simulating and measuring multiple pages of the entire business transaction
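The idea of timing each step of a multi-step transaction can be sketched as a small runner; the step names and the `time.sleep` placeholders are hypothetical stand-ins for real actions such as driving a headless browser or calling the service’s APIs:

```python
import time
from typing import Callable

def run_transaction(steps: dict[str, Callable[[], None]]) -> dict[str, float]:
    """Execute each step of a business transaction in order, timing each.
    An exception in any step surfaces immediately, so a broken flow is
    caught even when every individual page is 'up'."""
    timings: dict[str, float] = {}
    for name, step in steps.items():
        start = time.perf_counter()
        step()  # e.g. load page, submit login form, call checkout API
        timings[name] = time.perf_counter() - start
    return timings

# Hypothetical checkout flow with placeholder steps.
timings = run_transaction({
    "load_home": lambda: time.sleep(0.01),
    "login": lambda: time.sleep(0.01),
    "checkout": lambda: time.sleep(0.01),
})
total = sum(timings.values())
```

Per-step timings make it clear *where* in the flow a slowdown occurs, not just that the end-to-end transaction is slow.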
Application teams trending performance across users group traffic into specific buckets using dimensional data like location, browser, or connection type, and then use percentiles to measure performance. Percentiles help identify how the majority of end-users experience page performance. The 75th percentile, for example, is helpful for reporting page performance and user experience without being overly influenced by slow outliers. Dimensional data helps teams understand the characteristics of users navigating through their site. Collecting this data enables richer insight and more accurate measurement of the user’s experience.
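Bucketing by dimension and then taking the 75th percentile per bucket can be sketched as follows; the sample data and bucket labels are invented for illustration:

```python
import math
from collections import defaultdict

def p75(values: list[float]) -> float:
    """75th percentile using the nearest-rank method."""
    ordered = sorted(values)
    return ordered[math.ceil(0.75 * len(ordered)) - 1]

# Each sample: (dimensional bucket, page load metric in seconds).
samples = [
    ("mobile/4g", 1.8), ("mobile/4g", 2.4), ("mobile/4g", 2.1), ("mobile/4g", 9.0),
    ("desktop/wifi", 0.9), ("desktop/wifi", 1.1), ("desktop/wifi", 1.0), ("desktop/wifi", 1.3),
]

buckets: dict[str, list[float]] = defaultdict(list)
for bucket, value in samples:
    buckets[bucket].append(value)

# p75 per bucket: the single 9.0s outlier does not dominate the mobile number.
report = {bucket: p75(vals) for bucket, vals in buckets.items()}
print(report)  # {'mobile/4g': 2.4, 'desktop/wifi': 1.1}
```

Note how a mean would have dragged the mobile figure toward the 9-second outlier, while the p75 reflects what most mobile users actually saw.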
Now that they can quantify performance and user experience across their audience, phase two companies can establish benchmarks and baselines, which help IT and engineering teams measure the impact of different performance optimizations on application KPIs. What was the impact of adopting that CDN or refactoring how JavaScript loads? Now you know.
In phase 3, companies understand at an organizational level that performance and UX have a direct impact on their business. Knowing this, companies can use the baselines and benchmarks they have established for key pages and business transactions during the development process to ensure performance improves rather than regresses.
What to do:
Example of performance within the deployment and DevOps process
Since automation is such a large and critical component of modern enterprises, embedding performance into CI/CD practices helps engineering teams identify and resolve performance problems earlier in the development lifecycle. Frontend development or engineering teams who have agreed to performance budgets can automatically pass or fail builds based on specific page criteria such as page weight, image or script size, or total number of external resources, to name a few.
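A budget gate of this kind reduces to a simple comparison that a CI step can run after a build; the metric names and limits below are made-up examples, not a standard schema:

```python
def check_budget(measured: dict[str, float], budget: dict[str, float]) -> list[str]:
    """Return the list of budget violations; an empty list means the build passes."""
    return [
        f"{metric}: {measured[metric]} > budget {limit}"
        for metric, limit in budget.items()
        if measured.get(metric, 0) > limit
    ]

# Hypothetical budget agreed by the frontend team, and one build's results.
budget = {"page_weight_kb": 1500, "script_kb": 350, "external_requests": 60}
measured = {"page_weight_kb": 1620, "script_kb": 310, "external_requests": 48}

violations = check_budget(measured, budget)
if violations:
    # In CI, exiting nonzero here would fail the build.
    print("FAIL:", violations)
```

The same check works whether the measurements come from a synthetic test run or from a build-time asset analysis.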
Prior to pushing a new deployment, engineering teams can measure the impact of new code, feature improvements, or services against previous builds or versions. A/B testing with synthetic monitoring helps identify the performance impact of new code from one page to another, which is helpful in catching problems before pushing to production. Combining synthetic monitoring with Real User Monitoring lets teams compare performance under optimal conditions on fast networks against what real users actually experience.
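Comparing a candidate build against a baseline can be as simple as comparing their 75th-percentile timings with a tolerance; the timing samples and the 5% tolerance are illustrative assumptions:

```python
import math

def p75(values: list[float]) -> float:
    """75th percentile using the nearest-rank method."""
    return sorted(values)[math.ceil(0.75 * len(values)) - 1]

def is_regression(baseline: list[float], candidate: list[float],
                  tolerance: float = 0.05) -> bool:
    """Flag the candidate build if its p75 load time is more than
    `tolerance` (fractional) slower than the baseline build's."""
    return p75(candidate) > p75(baseline) * (1 + tolerance)

# Hypothetical synthetic-test timings (seconds) for the same page
# measured against two builds under identical conditions.
baseline = [1.0, 1.1, 1.2, 1.3]
candidate = [1.2, 1.3, 1.4, 1.5]
print(is_regression(baseline, candidate))  # True: p75 rose from 1.2s to 1.4s
```

Using a percentile rather than a single run keeps one noisy measurement from blocking (or waving through) a deployment.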
The larger business goal with any Digital Experience Monitoring solution is to help correlate web performance to business outcomes. While there is no easy way to exactly correlate page speed or user-experience to revenue, conversions, or usage, tracking performance alongside business results helps provide guidance and direction to engineering, IT, and digital business leaders. Increasingly, digital businesses are aligning on Google’s Core Web Vitals as a baseline for user-experience. Going even further, some companies set up “speed teams” or “centers of excellence” to benchmark page performance across their user journey, benchmark against industry standards and competitors, and continuously improve deployments.
For time- and resource-constrained IT and engineering teams, establishing performance best practices may feel daunting. However, simple steps like establishing baselines for user experience, standardizing on Core Web Vitals, and tracking the performance of new features and improvements can result in less abandonment, increased usage, and higher revenues.
Ready to take the next step in your performance maturity? Start your free trial of Splunk Synthetic Monitoring today.
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company — with over 7,500 employees, Splunkers have received over 1,020 patents to date and availability in 21 regions around the world — and offers an open, extensible data platform that supports shared data across any environment so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.