Over the past several years, members of the Chrome team, in collaboration with the W3C Web Performance Working Group, have been working to standardize a set of new APIs and metrics that more accurately measure how users experience the performance of a web page.
To help ensure the metrics are relevant to users, we frame them around a few key questions:
| Question | What it tells you |
| --- | --- |
| **Is it happening?** | Did the navigation start successfully? Has the server responded? |
| **Is it useful?** | Has enough content rendered that users can engage with it? |
| **Is it usable?** | Can users interact with the page, or is it busy? |
| **Is it delightful?** | Are the interactions smooth and natural, free of lag and jank? |
Performance metrics are generally measured in one of two ways:

- **In the lab:** using tools to simulate a page load in a consistent, controlled environment
- **In the field:** on real users actually loading and interacting with the page
Neither of these options is necessarily better or worse than the other; in fact, you generally want to use both to ensure good performance.
Testing performance in the lab is essential when developing new features. Before a feature ships, it's impossible to measure its performance on real users, so testing it in the lab beforehand is the best way to prevent performance regressions.
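Lab testing can also be automated, for example as a gate in continuous integration. Here's a minimal sketch using the Lighthouse Node module together with chrome-launcher (both assumed to be installed); the URL and the 0.9 score threshold are placeholders, not recommendations:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditPerformance(url: string): Promise<number> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  try {
    // Run only the performance category to keep the audit fast.
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      output: 'json',
    });
    // Lighthouse category scores range from 0 to 1; treat a missing
    // result as a failing score.
    return result?.lhr.categories.performance.score ?? 0;
  } finally {
    await chrome.kill();
  }
}

// Hypothetical CI gate: fail the build if the score drops below 0.9.
auditPerformance('https://example.com').then((score) => {
  console.log(`Performance score: ${score}`);
  if (score < 0.9) process.exit(1);
});
```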
On the other hand, while testing in the lab is a reasonable proxy for performance, it isn't necessarily reflective of how all users experience your site in the wild.
The performance of a site can vary dramatically based on a user's device capabilities and their network conditions. It can also vary based on whether (or how) a user is interacting with the page.
Moreover, page loads may not be deterministic. For example, sites that load personalized content or ads may experience vastly different performance characteristics from user to user. A lab test will not capture those differences.
The only way to truly know how your site performs for your users is to actually measure its performance as those users are loading and interacting with it. This type of measurement is commonly referred to as Real User Monitoring—or RUM for short.
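As an illustration, here's a minimal RUM sketch using the open-source web-vitals library (one option among many; a hand-rolled PerformanceObserver or a RUM vendor works too). The `/analytics` endpoint is a placeholder for your own collection backend:

```ts
import {onCLS, onINP, onLCP} from 'web-vitals';

// Send each metric to the analytics endpoint. sendBeacon is preferred
// because it survives page unload; fall back to a keepalive fetch.
function sendToAnalytics(metric: {name: string; value: number; id: string}) {
  const body = JSON.stringify(metric);
  // '/analytics' is a placeholder for your own collection endpoint.
  navigator.sendBeacon('/analytics', body) ||
    fetch('/analytics', {body, method: 'POST', keepalive: true});
}

// Each callback fires when the metric's value is ready to report.
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```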
There are several other types of metrics that are relevant to how users perceive performance:

- **Perceived load speed:** how quickly a page can load and render all of its visual elements to the screen.
- **Load responsiveness:** how quickly a page can load and execute any required JavaScript code so components respond quickly to user interaction.
- **Runtime responsiveness:** after page load, how quickly the page can respond to user interaction.
- **Visual stability:** whether elements on the page shift in ways that users don't expect and potentially interfere with their interactions.
- **Smoothness:** whether transitions and animations render at a consistent frame rate and flow fluidly from one state to the next.
Given all the above types of performance metrics, it's hopefully clear that no single metric is sufficient to capture all the performance characteristics of a page. However, the following metrics are good general-purpose indicators:

- **First Contentful Paint (FCP):** measures the time from when the page starts loading to when any part of the page's content is rendered on the screen.
- **Largest Contentful Paint (LCP):** measures the time from when the page starts loading to when the largest text block or image element is rendered on the screen.
- **First Input Delay (FID):** measures the time from when a user first interacts with your site (i.e. when they click a link, tap a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction.
- **Time to Interactive (TTI):** measures the time from when the page starts loading to when it's visually rendered, its initial scripts (if any) have loaded, and it's capable of reliably responding to user input quickly.
- **Total Blocking Time (TBT):** measures the total amount of time between FCP and TTI where the main thread was blocked long enough to prevent input responsiveness.
- **Cumulative Layout Shift (CLS):** measures the cumulative score of all unexpected layout shifts that occur between when the page starts loading and when its lifecycle state changes to hidden.

While this list includes metrics measuring many of the various aspects of performance relevant to users, it does not include everything (e.g. runtime responsiveness and smoothness are not currently covered).
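Many of these metrics can be observed directly in the browser. For instance, here's a minimal sketch (logging to the console rather than reporting anywhere) that observes LCP candidates with a PerformanceObserver:

```ts
// Log each Largest Contentful Paint candidate as the browser reports it.
// The final LCP value is the last candidate emitted before user input.
const po = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry);
  }
});

// `buffered: true` includes entries that occurred before observation began.
po.observe({type: 'largest-contentful-paint', buffered: true});
```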
In some cases, new metrics will be introduced to cover missing areas, but in other cases the best metrics are ones specifically tailored to your site.
The performance metrics listed above are good for getting a general understanding of the performance characteristics of most sites on the web. They also provide a common set of metrics for comparing a site's performance against that of its competitors.
However, there may be times when a specific site is unique in some way that requires additional metrics to capture the full performance picture. For example, the LCP metric is intended to measure when a page's main content has finished loading, but there could be cases where the largest element is not part of the page's main content and thus LCP may not be relevant.
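In a case like that, the Element Timing API (one of the lower-level APIs listed below, currently implemented only in Chromium-based browsers) can time the specific element you care about instead. A minimal sketch, assuming a hero image annotated with a hypothetical `elementtiming="hero-image"` attribute:

```ts
// Markup assumed elsewhere on the page:
// <img elementtiming="hero-image" src="hero.jpg" alt="...">

const po = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Element Timing entries aren't in the default TS lib, so widen the type.
    const timing = entry as PerformanceEntry & {
      identifier: string;
      renderTime: number;
      loadTime: number;
    };
    if (timing.identifier === 'hero-image') {
      // renderTime can be 0 for cross-origin images served without
      // a Timing-Allow-Origin header; fall back to loadTime.
      console.log('Hero image rendered at:', timing.renderTime || timing.loadTime);
    }
  }
});

po.observe({type: 'element', buffered: true});
```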
To address such cases, the Web Performance Working Group has also standardized lower-level APIs that can be useful for implementing your own custom metrics:

- User Timing API
- Long Tasks API
- Element Timing API
- Navigation Timing API
- Resource Timing API
- Server Timing
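For example, the User Timing API lets you mark and measure arbitrary moments in your application. A minimal sketch, where the product-list names and the `/api/products` endpoint are hypothetical:

```ts
// Hypothetical rendering function; stubbed here for illustration.
function renderProductList(products: unknown): void {
  console.log('rendering', products);
}

async function loadProducts(): Promise<void> {
  // Mark the start of an app-specific task...
  performance.mark('products:start');

  const response = await fetch('/api/products'); // placeholder endpoint
  const products = await response.json();
  renderProductList(products);

  // ...mark the end, then record the span between the two marks.
  performance.mark('products:end');
  performance.measure('products', 'products:start', 'products:end');

  const [measure] = performance.getEntriesByName('products', 'measure');
  console.log(`Products loaded and rendered in ${measure.duration} ms`);
}
```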
Refer to the guide on Custom Metrics to learn how to use these APIs to measure performance characteristics specific to your site.