Chrome usage data shows that 90% of a user's time on a page is spent after it loads. Thus, careful measurement of responsiveness throughout the page lifecycle is important. This is what the INP metric assesses.
Good responsiveness means that a page responds quickly to interactions made with it. When a page responds to an interaction, the result is visual feedback, which the browser presents in the next frame it paints. Visual feedback tells you whether, for example, an item you add to an online shopping cart is indeed being added, whether a mobile navigation menu has opened, whether a login form's contents are being authenticated by the server, and so forth.
Some interactions will naturally take longer than others, but for especially complex interactions, it's important to quickly present some initial visual feedback as a cue to the user that something is happening. The time until the next paint is the earliest opportunity to do this. Therefore, the intent of INP is not to measure all the eventual effects of the interaction (such as network fetches and UI updates from other asynchronous operations), but the time in which the next paint is being blocked. By delaying visual feedback, you may be giving users the impression that the page is not responding to their actions.
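As a sketch of that pattern, the handler below makes the cheap, paint-visible UI change first and defers the expensive work until after the next frame has been presented. The element ID, endpoint, and flow are hypothetical, for illustration only:

```ts
// Hypothetical button and endpoint, for illustration only.
const addButton = document.querySelector<HTMLButtonElement>('#add-to-cart')!;

addButton.addEventListener('click', () => {
  // Cheap UI update first: this is the visual feedback the next frame paints.
  addButton.disabled = true;
  addButton.textContent = 'Adding…';

  // requestAnimationFrame fires just before the next paint; a timeout
  // scheduled inside it runs after that paint, so the heavy work no
  // longer blocks the frame that shows the feedback.
  requestAnimationFrame(() => {
    setTimeout(async () => {
      await fetch('/api/cart', {method: 'POST'}); // hypothetical endpoint
      addButton.textContent = 'Added to cart';
      addButton.disabled = false;
    }, 0);
  });
});
```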
The goal of INP is to ensure the time from when a user initiates an interaction until the next frame is painted is as short as possible, for all or most interactions the user makes.
In the following video, the example on the right gives immediate visual feedback that an accordion is opening. It also demonstrates how poor responsiveness can cause multiple unintended responses to input because the user thinks the experience is broken.
This article explains how INP works, how to measure it, and points to resources for improving it.
What is INP?
INP is a metric that assesses a page's overall responsiveness to user interactions by observing the latency of all click, tap, and keyboard interactions that occur throughout the lifespan of a user's visit to a page. The final INP value is the longest interaction observed, ignoring outliers.
As stated above, INP is calculated by observing all the interactions made with a page. For most sites, the interaction with the worst latency is reported as INP. However, on pages with large numbers of interactions, random hiccups can produce one unusually high-latency interaction on an otherwise responsive site, and the more interactions there are, the more likely this is to happen. To counter this, and to give a better measure of actual responsiveness for those types of pages, one highest-latency interaction is ignored for every 50 interactions. The vast majority of page experiences do not have over 50 interactions, so the worst interaction is reported. The 75th percentile of all page views is then reported as usual, which further removes outliers to give a value that the vast majority of users experienced, or better.
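A minimal sketch of that rule, assuming a simple list of interaction latencies for a single page view (the function name and shape are illustrative, not Chrome's actual implementation):

```ts
// A sketch (not Chrome's implementation) of deriving INP for one page
// view: ignore one highest-latency interaction for every 50 interactions,
// then report the worst remaining interaction.
function estimateINP(latencies: number[]): number | undefined {
  if (latencies.length === 0) {
    return undefined; // no qualifying interactions, so no INP is reported
  }
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersToIgnore = Math.floor(latencies.length / 50);
  return sorted[outliersToIgnore];
}

console.log(estimateINP([120, 90, 800])); // 800: the worst of 3 interactions
// With 61 interactions, the single 900 ms outlier is skipped: logs 100.
console.log(estimateINP([...Array(60).fill(100), 900]));
```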
An interaction is a group of event handlers that fire during the same logical user gesture. For example, "tap" interactions on a touchscreen device include multiple events, such as pointerup, pointerdown, and click.
An interaction's latency consists of the single longest duration of a group of event handlers that drives the interaction, from the time the user begins the interaction to the moment the next frame is presented with visual feedback.
What is a good INP score?
Pinning labels such as "good" or "poor" on a responsiveness metric is difficult. On one hand, you want to encourage development practices that prioritize good responsiveness. On the other hand, to set achievable development expectations, you must account for the considerable variability in the capabilities of the devices people use.
To ensure you're delivering user experiences with good responsiveness, a good threshold to measure is the 75th percentile of page loads recorded in the field, segmented across mobile and desktop devices:
- An INP below or at 200 milliseconds means that your page has good responsiveness.
- An INP above 200 milliseconds and below or at 500 milliseconds means that your page's responsiveness needs improvement.
- An INP above 500 milliseconds means that your page has poor responsiveness.
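As a sketch, a helper that maps an INP value onto these thresholds might look like this (the function and type names are illustrative):

```ts
type INPRating = 'good' | 'needs-improvement' | 'poor';

// Classify an INP value in milliseconds per the thresholds above.
function rateINP(inp: number): INPRating {
  if (inp <= 200) return 'good';
  if (inp <= 500) return 'needs-improvement';
  return 'poor';
}

console.log(rateINP(180)); // 'good'
console.log(rateINP(350)); // 'needs-improvement'
console.log(rateINP(600)); // 'poor'
```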
What's in an interaction?
As far as INP goes, only the following interaction types are observed:
- Clicking with a mouse.
- Tapping on a device with a touchscreen.
- Pressing a key on either a physical or onscreen keyboard.
Interactions may consist of two parts, each with multiple events. For example, a keystroke consists of the keydown and keyup events, and tap interactions contain pointerup and pointerdown events. The event with the longest duration within the interaction is chosen as the interaction's latency.
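These per-event durations can be observed directly with the Event Timing API, which underpins INP in Chromium browsers. A minimal sketch (the 16-millisecond threshold is an illustrative choice):

```ts
// Each entry is a single event (pointerdown, keyup, click, and so on);
// the longest entry within an interaction drives that interaction's latency.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    console.log(entry.name, Math.round(entry.duration), 'ms');
  }
});

observer.observe({
  type: 'event',
  buffered: true,        // include events that fired before observing began
  durationThreshold: 16, // only report events at least this long (ms)
} as PerformanceObserverInit);
```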
INP is calculated when the user leaves the page, resulting in a single value that is representative of the page's overall responsiveness throughout its entire lifecycle. A low INP means that a page is reliably responsive to user input.
How is INP different from First Input Delay (FID)?
Where INP considers all page interactions, First Input Delay (FID) only accounts for the first interaction. It also only measures the first interaction's input delay, not the time it takes to run event handlers, or the delay in presenting the next frame.
FID is also a load responsiveness metric; the rationale behind it is that if the first interaction made with a page in the loading phase has little to no perceptible input delay, the page has made a good first impression.
INP is about more than first impressions. By sampling all interactions, responsiveness can be assessed comprehensively, making INP a more reliable indicator of overall responsiveness than FID.
What if no INP value is reported?
It's possible that a page can return no INP value. This can happen for a number of reasons:
- The page was loaded, but the user never clicked, tapped, or pressed a key on their keyboard.
- The page was loaded, but the user interacted with the page using gestures that did not involve clicking, tapping, or using the keyboard. For example, scrolling or hovering over elements does not factor into how INP is calculated.
- The page is being accessed by a bot such as a search crawler or headless browser that has not been scripted to interact with the page.
How to measure INP
In the field
Ideally, your journey in optimizing INP will start with field data. At its best, field data from Real User Monitoring (RUM) will give you not only a page's INP value, but also contextual data that highlights what specific interaction was responsible for the INP value itself, whether the interaction occurred during or after page load, the type of interaction (click, keypress, or tap), and other valuable information.
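One way to collect this kind of field data is the open-source web-vitals JavaScript library. A minimal sketch, assuming the library is installed (npm install web-vitals) and that you would forward the result to your own analytics endpoint:

```ts
// The attribution build adds context about which interaction was
// responsible for the INP value.
import {onINP} from 'web-vitals/attribution';

onINP(({value, rating, attribution}) => {
  // value is the INP in milliseconds; rating is 'good',
  // 'needs-improvement', or 'poor' per the thresholds above.
  console.log('INP:', value, rating);
  // attribution identifies the responsible interaction and its timing
  // breakdown; in practice you'd send this to an analytics endpoint.
  console.log(attribution);
});
```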
If your website qualifies for inclusion in the Chrome User Experience Report (CrUX), you can quickly get field data for INP (and the other Core Web Vitals) via CrUX in PageSpeed Insights. At a minimum, you get an origin-level picture of your website's INP, but in some cases you can get page-level data as well.
However, while CrUX is useful to tell you that there is a problem at a high level, it often doesn't provide enough detail to help fully understand what the problem is. A RUM solution can help you drill down into which pages, users, or user interactions are experiencing slow responsiveness. Being able to attribute INP to individual interactions avoids guesswork and wasted effort.
In the lab
Optimally, you'll want to start testing in the lab once you have field data that suggests you have slow interactions. In the absence of field data, however, there are some strategies for reproducing slow interactions in the lab. Such strategies include following common user flows and testing interactions along the way, as well as interacting with the page during load—when the main thread is often busiest—in order to surface slow interactions during that crucial part of the user experience.
How to improve INP
A collection of guides on optimizing INP is available to guide you through the process of identifying slow interactions in the field and using lab data to drill down and optimize them in a variety of ways.
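One widely recommended technique is breaking up long event handlers by yielding to the main thread, so the browser can present the next frame sooner. A sketch of that pattern, with hypothetical updateUI and saveToDatabase placeholders:

```ts
// Yield to the main thread by deferring continuation to a new task.
function yieldToMain(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function handleClick(): Promise<void> {
  updateUI();          // paint-critical work runs before yielding
  await yieldToMain(); // lets the browser paint the visual feedback
  saveToDatabase();    // non-urgent work resumes in a later task
}

function updateUI(): void {
  /* hypothetical: apply the DOM changes the user needs to see */
}

function saveToDatabase(): void {
  /* hypothetical: expensive follow-up work */
}
```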
Occasionally, bugs are discovered in the APIs used to measure metrics, and sometimes in the definitions of the metrics themselves. As a result, changes must sometimes be made, and these changes can show up as improvements or regressions in your internal reports and dashboards.
To help you manage this, all changes to either the implementation or definition of these metrics will be surfaced in this Changelog.
If you have feedback for these metrics, you can provide it in the web-vitals-feedback Google group.