The Lighthouse 6 performance score is a weighted average of the following metric scores:

Audit | Weight |
---|---|
First Contentful Paint | 15% |
Speed Index | 15% |
Largest Contentful Paint | 25% |
Time to Interactive | 15% |
Total Blocking Time | 25% |
Cumulative Layout Shift | 5% |
For comparison, Lighthouse 5 weighted its metrics as follows:

Audit | Weight |
---|---|
First Contentful Paint | 20% |
Speed Index | 27% |
First Meaningful Paint | 7% |
Time to Interactive | 33% |
First CPU Idle | 13% |
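To make the weighting concrete, here's a small sketch (not Lighthouse's actual source) of how individual metric scores combine into an overall performance score using the Lighthouse 6 weights from the table above. The audit IDs and the metric scores in the usage example are illustrative.

```ts
// Sketch: combine individual metric scores (0–1) into an overall performance
// score using the Lighthouse 6 weights. Illustrative only, not Lighthouse's code.
const LIGHTHOUSE_6_WEIGHTS: Record<string, number> = {
  'first-contentful-paint': 0.15,
  'speed-index': 0.15,
  'largest-contentful-paint': 0.25,
  'interactive': 0.15, // Time to Interactive
  'total-blocking-time': 0.25,
  'cumulative-layout-shift': 0.05,
};

// metricScores: each metric's score on a 0–1 scale.
function overallPerformanceScore(metricScores: Record<string, number>): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [id, weight] of Object.entries(LIGHTHOUSE_6_WEIGHTS)) {
    weightedSum += (metricScores[id] ?? 0) * weight;
    totalWeight += weight;
  }
  // Scale to 0–100, as shown in the report.
  return Math.round((weightedSum / totalWeight) * 100);
}

// Hypothetical metric scores for a page:
console.log(
  overallPerformanceScore({
    'first-contentful-paint': 0.98,
    'speed-index': 0.95,
    'largest-contentful-paint': 0.9,
    'interactive': 0.85,
    'total-blocking-time': 0.8,
    'cumulative-layout-shift': 1.0,
  })
); // → 89
```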
Once Lighthouse is done gathering the performance metrics (mostly reported in milliseconds), it converts each raw metric value into a metric score from 0 to 100 by looking at where the metric value falls on its Lighthouse scoring distribution. The scoring distribution is a log-normal distribution derived from the performance metrics of real websites on HTTP Archive.
For example, Largest Contentful Paint (LCP) measures when a user perceives that the largest content of a page is visible. The metric value for LCP represents the time duration between the user initiating the page load and the page rendering its primary content. Based on real website data, top-performing sites render LCP in about 1,220ms, so that metric value is mapped to a score of 99.
Going a bit deeper, the Lighthouse scoring curve model uses HTTP Archive data to determine two control points that then set the shape of a log-normal curve. The 25th percentile of HTTP Archive data becomes a score of 50 (the median control point), and the 8th percentile becomes a score of 90 (the good/green control point). While exploring the scoring curve plot below, note that between scores of 0.50 and 0.92 there is a near-linear relationship between metric value and score. Around a score of 0.96 is the "point of diminishing returns": above it, the curve pulls away, requiring increasingly larger metric improvements to improve an already high score.
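As an illustration of that idea (not Lighthouse's actual implementation), the sketch below builds a log-normal value-to-score mapping from two control points: a value that should score 0.5 and a better (lower) value that should score 0.9. The control points in the usage example are made up and are not real Lighthouse thresholds.

```ts
// Illustrative sketch of a log-normal scoring curve: maps a raw metric value
// (e.g. milliseconds) to a 0–1 score, given two control points.
//   median: the value that should score 0.5
//   p90:    the (lower) value that should score 0.9

// erfc⁻¹(0.2): constant used to pin the 0.9 control point on the curve.
const INV_ERFC_ONE_FIFTH = 0.9061938024368232;

// Complementary error function, Abramowitz & Stegun approximation 7.1.26.
function erfc(x: number): number {
  const z = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * z);
  const poly =
    t *
    (0.254829592 +
      t * (-0.284496736 + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const result = poly * Math.exp(-z * z); // erfc(z) for z >= 0
  return x >= 0 ? result : 2 - result;
}

function logNormalScore(value: number, median: number, p90: number): number {
  // Location of the underlying normal distribution in log space.
  const mu = Math.log(median);
  // Spread chosen so that `p90` lands exactly on a score of 0.9.
  const sigmaSqrt2 = (mu - Math.log(p90)) / INV_ERFC_ONE_FIFTH;
  // The score is the complementary CDF of the log-normal at `value`:
  // lower metric values (faster pages) get higher scores.
  const score = 0.5 * erfc((Math.log(value) - mu) / sigmaSqrt2);
  return Math.min(1, Math.max(0, score));
}

// Made-up control points: a metric whose median is 4000 ms and whose
// "good" (0.9) threshold is 2000 ms.
console.log(logNormalScore(4000, 4000, 2000).toFixed(2)); // "0.50"
console.log(logNormalScore(2000, 4000, 2000).toFixed(2)); // "0.90"
console.log(logNormalScore(1000, 4000, 2000).toFixed(2)); // "0.99"
```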
As mentioned above, the score curves are determined from real performance data. Prior to Lighthouse v6, all score curves were based on mobile performance data; however, a desktop Lighthouse run would use them anyway. In practice, this led to artificially inflated desktop scores. Lighthouse v6 fixed this bug by using specific desktop scoring. While you can certainly expect overall changes in your perf score when moving from Lighthouse 5 to 6, any desktop scores will be significantly different.
The metric scores and the perf score are colored according to these ranges:

- 0 to 49 (red): Poor
- 50 to 89 (orange): Needs Improvement
- 90 to 100 (green): Good
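Assuming those ranges, bucketing a score into its color band is a simple threshold check, sketched here:

```ts
// Sketch: map a 0–100 score to its color band, using the ranges listed above.
type ScoreBand = 'red' | 'orange' | 'green';

function scoreBand(score: number): ScoreBand {
  if (score >= 90) return 'green';  // 90–100: Good
  if (score >= 50) return 'orange'; // 50–89: Needs Improvement
  return 'red';                     // 0–49: Poor
}

console.log(scoreBand(92)); // "green"
console.log(scoreBand(67)); // "orange"
console.log(scoreBand(30)); // "red"
```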
To provide a good user experience, sites should strive to have a good score (90-100). A "perfect" score of 100 is extremely challenging to achieve and not expected. For example, taking a score from 99 to 100 requires about the same amount of metric improvement as taking a score from 90 to 94.
First, use the Lighthouse scoring calculator to help understand what thresholds you should be aiming for to achieve a certain Lighthouse performance score.
In the Lighthouse report, the Opportunities section has detailed suggestions and documentation on how to implement them. Additionally, the Diagnostics section lists further guidance that developers can explore to improve their performance.