Maybe you’re here because you’re a die-hard fan of performance metrics. Or maybe you don’t know what Lighthouse is and are too afraid to ask.
Either is an excellent option. Welcome!
For this merry adventure into demystifying developer documentation, I’ve recruited Technical SEO and Google Data Studio nerd Rachel Anderson.
Together, we’re hoping to take your performance improvement efforts from “make all the numbers green” to some clear and meaningful action items.
We’re going to look at what Lighthouse is, how to run it, and what each of its performance metrics means (and how to improve it).
What Is Lighthouse?
Lighthouse is an open-source auditing tool that provides standardized scores across five areas: Performance, Accessibility, Best Practices, SEO, and Progressive Web App (PWA).
For the purposes of this article, we’re going to use the name Lighthouse to refer to the series of tests defined in the shared GitHub repo, regardless of the execution method.
Lighthouse runs performance tests using emulated data, also known as lab data.
This is performance data collected within a controlled environment with predefined device and network settings.
Lab data is helpful for debugging performance issues, but it doesn’t mean that the experience on your local machine in a controlled environment represents the experiences of real humans in the wild.
Updates to Lighthouse: Core Web Vitals
On May 5, 2020, the Chromium project announced a set of three metrics with which the Google-backed open-source browser would measure performance.
The metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.
The goal of these metrics is to measure web performance in a user-centric manner.
Within two weeks, Lighthouse v6 rolled out with a modified version of the Core Web Vitals at the heart of the update.
July 2020 saw Lighthouse v6’s unified metrics adopted across Google products with the release of Chrome 84.
The Chrome DevTools Audits panel was renamed to Lighthouse. PageSpeed Insights and Google Search Console also reference these unified metrics.
Core Web Vitals make up 55% of Lighthouse’s weighted performance score. This change in focus sets new, more refined goals.
Overall, most pages saw minimal impact, with 83.32% of tested pages changing by ten points or less in the move to v6.
Version 7 is currently out on GitHub and slated for large-scale rollout with the stable Chrome 89 release in March 2021.
How to Test Performance Using Lighthouse
Methodology Matters
Out of the box, Lighthouse audits a single page at a time.
A single page score doesn’t represent your site, and a fast homepage doesn’t mean a fast site.
Test multiple page types within your site.
Identify your major page types, templates, and goal conversion points (signup, subscribe, and checkout pages).
Example Page Testing Inventory
URL | Page Type |
/ | Homepage |
/tools | Category Template |
/tools/screwdrivers | Product Listing Page Template |
/acme/deluxe-anvil | Product Detail Page Template |
/cart | Cart |
/checkout | Checkout |
/order-confirmation | Order confirmation |
/blog | Blog Root |
/blog/roadrunners-101 | Blog Template |
Before you begin optimizing, run Lighthouse on each of your sample pages and save the report data.
Record your scores and the to-do list of improvements.
To prevent data loss, save the JSON results; you can load them into the Lighthouse Viewer later whenever you need the detailed result information.
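If you want to pull the headline numbers out of those saved reports without opening each one in the Viewer, a short Node script can do it. Below is a minimal sketch in TypeScript; it assumes you’ve saved the raw Lighthouse result (LHR) JSON files into a ./reports folder and that you’re on the standard v6/v7 audit IDs.

```typescript
// extract-scores.ts - a rough sketch, not an official Lighthouse tool.
// Assumes ./reports/ holds the JSON (LHR) files saved from your Lighthouse runs.
import { readdirSync, readFileSync } from "fs";
import { join } from "path";

const REPORTS_DIR = "./reports"; // hypothetical folder of saved reports
const METRIC_IDS = [
  "largest-contentful-paint",
  "total-blocking-time",
  "first-contentful-paint",
  "speed-index",
  "interactive",
  "cumulative-layout-shift",
];

for (const file of readdirSync(REPORTS_DIR).filter((f) => f.endsWith(".json"))) {
  const lhr = JSON.parse(readFileSync(join(REPORTS_DIR, file), "utf8"));

  // Category scores are stored as 0-1 in the LHR; multiply by 100 for the familiar number.
  const perfScore = Math.round((lhr.categories?.performance?.score ?? 0) * 100);
  const metrics = METRIC_IDS
    .map((id) => `${id}: ${lhr.audits?.[id]?.displayValue ?? "n/a"}`)
    .join(" | ");

  console.log(`${lhr.finalUrl} -> performance ${perfScore} | ${metrics}`);
}
```

Drop the output into your tracking spreadsheet so you have a dated baseline to compare against after each fix ships.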
Get Your Backlog to Bite Back Using ROI
Getting development resources to act on SEO recommendations is hard.
An in-house SEO could destroy their pancreas by having a birthday cake for every backlogged ticket’s birthday. Or at least learn to hate cake.
In my experience as an in-house enterprise SEO, the trick to getting performance initiatives prioritized is having the numbers to back the investment.
This starting data will become dollar signs that serve to justify and reward development efforts.
Chances are you’re going to have more than one area flagged during tests. That’s okay!
If you’re wondering which changes will have the most bang for the buck, check out the Lighthouse Scoring Calculator.
How to Run Lighthouse Tests
This is a case of many roads leading to Oz. Sure, some scarecrow might be particularly loud about a certain shade of brick, but it’s about your goals.
Looking to integrate SEO tests into the release process? Time to learn some NPM.
Have less than five minutes to prep for a prospective client meeting? A couple of one-off reports should do the trick.
Whichever way you execute, default to mobile unless you have a specific use case for desktop.
For One-Off Reports: Chrome DevTools
Test one page at a time with the Lighthouse panel in Chrome DevTools. Because the report emulates a user’s experience through your browser, use an incognito window with all extensions and the browser’s cache disabled.
Pro tip: Create a Chrome profile for testing. Keep it local (no sync, password saving, or association with an existing Google account) and don’t install any extensions for that user.
How to Run a Lighthouse Test Using Chrome DevTools
Pros of Running Lighthouse From DevTools
Cons of Running Lighthouse From DevTools
For Testing the Same Pages Frequently: web.dev
It’s just like DevTools, but you don’t have to remember to disable all those pesky extensions!
Pros of Running Lighthouse From web.dev
Cons of Running Lighthouse From web.dev
For Testing at Scale (and Sanity): Node Command Line
Pros of Running Lighthouse From Node
Cons of Running Lighthouse From Node
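For reference, here’s roughly what a Node-based batch run over the earlier page testing inventory looks like. Treat it as a sketch rather than a drop-in script: it assumes the lighthouse and chrome-launcher npm packages, and the option names reflect the Node API as documented around v6/v7.

```typescript
// run-audits.ts - a sketch of batch-testing the page inventory with the Lighthouse Node module.
// npm install lighthouse chrome-launcher (run via ts-node or compile first).
import * as fs from "fs";
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

const ORIGIN = "https://www.example.com"; // replace with your site
const PAGES = ["/", "/tools", "/tools/screwdrivers", "/cart", "/checkout", "/blog"];

async function run() {
  // One headless Chrome instance, reused across every page in the inventory.
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  fs.mkdirSync("reports", { recursive: true });

  for (const path of PAGES) {
    const result = await lighthouse(ORIGIN + path, {
      port: chrome.port,
      output: "json",
      onlyCategories: ["performance"],
    });
    if (!result) continue;

    const score = Math.round((result.lhr.categories.performance.score ?? 0) * 100);
    console.log(`${path} -> performance ${score}`);

    // Save the full report so it can be reloaded in Lighthouse Viewer later.
    const slug = path === "/" ? "home" : path.slice(1).replace(/\//g, "_");
    fs.writeFileSync(`reports/${slug}.json`, result.report as string);
  }

  await chrome.kill();
}

run().catch(console.error);
```

Because the Chrome instance is reused and every report is written to disk, this approach scales to dozens of URLs and slots neatly into a CI job.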
Lighthouse Performance Metrics Explained
In versions 6 and 7, Lighthouse’s performance score is made up of six metrics, each contributing a percentage of the total performance score.
Metric Name | Weight |
Largest Contentful Paint (LCP) | 25% |
Total Blocking Time (TBT) | 25% |
First Contentful Paint (FCP) | 15% |
Speed Index (SI) | 15% |
Time To Interactive (TTI) | 15% |
Cumulative Layout Shift (CLS) | 5% |
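The overall performance number is simply a weighted average of those six metric scores; each individual metric score, in turn, comes from comparing your raw value against HTTP Archive data (which is what the Lighthouse Scoring Calculator mentioned earlier models). Here’s a toy illustration of the weighting step, using made-up metric scores:

```typescript
// A toy illustration of how the six metric scores roll up into one performance score.
// The per-metric scores below are made up; Lighthouse derives the real ones from
// log-normal curves fit to HTTP Archive data before applying these v6/v7 weights.
const weights: Record<string, number> = {
  "largest-contentful-paint": 0.25,
  "total-blocking-time": 0.25,
  "first-contentful-paint": 0.15,
  "speed-index": 0.15,
  "interactive": 0.15,
  "cumulative-layout-shift": 0.05,
};

const metricScores: Record<string, number> = {
  "largest-contentful-paint": 80, // hypothetical 0-100 scores for illustration only
  "total-blocking-time": 55,
  "first-contentful-paint": 90,
  "speed-index": 85,
  "interactive": 70,
  "cumulative-layout-shift": 95,
};

const performanceScore = Object.keys(weights).reduce(
  (total, metric) => total + weights[metric] * metricScores[metric],
  0
);

console.log(Math.round(performanceScore)); // ~75 with the sample numbers above
```

This is why a weak LCP or TBT drags the overall score down far harder than a weak CLS does in v6 and v7.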
Largest Contentful Paint (LCP)
What it represents: A user’s perception of loading experience.
Lighthouse Performance score weighting: 25%
What it measures: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.
How it’s measured: Lighthouse extracts LCP data from Chrome’s tracing tool.
Is Largest Contentful Paint a Core Web Vital? Yes!
LCP Scoring
Goal: achieve LCP in < 2.5 seconds.
LCP time (in milliseconds) | Color-coding |
0–2,500 | Green (fast) |
2,501–4,000 | Orange (moderate) |
Over 4,000 | Red (slow) |
What Elements Can Be Part of LCP?
The candidates are <img> elements, <image> elements inside an <svg>, <video> poster images, elements with a background image loaded via url(), and block-level elements containing text nodes.
What Counts as LCP on Your Page?
It depends! LCP typically varies by page template.
This means you can measure a handful of pages that share a template and define the LCP element for that template.
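If you’d rather not dig through a trace, you can also ask the browser directly which element it considers the LCP candidate. Here’s a rough console snippet using the standard largest-contentful-paint entry type; the element property is typed loosely because it isn’t in the default TypeScript DOM typings:

```typescript
// Log every LCP candidate the browser reports; the last one logged is the final LCP element.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // entry.element is the DOM node Chrome picked as the largest contentful element so far.
    console.log("LCP candidate at", Math.round(entry.startTime), "ms:", (entry as any).element);
  }
});

// buffered: true replays candidates that fired before this snippet ran.
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```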
How to Define LCP Using Chrome DevTools
What Causes Poor LCP?
Poor LCP typically comes from four issues: slow server response times, render-blocking JavaScript and CSS, slow resource load times, and client-side rendering.
How to Fix Poor LCP
If the cause is slow server response time:
If the cause is render-blocking JavaScript and CSS:
If the cause is resource load times:
If the cause is client-side rendering:
Total Blocking Time (TBT)
What it represents: Responsiveness to user input.
Lighthouse Performance score weighting: 25%
What it measures: TBT measures the time between First Contentful Paint and Time to Interactive. TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and Google’s upcoming Page Experience ranking signal.
How it’s measured: The sum of the “blocking” portion of every main-thread task that takes more than 50ms to complete. If a task takes 80ms to run, 30ms of that time is counted toward TBT. If a task takes 45ms to run, 0ms is added to TBT.
Is Total Blocking Time a Core Web Vital? Yes! (In the lab, it stands in for First Input Delay.)
TBT Scoring
Goal: achieve a TBT of less than 300 milliseconds.
TBT time (in milliseconds) | Color-coding |
0–300 | Green (fast) |
301–600 | Orange (moderate) |
Over 600 | Red (slow) |
First Input Delay, the field data equivalent to TBT, has different thresholds.
FID time (in milliseconds) | Color-coding |
0–100 | Green (fast) |
101–300 | Orange (moderate) |
Over 300 | Red (slow) |
TBT measures long tasks—those taking longer than 50ms.
When a browser loads your site, there is essentially a single-file queue of scripts waiting to be executed.
Any input from the user has to go into that same queue.
When the browser can’t respond to user input because other tasks are executing, the user perceives this as lag.
Essentially, long tasks are like that person at your favorite coffee shop who takes far too long to order a drink.
Like someone ordering a 2% venti four-pump vanilla, five-pump mocha whole-fat froth, long tasks are a major source of bad experiences.
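You can watch those long tasks pile up yourself with the Long Tasks API. The sketch below sums everything over the 50ms threshold, which approximates how TBT is counted; the real metric only looks at the window between FCP and TTI, so treat this as a rough proxy:

```typescript
// Rough long-task tally: every millisecond past 50ms on a task counts as "blocking."
let blockingTime = 0;

const longTaskObserver = new PerformanceObserver((entryList) => {
  for (const task of entryList.getEntries()) {
    // An 80ms task adds 30ms; a 45ms task never shows up here at all.
    blockingTime += Math.max(0, task.duration - 50);
    console.log(
      `Long task: ${Math.round(task.duration)}ms (blocking total: ${Math.round(blockingTime)}ms)`
    );
  }
});

longTaskObserver.observe({ type: "longtask", buffered: true });
```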
What Causes a High TBT on Your Page?
Excessive JavaScript loading, parsing, and execution on the main thread. That’s it.
How to See TBT Using Chrome DevTools
How to Fix Poor TBT
The usual fixes: ship less JavaScript, break long tasks into smaller chunks, and defer or remove third-party scripts that compete for the main thread.
First Contentful Paint (FCP)
What it represents: FCP marks the time at which the first text or image is painted (visible).
Lighthouse Performance score weighting: 15%
What it measures: The time at which I can see that the page I requested is responding. In other words, my thumb can stop hovering over the back button.
How it’s measured: Your FCP score in Lighthouse is measured by comparing your page’s FCP to FCP times for real website data stored in the HTTP Archive. Your FCP score increases if your page is faster than other pages in the HTTP Archive.
Is First Contentful Paint a Core Web Vital? No.
Goal: achieve FCP in < 2 seconds.
What Elements Can Be Part of FCP?
The time it takes to render the first visible element in the DOM is the FCP. Everything that happens before an element renders non-white content to the page (excluding iframes) counts toward FCP.
Since iframes aren’t considered part of FCP, if an iframe is the first content to render, the FCP clock keeps running until the first non-iframe content loads, but the iframe’s load time isn’t counted toward FCP.
The documentation around FCP also calls out that it is often impacted by font load time, and it offers tips for improving font loads.
How to Define FCP Using Chrome DevTools
How to Improve FCP
Before the browser can display or render any content to a user’s screen, it must first download, parse, and process every external stylesheet it encounters.
The fastest way to bypass the delay of external resources is to use in-line styles for above-the-fold content.
To keep your site sustainably scalable, use an automated tool like penthouse or Apache’s mod_pagespeed. These solutions come with some restrictions on functionality, require testing, and may not be for everyone.
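For reference, penthouse’s Node API looks roughly like the sketch below. The url and cssString option names are assumptions based on its documentation, so double-check them against the current README before relying on this:

```typescript
// critical-css.ts - a rough sketch of generating critical CSS with penthouse.
// npm install penthouse; option names are assumed from its documentation.
import * as fs from "fs";
import penthouse from "penthouse";

async function generateCriticalCss() {
  // Hypothetical path to your full compiled stylesheet.
  const fullCss = fs.readFileSync("./dist/styles.css", "utf8");

  // penthouse loads the page, works out which rules apply above the fold,
  // and resolves with just that subset of CSS.
  const criticalCss = await penthouse({
    url: "https://www.example.com/",
    cssString: fullCss,
  });

  // Inline this in a <style> tag in the <head> and load the full stylesheet asynchronously.
  fs.writeFileSync("./dist/critical.css", criticalCss);
}

generateCriticalCss().catch(console.error);
```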
Universally, we can all improve our site’s time to First Contentful Paint by reducing the scope and complexity of style calculations.
If a style isn’t being used, remove it. You can identify unused CSS with Chrome Dev Tool’s built-in Code Coverage functionality.
Use better data to make better decisions.
Similar to TTI, you can capture real user metrics for FCP using Google Analytics to correlate improvements with KPIs.
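Here’s a rough sketch of that field measurement: listen for the first-contentful-paint paint entry and forward it to analytics. The gtag call assumes a standard gtag.js Google Analytics setup; swap in whatever event API your property actually uses.

```typescript
// Capture FCP from real users and forward it to analytics as an event.
declare const gtag: (...args: any[]) => void; // assumes gtag.js is already on the page

const paintObserver = new PerformanceObserver((entryList) => {
  const fcpEntry = entryList.getEntriesByName("first-contentful-paint")[0];
  if (!fcpEntry) return;

  gtag("event", "first_contentful_paint", {
    event_category: "Web Vitals",
    value: Math.round(fcpEntry.startTime), // milliseconds
    non_interaction: true, // don't let the event affect bounce rate
  });

  paintObserver.disconnect();
});

paintObserver.observe({ type: "paint", buffered: true });
```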
Speed Index (SI)
What it represents: How much of the page is visible at a time during load.
Lighthouse Performance score weighting: 15%
What it measures: The Speed Index is the average time at which visible parts of the page are displayed.
How it’s measured: Lighthouse’s Speed Index measurement comes from a node module called Speedline.
You’ll have to ask the kindly wizards at webpagetest.org for the specifics, but roughly: Speedline compares frames of the page as it loads, uses an algorithm to calculate the visual completeness of each frame, and adjusts for the size of the viewport (read: device screen).
Is Speed Index a Web Core Vital? No
Goal: achieve SI in < 4.3 seconds.
SI time (in seconds) | Color-coding | SI score (HTTP Archive percentile) |
0–4.3 | Green (fast) | 75–100 |
4.4–5.8 | Orange (moderate) | 50–74 |
Over 5.8 | Red (slow) | 0–49 |
How to Improve SI
Your Speed Index score reflects your site’s Critical Rendering Path. A “critical” resource is one that is required for the first paint or is crucial to the page’s core functionality.
The longer and denser the path, the slower your site will be to provide a visual page. If your path is optimized, you’ll give users content faster and score higher on Speed Index.
Time to Interactive (TTI)
What it represents: Load responsiveness; identifying where a page looks responsive but isn’t yet.
Lighthouse Performance score weighting: 15%
What it measures: The time from when the page begins loading to when its main resources have loaded and are able to respond to user input.
How it’s measured: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when it displays useful content (measured by First Contentful Paint), event handlers are registered for most visible page elements, and the page responds to user interactions within 50 milliseconds.
Is Time to Interactive a Core Web Vital? No.
TTI Scoring
Goal: achieve a TTI of less than 3.8 seconds.
TTI time (in seconds) | Color-coding |
0–3.8 | Green (fast) |
3.8–7.3 | Orange (moderate) |
Over 7.3 | Red (poor) |
Cumulative Layout Shift (CLS)
What it represents: A user’s perception of a page’s visual stability.
Lighthouse Performance score weighting: 5%*
* Expect CLS’s weighting to increase as the Lighthouse team works the bugs out. Smart bet says Q4 2021.
What it measures: How much page elements shift unexpectedly through the end of page load.
How it’s measured: Unlike the other metrics, CLS isn’t measured in time. Instead, it’s a calculated score: each unexpected layout shift is scored by combining how much of the viewport the shift affects (impact fraction) with how far the elements moved (distance fraction), and those shift scores are summed across the page load.
CLS Scoring
Goal: achieve a CLS score of less than 0.1.
CLS Score | Color-coding |
0–0.1 | Green (good) |
0.1-0.25 | Orange (needs improvement) |
0.25+ | Red (poor) |
What Elements Can Be Part of CLS?
Any visual element that appears above the fold at some point in the load.
That’s right – if you’re loading your footer first and then the hero content of the page, your CLS is going to hurt.
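To catch the offenders in the act, you can log every unexpected layout shift and the elements responsible. Here’s a rough console snippet; the layout-shift entry fields are Chrome-specific, so they’re typed loosely:

```typescript
// Running CLS tally: sum layout shifts that weren't triggered by recent user input.
let clsScore = 0;

const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as any[]) {
    if (entry.hadRecentInput) continue; // shifts right after user input don't count
    clsScore += entry.value;

    // entry.sources points at the DOM nodes that actually moved.
    console.log(
      `Shift of ${entry.value.toFixed(4)} (running total: ${clsScore.toFixed(4)})`,
      entry.sources?.map((s: any) => s.node)
    );
  }
});

clsObserver.observe({ type: "layout-shift", buffered: true });
```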
Causes of Poor CLS
Common culprits include images, ads, embeds, and iframes without declared dimensions; content dynamically injected above existing content; and web fonts that cause a flash of invisible or unstyled text.
How to Define CLS Using Chrome DevTools
How to Improve CLS
Once you identify the element(s) at fault, you’ll need to update them to be stable during the page load.
For example, if slow-loading ads are causing the high CLS score, you may want to use placeholder images of the same size to fill that space while the ad loads, preventing the page from shifting.
Some common ways to improve CLS include: setting explicit size attributes (or reserving space with CSS) for images, videos, ads, and embeds; never inserting content above existing content except in response to user interaction; and preferring transform animations over animations of properties that trigger layout changes.
The complexity of performance metrics reflects the challenges facing all sites.
We use performance metrics as a proxy for user experience – that means factoring in some unicorns.
Tools like Google’s Test My Site and What Does My Site Cost? can help you make the conversion and customer-focused arguments for why performance matters.
Hopefully, once your project has traction, these definitions will help you translate Lighthouse’s single performance metric into action tickets for a skilled and collaborative engineering team.
Track your data and shout it from the rooftops.
Just as Google struggles to quantify qualitative experiences, SEO professionals and devs have to work out how to translate a concept into code.
Test, iterate, and share what you learn! I look forward to seeing what you’re capable of, you beautiful unicorn.
All screenshots taken by author, January 2021