Two Ways to Measure Performance
When you test your site’s speed, you get one of two types of data. Lab data comes from running a synthetic test — a tool loads your page in a controlled environment and reports what happened. Field data comes from real visitors using your site in their actual browsers, on their actual devices, over their actual network connections. These two types of data often tell very different stories, and understanding the difference is essential for making good performance decisions.
The critical distinction: field data is what Google uses for search rankings. Your PageSpeed Insights score is a lab test. Your Core Web Vitals assessment in Search Console is field data. They measure different things in different ways, and they frequently disagree.
What Lab Data Is
Lab data comes from tools like Lighthouse (which powers PageSpeed Insights), WebPageTest, and Chrome DevTools. These tools load your page on a simulated device with a fixed network speed and measure what happens. The result is a snapshot: one device, one connection speed, one location, one moment in time.
Lab data is useful for debugging. It gives you a detailed timeline of how your page loaded, identifies specific bottlenecks, and produces actionable diagnostics. It is consistent and reproducible — you can run the same test before and after making a change to see if it helped.
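To illustrate that reproducibility, here is a minimal sketch of a scripted lab run using the Lighthouse Node module with chrome-launcher. The target URL is a placeholder, and the call shapes follow Lighthouse's documented programmatic usage:

```ts
// Sketch: a reproducible lab run with the Lighthouse Node module.
// Assumes `lighthouse` and `chrome-launcher` are installed; the URL is a placeholder.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function labRun(url: string) {
  // Launch headless Chrome so the environment is the same on every run.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'], // skip the non-performance audits
    output: 'json',
  });
  await chrome.kill();
  // Lighthouse reports the score as 0..1; scale to the familiar 0..100.
  console.log(`Performance score: ${(result?.lhr.categories.performance.score ?? 0) * 100}`);
}

labRun('https://example.com');
```

Running this before and after a change gives you the consistent before-and-after comparison described above.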
The limitation is that lab data reflects a single artificial scenario. It does not account for the diversity of your real visitors: their devices, their network speeds, their geographic locations, or how they actually interact with the page.
What Field Data Is
Field data comes from real users. Google collects performance data from Chrome browsers (with user opt-in) and aggregates it into the Chrome User Experience Report (CrUX). This dataset covers millions of websites and reflects what visitors actually experience.
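CrUX is queryable directly. Here is a sketch of a request against the CrUX API for an origin's aggregated field data; it assumes you have an API key, and the origin is a placeholder:

```ts
// Sketch: query the CrUX API for an origin's field data.
// Assumes a valid API key; request and response shapes per the CrUX API docs.
const API_KEY = 'YOUR_API_KEY'; // placeholder

async function queryCrux(origin: string) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin }), // or { url } for page-level data
    }
  );
  const data = await res.json();
  // The p75 values are what the Core Web Vitals assessment is based on.
  console.log(data.record.metrics.largest_contentful_paint.percentiles.p75);
}

queryCrux('https://example.com');
```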
Field data captures things lab tests cannot: the visitor in rural Australia on a 3G connection, the person using a five-year-old Android phone, the user who scrolls immediately and triggers layout shifts that a lab test’s static page load would miss. It also captures INP, which requires real user interactions that synthetic tests cannot reproduce.
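This is why real-user monitoring relies on in-page instrumentation rather than synthetic loads. A minimal sketch using Google's web-vitals library, with a placeholder analytics endpoint standing in for your own collection backend:

```ts
// Sketch: collect field metrics from real visitors with the web-vitals library.
// The /analytics endpoint is a placeholder for your own backend.
import { onLCP, onCLS, onINP } from 'web-vitals';

function sendToAnalytics(metric: { name: string; value: number }) {
  // sendBeacon survives page unload, so late-reported metrics like INP still arrive.
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics); // only reportable from real user interactions
```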
Google evaluates your Core Web Vitals at the 75th percentile of field data. This means at least 75% of your visitors need to have a good experience for Google to consider a metric passing. One slow visitor does not sink you, but a consistent pattern of poor experiences will.
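To make the percentile concrete, here is a small sketch of how a p75 is computed over per-visit samples. The LCP values are invented for illustration:

```ts
// Sketch: the 75th percentile of a set of LCP samples (milliseconds, invented values).
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Index of the value at or below which 75% of samples fall.
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// The three slowest visits sit beyond p75, so they do not sink the assessment.
const lcpSamples = [1200, 1400, 1500, 1600, 1800, 1900, 2100, 2300, 2400, 3800, 4200, 5100];
console.log(p75(lcpSamples)); // 2400 ms, "good" (<= 2500 ms) despite the slow tail
```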
Why They Disagree
A site can score 95 on PageSpeed Insights (lab data) and still fail Core Web Vitals (field data). This happens more often than you might expect, for several reasons:
Different device profiles. Lighthouse simulates a mid-range phone on a moderate connection (the configuration sketch after this list shows what that fixed profile looks like). If most of your visitors use slower devices or weaker connections, field data will be worse than the lab prediction.
Geographic distribution. Lab tests typically run from a single server location. If your visitors are spread across multiple continents but your server is in one region, many visitors experience higher latency than the lab test suggests.
Real interaction patterns. Lab tests load a page and stop. Real visitors scroll, click, navigate between pages, and interact in ways that trigger JavaScript execution and layout shifts that lab tests never encounter.
Third-party variability. Ad networks, chat widgets, and analytics scripts behave differently under real-world conditions than in controlled lab environments. They may load more resources, take longer to initialize, or inject content that causes layout shift.
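The fixed profile behind the first point is visible in Lighthouse's own configuration. Here is a sketch of a mobile throttling setup; the values approximate Lighthouse's documented mobile defaults rather than quoting them authoritatively:

```ts
// Sketch: the kind of fixed profile a Lighthouse lab run uses, expressed as a
// Lighthouse config. Values approximate the documented mobile defaults; if your
// audience's devices or networks are slower than this, field data will diverge.
const labProfile = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'mobile' as const,
    throttling: {
      rttMs: 150,               // simulated round-trip time
      throughputKbps: 1638.4,   // ~1.6 Mbps down, a "slow 4G" link
      cpuSlowdownMultiplier: 4, // emulates a mid-range phone's CPU
    },
  },
};

export default labProfile;
```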
Which One to Trust
For SEO and understanding your visitors’ real experience, trust field data. It is what Google uses, and it reflects actual conditions.
For diagnosing specific problems and verifying that changes improved things, use lab data. It gives you the detailed breakdowns and before-and-after comparisons you need for debugging.
The ideal workflow uses both: field data tells you whether you have a problem, and lab data helps you figure out what the problem is. One common mistake is optimizing purely for a lab score without checking whether field data confirms the issue — or the improvement.
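One convenient way to check both at once is the PageSpeed Insights API, which returns the Lighthouse lab result and the CrUX field assessment for the same URL in a single response. A sketch, with a placeholder URL and the response shape per the PSI v5 documentation:

```ts
// Sketch: fetch lab and field data for the same URL from the PageSpeed Insights API.
// The target URL is a placeholder; an API key is optional for light use.
async function compareLabAndField(url: string) {
  const endpoint = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
  endpoint.searchParams.set('url', url);

  const data = await (await fetch(endpoint)).json();

  // Lab: the Lighthouse performance score for this one synthetic run.
  console.log('Lab score:', data.lighthouseResult.categories.performance.score * 100);
  // Field: the CrUX p75 assessment category ("FAST", "AVERAGE", or "SLOW").
  console.log('Field LCP:', data.loadingExperience.metrics.LARGEST_CONTENTFUL_PAINT_MS.category);
}

compareLabAndField('https://example.com');
```

If the lab score is high but the field category is poor, that is the divergence described above, and the field number is the one that counts for rankings.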
Further Reading
- Why Lab and Field Data Can Be Different (web.dev) — Google’s explanation of the causes and implications of lab/field divergence.
- Chrome User Experience Report (Chrome for Developers) — Documentation for the CrUX dataset that provides field data for Core Web Vitals.
