Core Web Vitals in 2025: what Google actually measures
A technical and practical explanation of LCP, INP and CLS. What thresholds Google uses, how they're measured in the field vs lab, and techniques that actually work.
Core Web Vitals are the set of metrics Google uses to evaluate the technical experience of a website. In March 2024, INP (Interaction to Next Paint) replaced FID (First Input Delay) as the third official metric. If your site does not pass all three green thresholds, you are competing in SEO with one hand tied behind your back.
The three metrics that matter
LCP — Largest Contentful Paint
What it measures: how long it takes to paint the largest visible element in the first viewport (usually the hero image or H1).
Green threshold: ≤ 2.5 seconds at the 75th percentile of real users.
Common LCP issues:
- Hero image without `preload` and without `fetchpriority="high"`.
- Heavy JPEG hero without AVIF/WebP variants.
- Blocking web fonts without `font-display: swap`.
- Critical CSS not inlined.
- Hosting on a distant server without CDN.
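Conceptually, the browser tracks paint candidates as the page renders and reports the largest one seen. A minimal model in plain JavaScript (the `pickLCP` helper and the entry shape are illustrative, not a real API; in a page you would read the value via a `PerformanceObserver` observing `largest-contentful-paint` entries):

```javascript
// Illustrative model of LCP candidate selection (not a browser API).
// In a real page you would instead do:
//   new PerformanceObserver(cb).observe({ type: 'largest-contentful-paint', buffered: true });
function pickLCP(paintEntries) {
  // paintEntries: [{ element, size, renderTime }] — size in px², renderTime in ms
  let largest = null;
  for (const entry of paintEntries) {
    if (!largest || entry.size > largest.size) largest = entry;
  }
  return largest ? largest.renderTime : null;
}

const lcp = pickLCP([
  { element: 'h1',       size: 12_000,  renderTime: 480 },
  { element: 'img.hero', size: 310_000, renderTime: 1900 },
  { element: 'p',        size: 8_000,   renderTime: 350 },
]);
// The hero image is the largest element, so LCP = 1900 ms,
// inside the 2500 ms green threshold.
console.log(lcp <= 2500 ? 'good' : 'needs improvement');
```

This is why the issues above all target the hero image: whatever paints that largest element late drags the whole metric with it.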
INP — Interaction to Next Paint
What it measures: the latency from a user interaction (click, tap, keypress) until the next frame is painted. The reported INP is effectively the worst interaction of the session; for pages with many interactions, a high percentile is used so a single outlier is discarded.
Green threshold: ≤ 200 ms at the 75th percentile.
Common causes:
- Heavy JavaScript on the main thread (sliders, chat widgets, analytics).
- Event handlers without `requestIdleCallback` or `scheduler.yield`.
- Late hydration in SPA frameworks.
- Interactions that trigger large reflows.
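The definition above can be modeled in a few lines. A sketch, assuming Chrome's simplified outlier rule of ignoring roughly one worst interaction per 50 interactions (`approximateINP` is an illustrative name, not a browser API):

```javascript
// Illustrative model of how INP is derived from interaction latencies (ms).
// For sessions with many interactions, the absolute worst is skipped:
// roughly one outlier is discarded per 50 interactions.
function approximateINP(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => a - b);
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1);
  return sorted[sorted.length - 1 - skip];
}

// Three snappy interactions and one 600 ms click:
console.log(approximateINP([40, 80, 120, 600])); // → 600, far over the 200 ms threshold
```

Note the asymmetry: dozens of fast interactions do not compensate for one slow one, which is why a single heavy click handler can put the whole page in the red.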
CLS — Cumulative Layout Shift
What it measures: how much visible content shifts unexpectedly during the page's lifetime. Each layout shift is scored as the fraction of the viewport affected (impact fraction) multiplied by how far the content moved (distance fraction), and the scores are summed over the worst burst of shifts.
Green threshold: ≤ 0.1.
Common causes:
- Images without `width` and `height` in HTML.
- Web fonts without `size-adjust` that change metrics on load.
- Ads or banners injected late.
- Modals that push content instead of overlaying.
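The per-shift score described above can be sketched as a pure function (a simplification: the real impact fraction counts the union of an element's area before and after the shift; `shiftScore` is an illustrative name, not a browser API):

```javascript
// Illustrative layout-shift score, per the CLS definition:
//   score = impact fraction × distance fraction
function shiftScore(viewport, impactArea, moveDistance) {
  const viewportArea = viewport.width * viewport.height;
  const impactFraction = impactArea / viewportArea;
  // Distance is normalized by the viewport's larger dimension.
  const distanceFraction = moveDistance / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}

// A late banner in a 400×800 viewport pushes content covering half the
// viewport (160,000 px²) down by 80 px:
const score = shiftScore({ width: 400, height: 800 }, 160_000, 80);
console.log(score); // → 0.05, half of the entire 0.1 CLS budget in one shift
```

This is why reserving space up front is so effective: a shift that never happens contributes exactly zero.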
How Google actually measures: field vs lab
This is where a lot of confusion lives. There are two data sources:
Field data (CrUX): real Chrome users send anonymized metrics. This is what Google Search actually uses for ranking. It is aggregated at the 75th percentile over a rolling 28-day window. Sites with too little traffic do not appear in CrUX at all.
Lab data (Lighthouse, PageSpeed Insights): controlled simulation. Useful to iterate, but not what Google uses for ranking.
Takeaway: a site can be 100/100 in Lighthouse and still fail in CrUX because your worst 25% of real users are on 3G with an old phone.
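The 75th-percentile aggregation is easy to reproduce. A sketch (the sample values are invented; `percentile` uses a simple nearest-rank rule, which is close to but not necessarily identical to CrUX's exact method):

```javascript
// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical LCP samples in ms from ten real visits:
// a fast majority, a slow tail of users on old phones and bad networks.
const lcpSamples = [900, 1100, 1200, 1300, 1400, 1500, 1600, 2900, 3100, 3400];
console.log(percentile(lcpSamples, 50)); // → 1400: the median looks healthy
console.log(percentile(lcpSamples, 75)); // → 2900: the reported value fails the 2500 ms threshold
```

This is the mechanism behind the takeaway: the median (and your own fast laptop) can look great while the p75 value that Google actually reads is in the red.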
What actually works
For LCP
- Serve everything behind a CDN with HTTP/3 and Brotli.
- Preload the main font and hero image.
- Convert all images to AVIF with WebP fallback.
- Inline critical CSS (Astro does this).
- Minimize TTFB with SSG.
For INP
- Remove third-party scripts that do not earn their weight.
- If you use a SPA framework, consider an islands model (Astro, Qwik).
- Split any task over 50 ms with `scheduler.yield()` or `setTimeout(0)`.
- Defer non-critical work with `requestIdleCallback`.
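The task-splitting advice above can be sketched like this (a minimal pattern, not a library: `yieldToMain` and `processInChunks` are illustrative names, `processItem` stands in for your real per-item work, and `scheduler.yield()` is feature-detected because it only ships in recent Chromium):

```javascript
// Yield the main thread so pending input and paints can run.
// Falls back to setTimeout(0) where scheduler.yield() is unavailable.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list without ever blocking the main thread
// for more than roughly chunkMs milliseconds at a time.
async function processInChunks(items, processItem, chunkMs = 50) {
  let deadline = Date.now() + chunkMs;
  for (const item of items) {
    processItem(item);
    if (Date.now() > deadline) {
      await yieldToMain(); // input handlers get a chance to run here
      deadline = Date.now() + chunkMs;
    }
  }
}
```

Usage is a drop-in replacement for a plain `for` loop: `await processInChunks(rows, renderRow)`. The work still completes, but no single main-thread task exceeds the 50 ms budget that feeds INP.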
For CLS
- Always set `width` and `height` on images.
- Use CSS `aspect-ratio` for dynamic containers.
- Reserve space for ads.
- Declare `font-display: swap` plus `size-adjust` for fonts.
How to monitor
The most common mistake: relying only on occasional audits.
Recommended free tools:
- Search Console → Experience — official Google field data.
- PageSpeed Insights — combines lab and field for a single URL.
- Chrome DevTools → Performance Insights — debug in your own session.
- web-vitals library — if you want to send real metrics to your own system (GDPR-compliant).
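Wiring the web-vitals library to your own endpoint can look like this (a sketch: the `/vitals` endpoint and the queue helpers are assumptions of this example, while `onLCP`/`onINP`/`onCLS` and the `{ name, value, id }` metric shape come from the web-vitals API):

```javascript
// Buffer metrics as they arrive, then ship them in one batch.
const queue = [];

function enqueueMetric(metric) {
  // Keep only what you need; no user identifiers, so GDPR stays simple.
  queue.push({ name: metric.name, value: metric.value, id: metric.id });
}

function flushQueue(endpoint = '/vitals') {
  if (queue.length === 0) return null;
  const body = JSON.stringify(queue.splice(0, queue.length));
  // In the browser, prefer sendBeacon so the request survives page unload.
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon(endpoint, body);
  }
  return body;
}

// In a real page you would wire it up like this:
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   onLCP(enqueueMetric); onINP(enqueueMetric); onCLS(enqueueMetric);
//   addEventListener('visibilitychange', () => {
//     if (document.visibilityState === 'hidden') flushQueue();
//   });
```

Flushing on `visibilitychange` matters because INP and CLS keep updating for the whole session; reporting only on load would miss most of what CrUX sees.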
Closing
Core Web Vitals are not a one-time exam, they are a continuous monitoring system. A site that passes thresholds today, unmonitored, can be in the red in six months without anyone noticing. If you want a complete technical audit with a concrete roadmap, get in touch: we deliver an actionable report with priorities and estimated impact.