Barry Pollard from Google did an extended explanation on Bluesky about why Google Search Console says an LCP is poor even though the individual URLs look fine. I don't want to mess it up, so I'll copy what Barry wrote.
Here's what he wrote across a number of posts on Bluesky:
Core Web Vitals mystery for ya:
Why does Google Search Console say my LCP is poor, but every example URL has good LCP?
I see developers asking: How can this happen? Is GSC wrong? (I'm willing to bet it isn't!) What can you do about it?
This is admittedly confusing, so let's dive in…
First it's important to understand how this report and CrUX measure Core Web Vitals, because once you do, it's more understandable, though it still leaves the question of what you can do about this (we'll get to that).
The issue is similar to this earlier thread:
“How is it possible for CrUX to say 90% of page loads are good, and Google Search Console to say only 50% of URLs are good. Which is right?”
It's a question I get about Core Web Vitals and I admit it's confusing, but the truth is both are correct because they're different measures…
1/5 🧵
— Barry Pollard (@tunetheweb.com) August 19, 2025 at 6:32 AM
CrUX measures page loads, and the Core Web Vitals number is the 75th percentile of those page loads.
That's a fancy way of saying: “the score that most of the page views get at least” — where “most” is 75%.
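To make that concrete, here's a tiny sketch of a nearest-rank 75th percentile over some made-up LCP samples (in milliseconds). CrUX's exact aggregation may differ, so this is purely illustrative:

```ts
// Nearest-rank p75 over per-page-load LCP samples (values are invented).
const p75 = (samples: number[]): number => {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
};

// 75% of these page loads have an LCP of 2600 ms or better (i.e. lower).
console.log(p75([1200, 1400, 1500, 1800, 2100, 2600, 3900, 4200])); // 2600
```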
Philip Walton covers it more in this video:
For an ecommerce site with LOTS of products, you may have some very popular products (with lots of page views!), and then a long, long tail of many, many less popular pages (with a small number of page views each).
The issue arises when the long tail adds up to over 25% of your total page views.
The popular pages are more likely to have page-level CrUX data (we only have data once we pass a privacy threshold), so they are the ones more likely to be shown as examples in GSC because of that.
They also are the ones you probably SHOULD be concentrating on—they're the ones that get the traffic!
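Here's a toy, entirely invented scenario showing how that 25% threshold bites: every popular URL is comfortably “good”, but once the slower long tail crosses a quarter of total views, the origin-level 75th percentile lands on it:

```ts
// 700 views of cached popular pages (~1.4 s LCP) and 300 views of
// uncached long-tail pages (~4.2 s LCP). All numbers are made up.
const views: number[] = [
  ...Array.from({ length: 700 }, () => 1400),
  ...Array.from({ length: 300 }, () => 4200),
];

const sorted = [...views].sort((a, b) => a - b);
const p75 = sorted[Math.ceil(0.75 * sorted.length) - 1];

// The long tail is 30% of views, so the 75th percentile falls inside it.
console.log(`Origin-level p75 LCP: ${p75} ms`); // 4200 ms → well past "good"
```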
But popular pages have another interesting bias: they're often faster!
Why? Because they're often cached. In DB caches, in Varnish caches, and especially at CDN edge nodes.
Long-tail pages are MUCH more likely to require a full page load, skipping all those caches, and so will likely be slower.
That's true even when the pages are built on the same technology and are optimised the exact same way with all the same coding techniques and optimised images…etc.
Caches are great! But they can mask slowness that's only seen for “cache misses”.
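One rough way to see that bias on your own site is to compare cache-status response headers for a popular page versus a rarely visited one. The header names below are vendor-specific assumptions (Cloudflare, CloudFront/generic, Vercel), and the URLs are placeholders:

```ts
// Node 18+ (built-in fetch). Print whichever cache-status headers the
// CDN in front of a URL happens to expose.
const probe = async (url: string): Promise<void> => {
  const res = await fetch(url);
  for (const h of ["cf-cache-status", "x-cache", "x-vercel-cache", "age"]) {
    const v = res.headers.get(h);
    if (v !== null) console.log(`${url} → ${h}: ${v}`);
  }
};

await probe("https://example.com/popular-product");     // often a HIT
await probe("https://example.com/rarely-visited-page"); // often a MISS
```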
And that's often why you see this in GSC.
So how to fix it?
There's always going to be a limit to cache sizes, and priming caches for little-visited pages doesn't make much sense, so you need to reduce the load time of uncached pages.
Caching should be a “cherry on top” to boost speed, rather than the only reason you have a fast site.
One technique I like to check this is to add a random URL param to a URL (e.g. ?test=1234) and then rerun a Lighthouse test on it, changing the value each time. Usually this results in getting an uncached page back.
Compare that to a cached page run (by running the normal URL a couple of times).
If it's much slower then you now understand the difference between your cached and uncached pages and can start thinking of ways to improve that.
Ideally you get it under 2.5 seconds even without the cache, and your (cached) popular pages are simply even faster still!
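Here's a minimal sketch of that cache-busting check with the Lighthouse Node API. The URL and param value are placeholders, and the option names follow Lighthouse's documented flags, so verify against the version you have installed:

```ts
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

// Placeholder URL; the changing ?test= value should bust URL-keyed caches,
// mirroring the "?test=1234, change it each run" technique above.
const url = `https://example.com/some-product?test=${Date.now()}`;

const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
const result = await lighthouse(url, {
  port: chrome.port,
  output: "json",
  onlyCategories: ["performance"],
});

// Lab LCP (in ms) for this likely-uncached load; aim for under 2500 ms
// even before any caches help out.
const lcp = result?.lhr.audits["largest-contentful-paint"]?.numericValue;
console.log(`Uncached LCP: ${lcp} ms`);

await chrome.kill();
```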
Incidentally, this can also be why ad campaigns (with random UTM params and the like) can be slower.
You can configure CDNs to ignore these and not treat them as new pages. There's also an upcoming standard to allow a page to specify which params don't matter:
This is cool, and I've been waiting for No-Vary-Search to escape Speculation Rules (where this was initially started) to the more general use case.
This allows you to say that certain client-side URL params (e.g. gclid or other analytics params) can be ignored and the resource can still be used from the cache.
— Barry Pollard (@tunetheweb.com) September 26, 2025 at 11:57 AM
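If you're curious what that would look like in practice, here's a rough sketch of a server sending the draft No-Vary-Search header alongside its cache headers. The syntax follows the draft spec and support is still limited, so treat it as illustrative rather than something to copy verbatim:

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  // Cache the page, and tell caches that these analytics params don't
  // create distinct cache entries (draft No-Vary-Search syntax).
  res.setHeader("Cache-Control", "public, max-age=300");
  res.setHeader(
    "No-Vary-Search",
    'params=("gclid" "utm_source" "utm_medium" "utm_campaign")'
  );
  res.setHeader("Content-Type", "text/html");
  res.end("<!doctype html><title>Product page</title>");
}).listen(8080);
```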
Forum discussion at Bluesky.