Description of the bug
I was doing a performance audit of a page. Field data was available, and Gemini CLI (running Gemini 3.1) was working through the performance insights. So far, so good. Here is the relevant output from the performance insight that Gemini then uses to drive its further debugging:
Metrics (lab / observed):
- LCP: 482 ms, event: (eventKey: r-47321, ts: 508929901115), nodeId: 671
- LCP breakdown:
  - TTFB: 30 ms, bounds: {min: 508929418828, max: 508929448856}
  - Load delay: 228 ms, bounds: {min: 508929448856, max: 508929676926}
  - Load duration: 2 ms, bounds: {min: 508929676926, max: 508929679270}
  - Render delay: 222 ms, bounds: {min: 508929679270, max: 508929901115}
- CLS: 0.17, event: (eventKey: s-89564, ts: 508931182251)
Metrics (field / real users):
- LCP: 1404 ms (scope: url)
- LCP breakdown:
  - TTFB: 209 ms (scope: url)
  - Load delay: 752 ms (scope: url)
  - Load duration: 40 ms (scope: url)
  - Render delay: 345 ms (scope: url)
- INP: 65 ms (scope: url)
- CLS: 0.02 (scope: url)
- The above data is from CrUX (Chrome User Experience Report). It's how the page performs for real users.
- The values shown above are the p75 measure of all real Chrome users.
- The scope indicates if the data came from the entire origin or a specific URL.
- Lab metrics describe how this specific page load performed, while field metrics are an aggregation of results from real-world users. Best practice is to prioritize metrics that are bad in field data. Lab metrics may be better or worse than field metrics depending on the developer's machine, network, or the actions performed while tracing.
Notice the difference between the lab and field metrics: both are considered "Good", and they are also shown as "Good" in the DevTools UI. The AI agent, however, categorized the field metrics as "Needs improvement".
===> I think it would be good to add an evaluation to the field metrics, so that an agent can focus on the most important things. Let's make sure the DevTools UI and the DevTools MCP agree on the evaluation.
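For illustration, here is a minimal sketch (not the actual chrome-devtools-mcp implementation) of what such an evaluation could look like, assuming the published Core Web Vitals thresholds (LCP: good ≤ 2.5 s, poor > 4 s; INP: good ≤ 200 ms, poor > 500 ms; CLS: good ≤ 0.1, poor > 0.25):

```typescript
// Hypothetical sketch, not the actual chrome-devtools-mcp code: rate field
// metrics against the published Core Web Vitals thresholds so that the MCP
// output and the DevTools UI agree on the classification.
type Rating = 'good' | 'needs improvement' | 'poor';

// Published Core Web Vitals thresholds (upper bound of "good" and of
// "needs improvement").
const THRESHOLDS = {
  lcp: {good: 2500, needsImprovement: 4000}, // ms
  inp: {good: 200, needsImprovement: 500},   // ms
  cls: {good: 0.1, needsImprovement: 0.25},  // unitless
} as const;

function rateFieldMetric(
  metric: keyof typeof THRESHOLDS,
  value: number,
): Rating {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.needsImprovement) return 'needs improvement';
  return 'poor';
}

// Applied to the field (CrUX p75) values from this report:
console.log(rateFieldMetric('lcp', 1404)); // "good"
console.log(rateFieldMetric('inp', 65));   // "good"
console.log(rateFieldMetric('cls', 0.02)); // "good"
```

With an explicit rating like this attached to each field metric in the insight output, the agent would not need to apply its own (possibly divergent) thresholds.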
Reproduction
- Gemini CLI running Gemini 3.1 in auto mode
- Prompt
analyze the lcp of heise.de. make sure to clear the cache before doing it.
Expectation
No response
MCP configuration
No response
Chrome DevTools MCP version
0.18
Chrome version
No response
Coding agent version
No response
Model version
No response
Chat log
No response
Node version
No response
Operating system
None
Extra checklist