Probeo

Performance Measurement: Lab Data, Field Data, and Structural Risk

How Probeo evaluates website performance through structural analysis, lab-based measurement, and field data context. Covers Core Web Vitals, loading metrics, and resource optimization.

Last updated 02/08/2026

Performance measurement splits into two fundamentally different data sources: lab data collected in controlled environments, and field data collected from real users. Each answers different questions. Probeo uses structural analysis to identify patterns that reliably predict poor performance outcomes, regardless of which data source confirms them.

What Probeo measures and why

Probeo evaluates performance as structural risk. Rather than running a single synthetic test and reporting a score, it examines the page structure, resource loading patterns, and rendering dependencies that determine how a page will behave under real conditions. The distinction matters because a page can score well in a lab environment and still fail for a meaningful percentage of real visitors.

Lab data versus field data

Lab data comes from synthetic tests run in controlled conditions: fixed device profiles, consistent network speed, no competing processes. It is reproducible and useful for debugging specific issues. It does not reflect what real users experience. Field data comes from real browsers on real devices over real networks. It captures the variance that lab data deliberately eliminates, including slow connections, underpowered hardware, and competing applications. Field data is noisy, inconsistent, and representative. Neither source is sufficient alone. Lab data without field context overestimates performance. Field data without structural analysis obscures root causes.
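The gap between the two sources can be sketched numerically. The following Python sketch (all numbers invented for illustration) compares one synthetic lab run against the 75th percentile of a spread of field samples, the percentile Google's Core Web Vitals assessment uses:

```python
# Illustrative sketch, not Probeo's implementation: why a single lab
# measurement and a field-data percentile can disagree. All numbers
# below are invented for the example.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

# One synthetic run on a fast, idle machine (milliseconds to LCP).
lab_lcp_ms = 1800

# Field samples span slow phones, congested networks, busy devices.
field_lcp_ms = [1200, 1500, 1700, 2100, 2600, 3400, 4200, 5800]

p75 = percentile(field_lcp_ms, 75)
print(f"Lab LCP: {lab_lcp_ms} ms")        # 1800 ms, comfortably "good"
print(f"Field p75 LCP: {p75} ms")         # 3400 ms, over the 2500 ms threshold
```

The lab run passes while a quarter of real visits wait 3.4 seconds or longer: the same page, two different answers, because the lab eliminated exactly the variance that field data captures.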

What the metrics cover

Performance metrics divide into three areas. Core Web Vitals (CLS, INP, LCP) are the metrics Google uses for page experience ranking signals. They measure layout stability, interaction responsiveness, and largest content render time. Loading metrics (TTFB, FCP) track the sequential steps from server response to first visible content. They diagnose where in the delivery chain delays originate. Resource optimization covers total page weight, image format efficiency, compression ratios, and lazy loading coverage. These are the structural factors that determine whether metrics stay within thresholds as traffic scales across device types and network conditions.
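Google publishes fixed thresholds for the three Core Web Vitals, with a "needs improvement" band between good and poor. A minimal classifier over those published thresholds (the sample values are invented):

```python
# Core Web Vitals classification using Google's published thresholds.
# Each metric has a "good" ceiling and a "poor" floor; values between
# them are "needs improvement".

THRESHOLDS = {
    "LCP": (2500, 4000),   # milliseconds
    "INP": (200, 500),     # milliseconds
    "CLS": (0.1, 0.25),    # unitless layout-shift score
}

def classify(metric, value):
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(classify("LCP", 2100))   # good
print(classify("INP", 350))    # needs improvement
print(classify("CLS", 0.31))   # poor
```

Note that the assessment is made at the 75th percentile of field data, so a page must stay within the "good" ceiling for three quarters of visits, not merely on average.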

Why structural risk matters more than point-in-time scores

A Lighthouse score is a single measurement from a single device profile at a single moment. It tells you how one simulated visit performed. Structural risk assessment looks at the patterns that cause scores to degrade: uncompressed images that load on every page, render-blocking resources in the critical path, layout shifts caused by dynamically injected content. These patterns persist across visits and compound across pages. A site can pass Lighthouse today and still drift out of CrUX thresholds over the following weeks if traffic shifts toward mobile or a CDN edge node changes behavior, because CrUX aggregates real visits rather than replaying one ideal one.
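The idea of flagging persistent patterns rather than scoring one visit can be sketched as a rule check over a page model. Everything here is hypothetical, including the pattern names and the page structure; this is not Probeo's actual rule set:

```python
# Hypothetical sketch: flagging structural patterns that persist across
# visits. The page model and thresholds are invented for illustration;
# they are not Probeo's actual checks.

page = {
    "images": [
        {"url": "/hero.png", "bytes": 2_400_000, "compressed": False},
        {"url": "/logo.svg", "bytes": 12_000, "compressed": True},
    ],
    "render_blocking_scripts": ["/vendor/analytics.js"],
    "injected_content_without_reserved_space": True,
}

def structural_risks(page):
    risks = []
    # Large uncompressed images cost every visitor on every visit.
    for img in page["images"]:
        if not img["compressed"] and img["bytes"] > 500_000:
            risks.append(f"uncompressed large image: {img['url']}")
    # Render-blocking resources delay first paint regardless of device.
    for script in page["render_blocking_scripts"]:
        risks.append(f"render-blocking resource in critical path: {script}")
    # Injected content without reserved space shifts layout after render.
    if page["injected_content_without_reserved_space"]:
        risks.append("layout shift risk: dynamically injected content")
    return risks

for risk in structural_risks(page):
    print(risk)
```

The point of the sketch is the shape of the analysis: each finding is tied to a structural cause that will recur on every visit, rather than to a score that varies with the device profile of one test run.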

What becomes visible

  • Which pages carry structural performance risk before field data confirms the failure
  • Where loading bottlenecks originate in the delivery chain
  • Which resource optimization opportunities exist across page weight, image format, and lazy loading
  • How Core Web Vitals risk levels map to specific structural causes on each page

Common questions about performance measurement

Does Probeo replace Lighthouse or PageSpeed Insights?
No. Probeo identifies structural patterns that predict poor performance. Lighthouse measures a single simulated visit. They answer different questions. Probeo surfaces what to investigate; Lighthouse can confirm specific values.
Why not just use field data from CrUX?
CrUX data is aggregated over 28 days and requires sufficient traffic volume. Pages with low traffic have no CrUX data at all. Structural analysis works regardless of traffic volume and surfaces risk before field data accumulates.
Can a page pass Core Web Vitals and still have performance problems?
Yes. Core Web Vitals measure three specific dimensions. A page can have acceptable CLS, INP, and LCP while still loading 8 MB of uncompressed images or blocking rendering with synchronous scripts. The metrics pass but the experience degrades under real conditions.
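The 8 MB scenario above is easy to make concrete. In this sketch (resource names and sizes invented), lazy loading keeps LCP within threshold, yet every visitor still pays for the full transfer:

```python
# Sketch: a page can stay inside Core Web Vitals thresholds while
# shipping far more bytes than necessary. Resource sizes are invented.

resources_bytes = {
    "hero.jpg": 3_200_000,      # could be a few hundred KB as AVIF/WebP
    "gallery-1.jpg": 2_100_000, # below the fold, lazy-loaded
    "gallery-2.jpg": 1_900_000, # below the fold, lazy-loaded
    "app.js": 600_000,
    "styles.css": 120_000,
}

total_mb = sum(resources_bytes.values()) / 1_000_000
print(f"Total page weight: {total_mb:.1f} MB")
# Lazy loading the gallery keeps LCP acceptable, so the vitals can
# pass -- but nearly 8 MB still crosses the wire for every visitor.
```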
Why does Probeo frame performance as visibility rather than optimization?
Optimization implies knowing which changes to make. Visibility means understanding current behavior accurately. Most performance work fails not because teams lack optimization techniques but because they lack clear information about what is actually happening across their pages.
How does mobile performance differ from desktop?
Mobile devices have slower CPUs, less memory, and often slower network connections. The same page structure produces different performance outcomes on different device classes. Structural risk assessment accounts for this variance; a single desktop Lighthouse score does not.
Does fixing all performance issues guarantee good Core Web Vitals?
No. Core Web Vitals are field metrics influenced by user behavior, device distribution, and network conditions. Fixing structural issues reduces risk. It does not eliminate variance from factors outside the page itself.