
Every Alarming Metric Was a Ghost

A CLI-driven performance audit of an animated homepage kept triggering alarms — a 112KB synchronous script, 76% unused CSS, a canvas that appeared blank. Each turned out to be a misread. The real lesson: performance tools measure what's present, not what's executing.

2026-01-23 // RAW LEARNING CAPTURE
PROJECT: BRUBKR

I expected to find problems. The homepage runs a full-viewport canvas animation — simplex noise flow fields, hundreds of particles with ripple effects, grid lines redrawn every frame. That's the kind of thing that should show up in a performance trace. I'd just wired Chrome DevTools into Claude Code via MCP, giving me performance tracing, CPU emulation, network inspection, and script evaluation from the terminal. So I pointed it at the homepage and started measuring.
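
For anyone wanting to reproduce the setup: this is roughly the registration command, assuming Google's chrome-devtools-mcp package and the Claude Code MCP CLI (verify the exact invocation against both projects' docs):

```shell
# Register the Chrome DevTools MCP server with Claude Code.
# chrome-devtools-mcp exposes performance tracing, CPU/network
# emulation, and script evaluation as MCP tools.
claude mcp add chrome-devtools -- npx -y chrome-devtools-mcp@latest
```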

Nothing was wrong. But more interesting: everything that looked wrong wasn't.

The Animation That Doesn't Cost Anything

The BlueprintV2 component is 560 lines of canvas code — seeded PRNG, simplex noise, particle lifecycle management, double ripple rings with alpha blending. It runs in a requestAnimationFrame loop. On paper, this should be the performance bottleneck.

I injected a 300-frame FPS counter via evaluate_script and measured under CPU throttling:
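
The original injected snippet isn't shown; here is a minimal sketch of the measurement logic, written so the stats collector is pure (fed timestamps) and the requestAnimationFrame driver is left as a comment:

```javascript
// Hypothetical reconstruction of a 300-frame FPS counter suitable for
// injection via evaluate_script. The collector is pure so it can be
// exercised with synthetic timestamps.
class FrameStats {
  constructor(target = 300, budgetMs = (1000 / 60) * 2) {
    this.target = target;     // number of frame deltas to sample
    this.budgetMs = budgetMs; // slower than ~2 vsyncs counts as dropped
    this.times = [];
    this.last = null;
  }
  tick(now) {
    if (this.last !== null) this.times.push(now - this.last);
    this.last = now;
    return this.times.length < this.target; // keep sampling?
  }
  report() {
    const total = this.times.reduce((a, b) => a + b, 0);
    return {
      avgFps: 1000 / (total / this.times.length),
      dropped: this.times.filter((dt) => dt > this.budgetMs).length,
      maxFrameMs: Math.max(...this.times),
    };
  }
}

// In the page, this would be driven by requestAnimationFrame:
//   const stats = new FrameStats();
//   const loop = (t) => { if (stats.tick(t)) requestAnimationFrame(loop);
//                         else console.log(stats.report()); };
//   requestAnimationFrame(loop);
```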

| Condition | Avg FPS | Dropped Frames | Max Frame Time |
| --- | --- | --- | --- |
| No throttle | 60.1 | 0 | 17.7ms |
| 4x CPU slowdown | 58.1 | 3 (1%) | 100ms |
| 6x CPU slowdown | 59.6 | 1 (0.3%) | 49.9ms |

The 6x result is better than 4x. That's the tell. If CPU were the bottleneck, degradation would scale linearly with throttling. Instead, the 100ms spike at 4x was almost certainly a GC pause — unrelated to the animation loop itself. The animation is CPU-irrelevant.

Two design decisions explain why. First, FRAME_INTERVAL = 50 — the loop self-throttles to 20fps, skipping frames when requestAnimationFrame fires faster. The browser's compositor still paints at 60fps, but the canvas only redraws every 50ms. Second, the work per frame is pure arithmetic: noise lookups, coordinate updates, fillRect and arc calls. No layout thrashing, no DOM reads, no style recalculation. The main thread barely notices.
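
The self-throttling pattern can be sketched like this — an illustrative reconstruction, where only the FRAME_INTERVAL = 50 constant comes from the actual component:

```javascript
// The canvas redraws at most every FRAME_INTERVAL ms (20fps), even
// though requestAnimationFrame keeps firing at the display rate.
const FRAME_INTERVAL = 50;

// Pure decision: redraw only when the interval has elapsed.
function shouldDraw(now, lastDraw, interval = FRAME_INTERVAL) {
  return now - lastDraw >= interval;
}

// Most rAF callbacks return early without touching the canvas:
//   let lastDraw = 0;
//   function loop(now) {
//     if (shouldDraw(now, lastDraw)) {
//       lastDraw = now;
//       drawFrame(); // noise lookups, fillRect/arc calls -- pure arithmetic
//     }
//     requestAnimationFrame(loop);
//   }
//   requestAnimationFrame(loop);
```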

159KB for Everything

The network story was equally surprising. Filtering requests to scripts and stylesheets, only 10 resources loaded for the entire homepage. Brotli-compressed transfer sizes from the response headers:

| Chunk | Transfer Size | Content |
| --- | --- | --- |
| React + DOM + Scheduler | 66.8 KB | Framework runtime |
| Next.js App Router | 23.2 KB | Navigation/routing |
| React Flight client | 22.2 KB | RSC streaming |
| CSS (fonts + styles) | 13.5 KB | All styles |
| Page code | 5.4 KB | Everything specific to this page |
| Framework utilities | 27.9 KB | Router, internals, chunk loader |
| Total | ~159 KB | |

The page-specific code — the animated canvas, the icons, the layout, the header — compresses to 5.4 KB. The 560-line BlueprintV2 component with its noise implementation, particle system, and SVG schematic is in there. Brotli does well with repetitive mathematical code, but the real savings come from the component's structure: no dependencies, no imports beyond React, and patterns (the particle loop, the grid loop) that compress beautifully because they repeat similar operations.

The Mystery Script

Then I found something that looked genuinely alarming. A script tag in the DOM that hadn't appeared in my network capture: a6dad97d9634a72d.js. It was the only synchronous script on the page — every other script had async=true. 112KB uncompressed. Content analysis showed core-js polyfills: Symbol, WeakMap, Reflect, Iterator, Promise.

First theory: a transitive dependency from Three.js or D3 pulling in core-js. But pnpm why core-js found nothing in the dependency tree. The polyfills weren't coming from my code.

The answer was in .next/build-manifest.json, under the "polyfillFiles" key. Next.js ships its own polyfill-nomodule.js on every page. It's a legacy browser fallback — 112,594 bytes of ES2015+ polyfills that exist solely for IE11-era engines.

The attribute I'd initially skimmed past:

nomodule: true

Browsers that understand <script type="module"> — which is every browser capable of running a Next.js 16 application — never download this file. The nomodule attribute tells modern browsers to skip it entirely. It's a 112KB script that zero real users will ever execute.
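
The loading rule reduces to a tiny predicate — a simplification of the HTML spec's module/nomodule behavior, with defer, error handling, and other attributes omitted:

```javascript
// Simplified model of whether a browser fetches and executes a script.
function willExecute(script, supportsModules) {
  if (script.type === "module") return supportsModules; // legacy skips modules
  if (script.nomodule) return !supportsModules;         // modern skips nomodule
  return true;                                          // classic script: everyone
}

// The Next.js polyfill bundle is a classic script with nomodule set.
// Any browser that can run a modern Next.js app supports modules, so
// for real users this is false -- the 112KB never executes:
const polyfill = { type: "text/javascript", nomodule: true };
console.log(willExecute(polyfill, true)); // modern browser -> false
```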

Phantom Metrics Everywhere

The pattern repeated. Each measurement that looked concerning dissolved under closer inspection:

CSS coverage showed 24% usage. Of 63 CSS rules, 42 were @font-face declarations for unicode-range subsetting (the browser only fetches the subsets it needs). Of the remaining 21 rules, 16 served other routes — .prose-dark for blog posts, scrollbar utilities, noise textures. The stylesheet is shared across routes with immutable cache headers. Measuring single-page coverage against a shared stylesheet is measuring the wrong thing.

A third-party insight flagged vercel.live at 74.3 KB. DOM inspection showed zero Vercel scripts on the public page. The toolbar only activates in preview deployments for authenticated users. The trace was picking up the site owner's session, not what visitors see.

Canvas pixel sampling returned all zeros. I'd checked getImageData(0, 0, 10, 10) — the top-left corner. The flow field animation renders subtle particles against a near-black background (#0a0a0a), masked by a radial gradient that fades to full transparency at the edges. Sampling the center found 8% non-black pixels. The animation was running fine; I was looking in the wrong place.
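
A sampling helper makes the failure mode concrete. This operates on the RGBA layout getImageData returns; the threshold default is chosen so the near-black #0a0a0a background (channel value 10) doesn't count, matching the audit's ~8% figure (an assumption about how that number was computed):

```javascript
// Count the fraction of pixels brighter than a near-black threshold
// in an RGBA buffer (same layout as ImageData.data).
function nonBlackFraction(data, threshold = 10) {
  let nonBlack = 0;
  const pixels = data.length / 4;
  for (let i = 0; i < data.length; i += 4) {
    // A pixel counts if any color channel exceeds the threshold.
    // Fully transparent edge pixels read 0,0,0,0 and never count --
    // which is exactly why the corner sample returned all zeros.
    if (data[i] > threshold || data[i + 1] > threshold || data[i + 2] > threshold)
      nonBlack++;
  }
  return nonBlack / pixels;
}

// In the page this would run against real canvas pixels, e.g.:
//   const { width, height } = canvas;
//   const center = ctx.getImageData(width / 2 - 5, height / 2 - 5, 10, 10);
//   nonBlackFraction(center.data); // ~8% in the center, 0 at the corner
```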

Icons in production didn't match the local codebase. The deployed site used Lucide icons. The local code imported from @phosphor-icons/react. An untracked icons.tsx file meant the Phosphor migration hadn't been committed. The audit caught a deployment gap, not a performance issue.

What Performance Tools Actually Measure


The homepage is healthy — 60fps animation, 159KB total transfer, CLS 0.00, LCP 218ms cold. But the audit's real value wasn't confirming that. It was the repeated lesson that performance tools report what's present in the document, not what's executing for real users. A synchronous script with nomodule has zero cost. A 74KB third-party resource that loads only in authenticated contexts has zero public impact. A stylesheet with 76% unused rules is doing exactly its job when cached immutably across routes.

The gap between "what the tool reports" and "what the user experiences" is where false alarms live. Every alarming number in this audit was a ghost — a metric without the context to interpret it. The tool did its job perfectly. The interpretation was the bottleneck.
