Fixing Navigation and Analytics: When Your Data Lies About User Behavior


I had a classic small-team problem: the site felt great to use, but the data said nobody was reading more than one page per session. The data was wrong. Here's what I found and how I fixed it in an evening.

If you want the broader architectural context for why this site uses fragments and progressive enhancement, start with How This Blog Works and the HAL deep dive.

The real problem: bad data leads to bad decisions

Analytics showed one page view per session, even for readers who were clearly browsing multiple articles. If I'd taken that data at face value, I might have concluded the content wasn't engaging - and made content decisions based on a lie. The root cause was purely technical: HTMX fragment swaps weren't firing page_view events.

This is a pattern I see constantly in small product teams. You build something that works well for users, instrument it naively, then make strategic decisions based on metrics that don't reflect reality. Fixing the instrumentation is often higher-leverage than fixing the product.

What I shipped

1. Nav clicks now behave like article navigation

Article cards already worked correctly - they live in the normal DOM, where HTMX can handle fragment swaps and URL pushes. But the site navigation lives inside PrimaryLayout's shadow DOM, which HTMX's document-level processing can't see into. That created two concrete bugs: nav clicks fell back to full page loads instead of fragment swaps, and the browser URL wasn't updated.

The fix: PrimaryLayout now intercepts nav clicks, performs the fragment load into #main-content explicitly, and pushes the URL. The result is what users expect: the content swaps in place and the address bar updates, exactly like clicking an article card.

This also aligns with the mobile UX work I described in Designing Mobile-First Reading Experiences with PrimaryLayout.
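A minimal sketch of that interception, assuming a `primary-layout` custom element and htmx's documented `htmx.ajax` API (the `shouldIntercept` helper and its rules are mine, not the site's actual code):

```javascript
// Pure helper: only intercept same-origin internal links.
function shouldIntercept(href, origin) {
  if (!href) return false;
  try {
    return new URL(href, origin).origin === origin;
  } catch {
    return false;
  }
}

// Browser-only wiring, guarded so the helper stays testable elsewhere.
if (typeof window !== 'undefined' && window.htmx) {
  const layout = document.querySelector('primary-layout');
  layout.shadowRoot.addEventListener('click', (e) => {
    // composedPath() exposes the real <a>, even though shadow DOM
    // retargets e.target to the host element.
    const link = e.composedPath().find((el) => el.tagName === 'A');
    if (!link || !shouldIntercept(link.href, location.origin)) return;
    e.preventDefault();
    // Load the fragment into #main-content and update the address bar,
    // matching what HTMX does for links in the regular DOM.
    window.htmx.ajax('GET', link.href, { target: '#main-content' });
    history.pushState({}, '', link.href);
  });
}
```

The helper keeps external links on their normal full-navigation path, so only internal nav gets the fragment treatment.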

2. Analytics now tracks what readers actually do

The fix was straightforward: fire a virtual page_view event on every HTMX fragment swap, not just on full page loads.

Each virtual page view includes fromUrl and toUrl metadata, so I can reconstruct actual reading paths within a session. That's far more useful for content strategy than raw page counts.
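A sketch of the virtual page view, assuming htmx's documented `htmx:pushedIntoHistory` event; the `buildVirtualPageView` helper, the `/api/collect` endpoint, and the `sendBeacon` transport are my assumptions, not the post's actual implementation:

```javascript
// Pure event builder: the fromUrl/toUrl metadata comes from the post;
// the rest of the shape is illustrative.
function buildVirtualPageView(fromUrl, toUrl) {
  return {
    type: 'page_view',
    virtual: true, // distinguishes fragment swaps from full page loads
    fromUrl,
    toUrl,
    ts: Date.now(),
  };
}

// Browser-only wiring.
if (typeof document !== 'undefined') {
  let lastUrl = location.pathname;
  // htmx fires htmx:pushedIntoHistory after pushing a new URL;
  // e.detail.path carries the new path.
  document.body.addEventListener('htmx:pushedIntoHistory', (e) => {
    const event = buildVirtualPageView(lastUrl, e.detail.path);
    lastUrl = e.detail.path;
    // sendBeacon survives page unloads, so events aren't lost mid-navigation.
    navigator.sendBeacon('/api/collect', JSON.stringify(event));
  });
}
```

Because each event carries both ends of the transition, stitching events by session reconstructs the full reading path.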

The full analytics pipeline - Netlify Functions, Blob Store, GitHub snapshots - is documented in Zero-Server Analytics.

3. Manual rollups are safer

One operational improvement: manual analytics rollups now hit a dedicated endpoint that runs the same logic as the scheduled job without depending on scheduler-specific wrapper behavior. That means I can debug rollups in production without risking the daily scheduled job. Small thing, but it removes a class of "am I going to break my data pipeline?" anxiety when troubleshooting.
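The shape of that "shared logic, two entry points" pattern can be sketched as follows; the function names, event shape, and storage hooks are all assumptions, not the site's actual code:

```javascript
// Pure rollup: count page views per destination path for a batch of events.
function rollupEvents(events) {
  const counts = {};
  for (const ev of events) {
    if (ev.type !== 'page_view') continue;
    counts[ev.toUrl] = (counts[ev.toUrl] || 0) + 1;
  }
  return counts;
}

// Both entry points await the same function, so the manual endpoint can be
// exercised in production without touching the scheduler wrapper.
async function runRollup(loadEvents, saveSnapshot) {
  const events = await loadEvents();   // e.g. read raw events from the Blob Store
  const snapshot = rollupEvents(events);
  await saveSnapshot(snapshot);        // e.g. write the daily snapshot
  return snapshot;
}

// The scheduled Netlify Function and the manual HTTP endpoint would each
// just call runRollup(...) with the same dependencies injected.
```

Injecting `loadEvents` and `saveSnapshot` as parameters is what makes the dedicated endpoint safe: the rollup logic itself never knows which wrapper invoked it.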

Why this matters

These are the kinds of fixes that don't show up in a feature changelog but have an outsized impact on decision-making: navigation that behaves consistently everywhere, analytics that reflect real reading behavior, and a data pipeline that's safe to debug.

The total cost of these fixes was one evening of work and zero new dependencies. For a solo operator, that's the kind of leverage ratio that matters.

Next up

Recommended

- Anthropic Trained Its Replacement (ai, startups, founders)
- Pydantic: The Open Source Layer Quietly Running the AI Economy (ai, open-source, python, pydantic, anthropic, tools)
- Karpathy Was Wrong: OpenClaw Still Outruns Its 5 Real Alternatives (openclaw, ai, tools, security)
