Fixing Navigation and Analytics: When Your Data Lies About User Behavior
I had a classic small-team problem: the site felt great to use, but the data said nobody was reading more than one page per session. The data was wrong. Here's what I found and how I fixed it in an evening.
If you want the broader architectural context for why this site uses fragments and progressive enhancement, start with How This Blog Works and the HAL deep dive.
The real problem: bad data leads to bad decisions
Analytics showed one page view per session, even for readers who were clearly browsing multiple articles. If I'd taken that data at face value, I might have concluded the content wasn't engaging - and made content decisions based on a lie. The root cause was purely technical: HTMX fragment swaps weren't firing `page_view` events.
This is a pattern I see constantly in small product teams. You build something that works well for users, instrument it naively, then make strategic decisions based on metrics that don't reflect reality. Fixing the instrumentation is often higher-leverage than fixing the product.
What I shipped
1. Nav clicks now behave like article navigation
Article cards already worked correctly - they live in the normal DOM where HTMX can handle fragment swaps and URL pushes. But the site navigation lives inside PrimaryLayout's shadow DOM, which created two concrete bugs:
- HTMX threw `htmx:targetError` when it couldn't resolve `hx-target="#main-content"` across the shadow boundary.
- Even when the swap worked, URL history got out of sync - meaning back/forward behavior was broken.
The fix: PrimaryLayout now intercepts nav clicks, performs the fragment load into `#main-content` explicitly, and pushes the URL. The result is what users expect:
- Click nav item, fragment swaps, URL updates
- Back/forward buttons reload the correct fragment
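A minimal sketch of that interception, assuming HTMX is loaded globally and that PrimaryLayout's shadow root contains ordinary `<a>` nav links. The `isFragmentNavigation` helper and the `primary-layout` tag name are hypothetical illustrations, not the site's actual code:

```javascript
// Pure helper: only same-origin links should be handled as fragment
// navigations; external links fall through to the browser.
function isFragmentNavigation(href, origin) {
  try {
    return new URL(href, origin).origin === origin;
  } catch {
    return false;
  }
}

// Browser-only wiring: intercept clicks inside the shadow root, load the
// fragment into the light-DOM #main-content, and push the URL ourselves,
// since hx-target can't resolve across the shadow boundary.
if (typeof document !== 'undefined' && typeof htmx !== 'undefined') {
  const shadowRoot = document.querySelector('primary-layout').shadowRoot;
  shadowRoot.addEventListener('click', (event) => {
    const link = event.composedPath().find((el) => el.tagName === 'A');
    if (!link || !isFragmentNavigation(link.href, location.origin)) return;
    event.preventDefault();
    // htmx.ajax targets an element in the document, outside the shadow DOM,
    // and returns a promise that resolves after the swap settles.
    htmx.ajax('GET', link.href, { target: '#main-content', swap: 'innerHTML' })
      .then(() => history.pushState({}, '', link.href));
  });
}
```

Pushing the URL only after the swap resolves keeps history and content in lockstep, which is what makes back/forward reliable.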
This also aligns with the mobile UX work I described in Designing Mobile-First Reading Experiences with PrimaryLayout.
2. Analytics now tracks what readers actually do
The fix was straightforward:
- A hard page load fires `page_view` (as before).
- An HTMX swap into `#main-content` fires an additional `page_view`.
- Back/forward (`popstate`) also fires `page_view`.
Each virtual page view includes `fromUrl` and `toUrl` metadata, so I can reconstruct actual reading paths within a session. That's far more useful for content strategy than raw page counts.
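The wiring above can be sketched as follows. `htmx:afterSwap` and `popstate` are real browser/HTMX events; `buildPageView` and `sendEvent` are hypothetical helpers standing in for the site's analytics client:

```javascript
// Track the previous URL so each event carries the fromUrl -> toUrl pair.
let lastUrl = typeof location !== 'undefined' ? location.href : null;

// Pure helper: assemble the payload used to reconstruct reading paths.
function buildPageView(fromUrl, toUrl) {
  return { event: 'page_view', fromUrl, toUrl, ts: Date.now() };
}

function track(toUrl) {
  const payload = buildPageView(lastUrl, toUrl);
  lastUrl = toUrl;
  sendEvent(payload); // placeholder: e.g. navigator.sendBeacon to an analytics endpoint
}

if (typeof document !== 'undefined') {
  // Fragment swaps into #main-content count as virtual page views.
  document.body.addEventListener('htmx:afterSwap', (event) => {
    if (event.target.id === 'main-content') track(location.href);
  });
  // Back/forward restores also count.
  window.addEventListener('popstate', () => track(location.href));
}
```

Checking the swap target's `id` matters: HTMX can swap other regions (like the recommended-articles panel) without those counting as page views.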
The full analytics pipeline - Netlify Functions, Blob Store, GitHub snapshots - is documented in Zero-Server Analytics.
3. Manual rollups are safer
One operational improvement: manual analytics rollups now hit a dedicated endpoint that runs the same logic as the scheduled job without depending on scheduler-specific wrapper behavior. That means I can debug rollups in production without risking the daily scheduled job. Small thing, but it removes a class of "am I going to break my data pipeline?" anxiety when troubleshooting.
Why this matters
These are the kinds of fixes that don't show up in a feature changelog but have outsized impact on decision-making:
- Better data means better content decisions - I can now see which articles hold attention and which paths readers actually take.
- Correct navigation means lower bounce from frustrated users hitting broken back buttons.
- Still no framework overhead - this is all handled with event listeners and HTMX attributes. No router library, no state management layer.
The total cost of these fixes was one evening of work and zero new dependencies. For a solo operator, that's the kind of leverage ratio that matters.
Next up
- Polish the mobile nav interaction (auto-close after selection, tighter header spacing).
- Improve the recommended-articles refresh animation so it feels like a transition, not a reload.