Zero-Server Analytics: How I Replaced a SaaS Bill with Netlify Functions and GitHub


Every analytics SaaS wants $20-50/month, a tracking pixel, and a cookie banner. For a content site that just needs to know what's getting read and where traffic comes from, that's a bad tradeoff - you're paying recurring fees and adding third-party dependencies for a dashboard you check once a week.

So I built a pipeline that runs itself: Netlify Functions collect events, Blob Store queues them, a scheduled function rolls them up into JSON snapshots, and GitHub stores the results in version control. Total monthly cost: zero. Total maintenance: also zero. If you read How My Automated Analytics Reports Work you already know the high-level story; this is the implementation walkthrough.

The business case in 30 seconds

I needed analytics that were private (no third-party cookies), cheap (no monthly SaaS), portable (data I own in a format I control), and automatic (nothing to remember to run). The constraint was that I'm a one-person operation - I can't afford to babysit infrastructure. Anything I build has to work unattended or it doesn't ship.

Architecture overview

  1. A lightweight client script (static/analytics.js) collects page-view events with privacy-friendly payloads.
  2. A Netlify Function (collect-analytics.js) validates each payload and writes it into Netlify Blob Store.
  3. A scheduled Netlify Function (rollup-analytics.js) runs daily at 5 AM, drains the previous day's blobs, builds a summary, commits the result as a JSON file to GitHub, and deletes processed events.
  4. Manual runs (npm run rollup:trigger -- --date YYYY-MM-DD) hit the same function with a token, so I can generate snapshots on demand.

No databases, no third-party analytics services, no dashboards to maintain. Just serverless functions and a Git repo.

Event collection: minimal client footprint

The client script grabs page URL, referrer, UTM params, a hashed visitor ID, and performance metrics, then ships the payload to the collect function. It uses sendBeacon when available so the request doesn't block navigation:

async function sendAnalytics(payload) {
  try {
    const body = JSON.stringify(payload);
    if (navigator.sendBeacon) {
      const blob = new Blob([body], { type: 'application/json' });
      if (navigator.sendBeacon(ANALYTICS_ENDPOINT, blob)) return;
    }
    await fetch(ANALYTICS_ENDPOINT, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body,
    });
  } catch (err) {
    console.warn('analytics send failed', err);
  }
}

Because the script lives in static/, the build process copies it straight to dist/static/ - what I debug locally is what runs in production. No transpiler, no bundler surprises. If you're using HTMX for navigation (as I describe in the architecture piece), you can add an htmx:afterSwap listener to emit virtual page views for client-side transitions.
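A minimal sketch of that listener. The buildVirtualPayload helper and its field names are illustrative assumptions, not the exact schema the collect function expects:

```javascript
// Assumed helper: builds a "virtual" page-view payload for an
// HTMX-driven navigation. Field names here are illustrative.
function buildVirtualPayload(path, referrer) {
  return { type: 'virtual-pageview', path, referrer, ts: Date.now() };
}

// Emit a virtual page view after HTMX swaps new content in.
// sendAnalytics is the function from the client script above.
if (typeof document !== 'undefined') {
  document.body.addEventListener('htmx:afterSwap', () => {
    sendAnalytics(buildVirtualPayload(location.pathname, document.referrer));
  });
}
```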

Collect function: validate, salt, stash

The collect function is under 200 lines. It rejects oversized payloads and missing required fields, salts the visitor hash with ANALYTICS_SALT to prevent reverse-engineering, and stores each event as JSON in Blob Store:

const store = getStore('analytics-events');
await store.set(`queue/${date}/${crypto.randomUUID()}.json`, entry);

Blob Store gives me append-only semantics without provisioning a database. Events are organized by date, which makes the daily rollup straightforward. This is the kind of infrastructure decision that matters when you're a small team: pick the simplest storage model that works, and move on.

Rollup function: the daily crunch

The rollup handler detects whether Netlify triggered it on schedule (via x-netlify-event: schedule) or I called it manually with a shared secret. Scheduled runs default to "yesterday"; manual runs accept a date override.

export const handler = schedule('0 5 * * *', rollupHandler);
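Inside rollupHandler, distinguishing the two trigger paths might look like this. The x-netlify-event header is the mechanism the post relies on; the function and parameter names are assumptions:

```javascript
// Decide how this invocation was triggered and which date to roll up.
// Scheduled runs carry the x-netlify-event: schedule header; manual
// runs must present the shared secret (token name is hypothetical).
function resolveRun(headers, query, token) {
  if (headers['x-netlify-event'] === 'schedule') {
    const y = new Date(Date.now() - 86_400_000); // default to yesterday
    return { runType: 'scheduled', date: y.toISOString().slice(0, 10) };
  }
  if (query.token !== token) {
    return { error: 401 }; // reject unauthenticated manual calls
  }
  return { runType: 'manual', date: query.date };
}
```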

Inside the handler:

  1. Load every blob under queue/{targetDate}/.
  2. Build summary stats - top paths, referrers, device mix, performance percentiles.
  3. Serialize the dataset and commit it to GitHub via the REST API.
  4. Delete the processed blobs.
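Step 2 is a straightforward reduce over the drained events. A sketch of the top-paths part, with assumed field names:

```javascript
// Count page views per path and return the top N, most-viewed first.
function topPaths(events, n = 10) {
  const counts = new Map();
  for (const e of events) {
    counts.set(e.path, (counts.get(e.path) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([path, views]) => ({ path, views }));
}
```

The referrer and device breakdowns follow the same shape; only the grouping key changes.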

File names include the run type so manual spot-checks never overwrite scheduled reports:

const suffix = runType === 'manual' ? '-manual' : '-scheduled';
const path = `analytics/${year}/${month}/${day}/analytics${suffix}.json`;
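Step 3's commit goes through the GitHub contents API (PUT /repos/{owner}/{repo}/contents/{path}), which expects base64-encoded content. A minimal sketch - repo name and token handling are assumptions:

```javascript
// Build the request for committing a JSON snapshot via the GitHub
// contents API. Creating a new file needs no sha; updating one would.
function buildCommitRequest(repo, path, data, token) {
  return {
    url: `https://api.github.com/repos/${repo}/contents/${path}`,
    options: {
      method: 'PUT',
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: 'application/vnd.github+json',
      },
      body: JSON.stringify({
        message: `analytics: ${path}`,
        content: Buffer.from(JSON.stringify(data, null, 2)).toString('base64'),
      }),
    },
  };
}
```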

Why GitHub as the data store?

This is the part that surprises people, but it makes total business sense. Every snapshot is versioned, diffable, and backed up the moment it's committed; the data sits next to the code that produced it; and plain JSON in a Git repo is the one storage format I'll never have to migrate out of.

For the broader hosting and deployment story, see How This Blog Works.

Authentication and safety rails

Scheduled invocations are identified by the x-netlify-event: schedule header; manual invocations must present the shared secret, and requests without it are rejected. The secrets themselves - the trigger token, ANALYTICS_SALT, and the GitHub token used for commits - live in Netlify environment variables, not in the repo.

Testing the pipeline

npm run rollup:trigger -- --date 2025-12-05 hits the production function, passing the token and an optional date. It exercises the same code path as the scheduled run, which makes it perfect for verifying environment variables after a deploy. A snapshot looks like this:

{
  "meta": {
    "date": "2025-12-05",
    "runType": "manual",
    "events": 63,
    "uniqueVisitors": 41
  },
  "traffic": { ... },
  "performance": { ... },
  "events": [ ... ]
}

Every JSON snapshot includes the raw events array so I can backfill, audit, or replay if I ever need to.
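The trigger script behind npm run rollup:trigger can be as small as a fetch with the token. A sketch - the file name, endpoint URL, and ROLLUP_TOKEN variable are assumptions:

```javascript
// scripts/trigger-rollup.js (hypothetical name)
// Usage: node scripts/trigger-rollup.js --date 2025-12-05

// Parse a --date YYYY-MM-DD flag from argv; undefined means "yesterday".
function parseDateArg(argv) {
  const i = argv.indexOf('--date');
  if (i === -1) return undefined;
  const date = argv[i + 1];
  if (!/^\d{4}-\d{2}-\d{2}$/.test(date || '')) {
    throw new Error(`invalid --date: ${date}`);
  }
  return date;
}

async function main() {
  const date = parseDateArg(process.argv.slice(2));
  const url = new URL('https://example.com/.netlify/functions/rollup-analytics');
  url.searchParams.set('token', process.env.ROLLUP_TOKEN);
  if (date) url.searchParams.set('date', date);
  const res = await fetch(url);
  console.log(res.status, await res.text());
}

// Invoke main() when running this as a script.
```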

What's next

The takeaway

If you're running a content site or a small SaaS marketing page, you probably don't need a $50/month analytics product. Netlify Functions + Blob Store + GitHub gives you a pipeline that costs nothing, runs unattended, and stores data in a format you'll never have to migrate out of. Start with the collect function, wire up the rollup, run the manual trigger once to prove it works, and move on to the stuff that actually grows the business.

Recommended

Anthropic Trained Its Replacement ai startups founders
Pydantic: The Open Source Layer Quietly Running the AI Economy ai open-source python pydantic anthropic tools
Karpathy Was Wrong: OpenClaw Still Outruns Its 5 Real Alternatives openclaw ai tools security
