The Static Site Playbook: Shipping a Content Product on a Near-Zero Budget
Most JAMstack content either hand-waves about "the modern web" or drowns in serverless buzzwords. This is neither. It's a concrete walkthrough of how I ship a content product with Markdown, a build.js script, and a 35KB JavaScript budget - and why every decision traces back to keeping costs low and velocity high.
For the component and interaction layers, see the companion pieces: Web Components + HTMX architecture and the minimal JavaScript approach.
Content pipeline: Markdown in, dual HTML out
All articles live in content/articles/*.md. Each file uses frontmatter (slug, title, description, image, published, tags) followed by standard Markdown.
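A typical article file might look like this (the field names come from the frontmatter list above; all values are hypothetical):

```markdown
---
slug: static-site-playbook
title: The Static Site Playbook
description: Shipping a content product on a near-zero budget.
image: /static/img/playbook.png
published: 2024-01-15
tags: [jamstack, static-sites]
---

Article body in standard Markdown…
```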
build.js processes every article and emits two artifacts:
- dist/article/{slug}/index.html - full page with <primary-layout>, so direct visits and crawlers get everything in one request.
- dist/article/{slug}/fragment.html - content-only HTML used by HTMX for client-side navigation.
```javascript
articles.forEach((article) => {
  const fragment = renderFragment(article);
  const fullPage = renderFullPage(article, fragmentAside);
  writeFileSync(join(articleDir, 'fragment.html'), fragment);
  writeFileSync(join(articleDir, 'index.html'), fullPage);
});
```
The fragment + full-page split is the key architectural decision. It gives you SPA-speed navigation for returning visitors while serving complete, crawlable HTML for search engines and first-time visitors. Two files per article, one build pass, no runtime complexity.
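A minimal sketch of what the two render helpers could look like (the function names match the build loop above, but the markup, the article.html field, and the layout details are assumptions, not the actual implementation):

```javascript
// Hypothetical sketch of the two render helpers in build.js.
function renderFragment(article) {
  // article.html is assumed to hold the Markdown already converted to HTML.
  return `<article data-slug="${article.slug}">
  <h1>${article.title}</h1>
  ${article.html}
</article>`;
}

function renderFullPage(article, aside) {
  // Wrap the same fragment in the full document shell so direct
  // visits and crawlers get everything in one request.
  return `<!doctype html>
<html lang="en">
<head><title>${article.title}</title></head>
<body>
  <primary-layout>
    ${renderFragment(article)}
    <aside>${aside}</aside>
  </primary-layout>
</body>
</html>`;
}
```

Because the full page embeds the same fragment, the two artifacts can never drift apart: there is exactly one rendering path per article.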
Homepage: pre-rendered, not client-fetched
The home page is also generated at build time. Instead of fetching JSON at runtime and rendering article cards in the browser, build.js sorts articles by date, renders the hero article, and injects recommendation slots using the same helper the article pages use. That keeps recommendation lists consistent across the site and ensures they exclude the currently viewed article.
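Sketched in build.js terms, the homepage step could look like this (function and field names here are illustrative assumptions, not the real code):

```javascript
// Hypothetical sketch of the homepage build step.
function buildHomepage(articles) {
  // Newest first; `published` is assumed to be an ISO date string.
  const sorted = [...articles].sort(
    (a, b) => new Date(b.published) - new Date(a.published)
  );
  const [hero, ...rest] = sorted;
  // Same recommendation helper the article pages use, so lists stay
  // consistent site-wide; article pages pass their own slug to exclude it.
  return { hero, recommendations: recommend(rest, 3) };
}

function recommend(pool, count, excludeSlug = null) {
  return pool
    .filter((article) => article.slug !== excludeSlug)
    .slice(0, count);
}
```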
Because the homepage is pure HTML, largest contentful paint stays low and everything is immediately crawlable. No loading spinners, no layout shift, no JavaScript required.
No bundler, by choice
There's no Vite, Webpack, or Rollup in this stack. Static assets and Web Components live in static/ and components/. The build script copies them:
copyDir('components', join(distDir, 'components'));
copyDir('static', join(distDir, 'static'));
ES modules are imported directly in the browser. Nothing to transpile, nothing to tree-shake, nothing to debug when a bundler plugin silently breaks. If the project outgrows this approach - say I need TypeScript or a design system with a build step - I'll add tooling deliberately rather than starting with it speculatively.
This isn't an ideological stance against bundlers. It's a business decision: the project doesn't need one yet, and premature tooling is a tax on iteration speed.
Deploy: boring by design
- npm run build (runs node build.js).
- Commit Markdown and source changes (never dist/ - it's gitignored).
- Netlify runs the same build script in CI, publishes dist/, and hosts Netlify Functions alongside the static assets.
The deploy doesn't rely on hidden build steps or platform-specific config. I can reproduce production locally with npm run serve at any time. That matters when you're the only person on call.
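The npm scripts behind that workflow stay small (the serve command here is an assumption; any static file server pointed at dist/ reproduces production):

```json
{
  "scripts": {
    "build": "node build.js",
    "serve": "npm run build && npx serve dist"
  }
}
```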
Redirects are a single file:
```
/* /index.html 200
```
One line gives you SPA-style routing fallback on any static host.
Why these choices compound
The individual decisions - Markdown content, static generation, no bundler, cheap hosting - aren't interesting in isolation. What matters is how they interact:
- Version control friendly. Markdown + a build script diff cleanly. No compiled output cluttering pull requests.
- Portable. The same dist/ folder deploys to Netlify, Vercel, GitHub Pages, Cloudflare Pages, or S3. No vendor lock-in, no migration project.
- Auditable. Artifacts are deterministic. Debugging is "rebuild and open the HTML file." That's a ten-second feedback loop.
- Cheap. Static hosting plus a couple of Netlify Functions costs effectively nothing. Compare that to running SSR infrastructure or a database for what is fundamentally a content product.
- One-person operable. No CI pipeline to babysit, no containers to patch, no framework upgrades that touch every file. I spend my time on content and product, not infrastructure.
Related reads
- Architecture overview: How This Blog Works
- Component layer: Web Components + HTMX Architecture
- Analytics layer: Zero-Server Analytics Pipeline
- Navigation layer: HAL - Build-Time Link Rewriting
What's next
I'm looking at three improvements, all chosen because they reduce operational friction rather than add features:
- Incremental builds - re-rendering only changed articles to keep build times near-instant as the library grows.
- RSS/Atom feeds - generated from the same Markdown source, zero additional authoring overhead.
- Deploy diff preview - showing how internal links and recommendation slots change with each commit, so I catch broken cross-references before they ship.
The discipline is staying boring. Pre-render everything, keep JavaScript tiny, and treat the build as something explainable on a whiteboard in five minutes. That's what lets the site ship fast, stay cheap, and scale without drama.