How This Site Works: Architecture for a One-Person Team


This site is built by one person. That constraint drives every technical decision. The stack needs to be fast to build, cheap to host, easy to maintain, and resilient when I'm not paying attention to it. Here's what I chose, why I chose it, and what it actually costs.

The problem I was solving

I needed a content site that:

  - loads fast and ranks well in search,
  - costs effectively nothing to host,
  - needs minimal maintenance, and
  - keeps working when I'm not paying attention to it.

A typical React or Next.js blog would solve some of these problems while creating others: 150-300KB of JavaScript, a complex build pipeline, framework lock-in, and ongoing maintenance overhead. For a content site run by one person, that's paying enterprise rent for a studio apartment.

The stack

Three layers, each chosen for a specific reason:

┌─────────────────────────────────────────┐
│  Static Site Generation (SSG)           │
│  Pre-rendered HTML at build time        │
│  → SEO, performance, zero hosting cost  │
└─────────────────────────────────────────┘
                     ↓
┌─────────────────────────────────────────┐
│  Web Components                         │
│  Native browser APIs for interactivity  │
│  → No framework runtime, no lock-in     │
└─────────────────────────────────────────┘
                     ↓
┌─────────────────────────────────────────┐
│  HTMX                                   │
│  Fragment swaps + client-side routing   │
│  → SPA feel, 14KB total, graceful       │
│    degradation                          │
└─────────────────────────────────────────┘

Total JavaScript: ~35-40KB. Compare that to 200-300KB for a typical React blog. That's not a philosophical statement -- it's a measurable difference in page speed, hosting cost, and maintenance burden.

Layer 1: Static Site Generation

At build time, a script reads Markdown files and generates two versions of each article:

  1. Full page (/article/{slug}/index.html) -- complete HTML with layout, scripts, and content. Used for direct access, SEO, and social sharing. Works without JavaScript.
  2. Fragment (/article/{slug}/fragment.html) -- just the article content. Used for HTMX swaps. Smaller file, faster navigation.

This dual-output approach is the foundation. Search engines see full pages. Returning visitors get instant fragment swaps. Both paths serve the same content from the same build. For a deeper dive on why SSG beats SSR for content sites, see Why Static Site Generation is Superior for Content Sites.
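The dual-output idea can be sketched in a few lines (the function names here are illustrative, not the site's actual build code). The same converted article HTML feeds both outputs:

```javascript
// Illustrative sketch of the dual-output build step; names are assumptions,
// not the site's real build script.

// Fragment: the article content only, served for HTMX swaps.
function renderFragment(articleHtml) {
  return articleHtml;
}

// Full page: standalone HTML that works without JavaScript and is
// what search engines and direct visitors receive.
function renderFullPage(articleHtml, title) {
  return (
    `<!doctype html><html><head><title>${title}</title></head>` +
    `<body><main id="main-content">${articleHtml}</main></body></html>`
  );
}
```

Because both functions consume the same input, the two outputs can't drift apart between builds.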

Why this matters for the business

Pre-rendered pages mean search engines index complete HTML, visitors get fast first loads, and hosting is a folder of static files that costs effectively nothing to serve.

Layer 2: Web Components

The UI components -- layout, article cards, interactive elements -- are built with native Web Components. No React, no Vue, no Svelte.

class PrimaryLayout extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>/* Scoped styles */</style>
      <div class="pl-container">
        <header>...</header>
        <main><slot name="pl-content"></slot></main>
        <aside><slot name="pl-aside"></slot></aside>
      </div>
    `;
  }
}
customElements.define('primary-layout', PrimaryLayout);

Why Web Components instead of a framework?

This isn't a purity argument. It's a pragmatic one:

  - Components run on native browser APIs, so there's no framework runtime to ship to every visitor.
  - Web standards don't have breaking releases; the code won't rot when a framework publishes a new major version.
  - There's no vendor or framework to migrate away from later.

For a deeper look at Web Components as a business decision, see Web Components as a Business Decision. The tradeoffs are real -- no built-in state management, fewer pre-built component libraries, less community tooling. For a content site, those tradeoffs are easy to accept. For a complex SaaS app, I'd evaluate differently.

Layer 3: HTMX for navigation

HTMX handles client-side navigation using HTML attributes instead of JavaScript:

<article-card
  hx-get="/article/some-article/fragment.html"
  hx-push-url="/article/some-article"
  hx-target="#main-content"
  hx-swap="innerHTML">
</article-card>

When a user clicks an article card:

  1. HTMX fetches the fragment HTML (small, fast).
  2. Content swaps into #main-content (no page reload).
  3. Browser URL updates via history.pushState.
  4. If the user refreshes, the browser loads the full pre-rendered page.

The result is SPA-like navigation powered by ~14KB of library code instead of 100-300KB of framework + router. The build-time link rewriting that makes this seamless is detailed in HAL: Cutting 100-300KB of JavaScript by Moving Routing to Build Time.
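That build-time rewriting can be imagined roughly like this (a minimal sketch with an assumed regex approach; the HAL article describes the real mechanism):

```javascript
// Sketch: at build time, add HTMX attributes to internal article links.
// The href stays intact, so without JavaScript the link still loads the
// full pre-rendered page. Regex-based rewriting is an assumption here;
// a real build might walk a parsed DOM instead.
function rewriteArticleLinks(html) {
  return html.replace(
    /<a href="\/article\/([\w-]+)"/g,
    (_, slug) =>
      `<a href="/article/${slug}"` +
      ` hx-get="/article/${slug}/fragment.html"` +
      ` hx-push-url="/article/${slug}"` +
      ` hx-target="#main-content" hx-swap="innerHTML"`
  );
}
```

Because the attributes are added at build time, no router code ships to the browser at all.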

Progressive enhancement, not progressive degradation

If JavaScript fails or is disabled:

  - Article links resolve to the full pre-rendered pages.
  - The site works as a classic multi-page site: navigation, content, and SEO intact.

If JavaScript works:

  - HTMX intercepts clicks and swaps lightweight fragments into the page.
  - The URL updates via history.pushState, so back, forward, and refresh behave normally.

The site doesn't require JavaScript to function. It benefits from it. That's a meaningful distinction for accessibility, SEO, and resilience. More on this approach in HTMX and Progressive Enhancement.

The build process

Articles are Markdown files with frontmatter:

---
slug: how-this-blog-works
title: How This Site Works
description: Architecture overview...
image: https://...
published: 2024-01-20
tags:
  - web development
  - architecture
---

# Article content here...

The build script reads all Markdown files, parses frontmatter, converts Markdown to HTML, generates full pages and fragments, copies static assets, and outputs everything to a dist/ folder. No Webpack, no Babel, no complex configuration. The build runs in seconds.
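A minimal frontmatter parser for the format above might look like this (a sketch; a real build would more likely reach for a small library such as gray-matter):

```javascript
// Minimal frontmatter parser sketch for the format shown above.
// Illustrative only; it handles scalar keys and simple "- item" lists,
// not full YAML.
function parseFrontmatter(source) {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: source };

  const meta = {};
  let lastKey = null;
  for (const line of match[1].split('\n')) {
    const kv = line.match(/^(\w[\w-]*):\s*(.*)$/);
    if (kv) {
      lastKey = kv[1];
      meta[lastKey] = kv[2] || [];   // an empty value starts a list (e.g. tags:)
    } else if (lastKey && line.trim().startsWith('- ')) {
      meta[lastKey].push(line.trim().slice(2));
    }
  }
  return { meta, body: match[2] };
}
```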

Deployment is pushing static files to Netlify. That's it. No Docker, no CI/CD pipeline beyond what Netlify provides out of the box.
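The entire deploy configuration can fit in a few lines of netlify.toml. This is a hedged sketch, not the site's actual config; the `node build.js` entry point is an assumption, while `dist/` matches the output folder described above:

```toml
# Assumed minimal Netlify config; the site's real file may differ.
[build]
  command = "node build.js"   # illustrative build entry point
  publish = "dist"            # the generated static output folder
```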

The tradeoffs (honestly)

Every architecture has costs. Here are this one's:

  1. No TypeScript. Currently plain JavaScript. Could add JSDoc types or tsc --noEmit for type checking without a compile step, but haven't needed it yet at this scale.
  2. Limited component ecosystem. React has thousands of pre-built components. Web Components have fewer. For a content site, I haven't hit this constraint. For a complex dashboard, I would.
  3. Manual state management. No built-in reactivity system. For a blog, state is minimal (current article, theme preference). For an app with complex shared state, this would be a pain point.
  4. Fewer debugging tools. React DevTools are excellent. Web Component debugging is Chrome DevTools, which is capable but less specialized.
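The JSDoc route from the first tradeoff is worth illustrating: types live in comments, tsc --noEmit --checkJs verifies them, and nothing changes at runtime. The function below is a hypothetical example, not code from the site:

```javascript
// JSDoc annotations: type-checked with `tsc --noEmit --checkJs`,
// but plain comments at runtime, so no compile step is needed.
// `fragmentPath` is a hypothetical helper, not from the actual codebase.

/**
 * @param {string} slug - the article's URL identifier
 * @returns {string} the path to that article's HTMX fragment
 */
function fragmentPath(slug) {
  return `/article/${slug}/fragment.html`;
}
```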

These are real costs. They're acceptable for this project because the benefits -- performance, simplicity, zero hosting cost, no framework churn -- outweigh them for a content site built by one person.

The bottom line

This architecture isn't the right choice for every project. It's the right choice for this project: a content site built and maintained by one person, where page speed, SEO, and low maintenance are the priorities.

The total JavaScript footprint is ~35KB. Hosting costs effectively nothing. The build takes seconds. There's no framework to upgrade and no vendor to migrate away from. Every piece of the stack is a web standard or a small, stable library.

If you're building a content site, a marketing site, or a documentation site -- especially as a small team -- this approach is worth evaluating. Not because it's technically pure, but because it's cheap, fast, and low-maintenance. Those are business virtues, not just engineering ones.

For the operational side of running this site, see Documentation That Scales: Constitution, Contracts, and Runbooks. For the analytics pipeline, see Why I Built My Own Analytics Pipeline.

