MakeThumb

A self-hosted Vercel-style preview platform that builds a GitHub repo inside an isolated container, pushes the static output to object storage, and serves it on a per-project subdomain through a Cloudflare Worker reverse proxy.

  • Cloudflare Workers
  • Cloudflare Containers
  • Hono
  • D1
  • R2
  • Docker
  • Node.js

Problem

The path from "I have a static-site repo on GitHub" to "it's served at a public URL" is genuinely a chore for one-off preview deploys: provision a runner, install Node, clone, install, build, push to a bucket, configure a custom domain. Vercel and Netlify do this beautifully — and their pricing is fair — but I wanted to know what it actually takes to build this primitive end-to-end on infrastructure I control. MakeThumb is that primitive: paste a GitHub URL, pick a slug, get a working subdomain.

It's also the smallest interesting product where every piece of a modern stack — edge worker, container, durable object, object storage, custom-domain routing — has to do its job.

Approach

The pipeline runs entirely on Cloudflare:

Browser → Cloudflare Worker (api-server, Hono)
              ↓
         Durable Object (BuildContainer)
              ↓
         Cloudflare Container (Node + git + pnpm)
              ↓ build artifact
         R2 bucket (__outputs/{slug}/)
              ↑
         Cloudflare Worker (reverse-proxy)  ←  *.makethumb.app
  1. The browser POSTs a { github_repository, project_name } pair to the api-server worker. The worker validates the slug shape (^[a-z0-9]+(?:-[a-z0-9]+)*$) and the URL.
  2. The api-server picks a BuildContainer durable object instance and forwards the request. The container is a Node-on-Alpine image with git and pnpm installed; it clones the repo, runs pnpm install && pnpm run build, and uploads the contents of dist/ (or build/) into R2 under __outputs/{slug}/.
  3. A second worker — the reverse-proxy — is bound to *.makethumb.app. When a request comes in for myproject.makethumb.app/index.html, the proxy parses the subdomain off the hostname and rewrites the request to fetch https://<r2-endpoint>/__outputs/myproject/index.html. If the asset is missing it serves a friendly 404.
  4. D1 holds a tiny table of { slug, github_repository, status } so duplicate slugs are rejected at submit time and the dashboard can show what exists.
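The submit-time checks in step 1 can be sketched as two pure helpers. This is a sketch, not the real code: only the slug regex comes from the text; the function names and the exact GitHub-URL rule are my assumptions.

```typescript
// Slug rule from the text: lowercase alphanumeric runs joined by single hyphens.
const SLUG_RE = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;

function isValidSlug(slug: string): boolean {
  return SLUG_RE.test(slug);
}

// Hypothetical URL check: accept only https://github.com/{owner}/{repo}.
function isGithubRepoUrl(raw: string): boolean {
  try {
    const url = new URL(raw);
    const parts = url.pathname.split("/").filter(Boolean);
    return (
      url.protocol === "https:" &&
      url.hostname === "github.com" &&
      parts.length === 2
    );
  } catch {
    return false; // not parseable as a URL at all
  }
}
```

Keeping these as pure functions means the same validation can run in the api-server handler and in any later consumer of the slug without importing Worker-specific types.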

That's the entire backend. There is no long-lived API server, no orchestrator, no Postgres.

Components

Folder                               Runtime                         Responsibility
backend/api-server/                  Cloudflare Worker (Hono)        POST /build endpoint, slug validation, D1 metadata.
backend/api-server/build-container/  Cloudflare Container (Node 20)  Clone → pnpm install → pnpm run build → upload to R2.
backend/reverse-proxy/               Cloudflare Worker               *.makethumb.app → R2 path rewrite. Subdomain → bucket prefix.
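The reverse-proxy's rewrite in the last row can be sketched as a pure hostname → R2-key function. The __outputs/ prefix and the wildcard host come from the text; the index.html fallback and the repeated slug check at proxy time are my assumptions.

```typescript
const ROOT = "makethumb.app";
const SLUG_RE = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;

// Returns the R2 object key for a request, or null if the host doesn't
// map to a project (apex domain, foreign host, malformed slug).
function r2KeyFor(hostname: string, pathname: string): string | null {
  const host = hostname.toLowerCase(); // hostnames are case-insensitive
  if (!host.endsWith("." + ROOT)) return null;
  const slug = host.slice(0, -(ROOT.length + 1));
  if (!SLUG_RE.test(slug)) return null; // rejects "a.b" nested subdomains too
  const path = pathname === "/" ? "/index.html" : pathname;
  return `__outputs/${slug}${path}`;
}

// r2KeyFor("myproject.makethumb.app", "/index.html")
//   → "__outputs/myproject/index.html"
```

Because the function is pure, the routing rule is trivially unit-testable without spinning up a Worker.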

Key decisions

  • Cloudflare Containers, not a Kubernetes pool. The build is bursty (~30–90s) and idle most of the time. A Durable Object spinning up a container per request is exactly the right shape: no idle worker bill, no autoscaling controller to babysit.
  • R2 + a Worker proxy, not custom-domain CDN per project. Provisioning a new edge cert for every project would be miserable. A wildcard *.makethumb.app cert plus a Worker that does subdomain → prefix routing means new projects are zero-config — they're live the moment the build finishes uploading.
  • Slug validation up front. The slug becomes both an R2 prefix and a public subdomain. Locking it to [a-z0-9-] early avoided a class of routing bugs (uppercase subdomains, underscored bucket keys) before they could happen.
  • D1 is enough. A handful of rows per project, no joins, no transactions across projects. Reaching for Postgres here would have been overkill.
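For concreteness, the D1 table from the last bullet might look like this. A sketch only: the { slug, github_repository, status } shape comes from the text, but the table name, column constraints, and status default are assumptions.

```typescript
// Hypothetical D1 schema; with slug as the primary key, the INSERT itself
// is the duplicate-slug check at submit time.
const SCHEMA = `
  CREATE TABLE IF NOT EXISTS projects (
    slug              TEXT PRIMARY KEY,  -- doubles as subdomain and R2 prefix
    github_repository TEXT NOT NULL,
    status            TEXT NOT NULL DEFAULT 'queued'
  );
`;

// The statement the api-server would prepare against its D1 binding;
// a constraint violation here means the slug is already taken.
const CLAIM_SLUG =
  "INSERT INTO projects (slug, github_repository) VALUES (?1, ?2)";
```

Leaning on the primary-key constraint keeps the "is this slug taken?" check atomic, rather than a SELECT-then-INSERT race between two submitters.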

Lessons learned

  • Container cold starts are fine; build steps are not. The Cloudflare Container itself spins up in single-digit seconds. The slow part is pnpm install on a fresh repo. Caching the lockfile-derived store between builds (when slugs are reused) is the single biggest perf win on the roadmap.
  • Subdomain routing is a foot-gun without strict slug rules. If you allow capital letters or dots in slugs, you'll discover that hostnames are case-insensitive, that pages.dev already exists, and that "subdomain.subdomain.makethumb.app" is technically valid. A ^[a-z0-9]+(?:-[a-z0-9]+)*$ check at submit time is worth ten checks at proxy time.
  • The proxy is where you put the friendly 404. I tried serving the 404 from R2 first, then from the api-server, before settling on the proxy. The proxy is the only thing that already knows whether a subdomain is "valid but unbuilt" vs. "asset missing for a real project," so it should own that error path.
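The last point reduces to one decision only the proxy can make. A minimal sketch, with illustrative names; a real proxy would source knownSlugs from D1 or a cached slug list rather than a literal Set.

```typescript
type Miss = "unknown-project" | "asset-missing";

// The proxy sees both facts at once: whether the subdomain names a real
// project, and whether R2 had the asset. Neither R2 nor the api-server
// sees both, which is why the friendly 404 lives here.
function classifyMiss(knownSlugs: Set<string>, slug: string): Miss {
  return knownSlugs.has(slug) ? "asset-missing" : "unknown-project";
}
```

Each branch can then render a different friendly page: "this project hasn't been built yet" versus "that file doesn't exist in this project".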