# Migration Patterns
Patterns for moving existing applications onto Cloudflare Workers. These are general strategies, not step-by-step guides for specific frameworks. Each pattern covers what fits, what changes, and what to watch out for.
## Decision Framework
Before migrating, ask two questions:
- **Does my app fit the Workers model?** Workers excel at request/response workloads, API servers, and static+dynamic sites. They struggle with long-running processes, heavy compute, and TCP-dependent protocols.
- **Can I migrate incrementally?** The best migrations move one service at a time, not everything at once.
```mermaid
flowchart TD
    A["Existing App"] --> B{"Request/response\nworkload?"}
    B -->|Yes| C{"CPU per request\n< 30s?"}
    B -->|No| H["Consider Containers\n(when GA)"]
    C -->|Yes| D{"Needs external\nPostgres?"}
    C -->|No| H
    D -->|Yes| E["Hyperdrive +\nWorkers"]
    D -->|No| F{"Data fits\nSQLite model?"}
    F -->|Yes| G["D1 + Workers"]
    F -->|No| E
```
## Pattern 1: ISR / Static Sites with Dynamic Revalidation
Migrating from: Vercel ISR, Netlify on-demand builders, self-hosted SSG with cron rebuilds.
Cloudflare approach: Static assets served from Workers Static Assets, with a cron-triggered Worker that regenerates stale pages.
```ts
// Cron Worker that rebuilds stale pages.
// renderPage() is assumed to be defined elsewhere in the project.
export default {
  async scheduled(event: ScheduledEvent, env: Env): Promise<void> {
    const stalePages = await env.DB.prepare(
      "SELECT url FROM pages WHERE updated_at < datetime('now', '-1 hour')"
    ).all<{ url: string }>();

    for (const page of stalePages.results) {
      const html = await renderPage(page.url);
      await env.BUCKET.put(`pages/${page.url}.html`, html, {
        httpMetadata: { contentType: "text/html" },
      });
      await env.DB.prepare(
        "UPDATE pages SET updated_at = datetime('now') WHERE url = ?"
      ).bind(page.url).run();
    }
  },
};
```
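The cron schedule itself lives in wrangler config. A minimal sketch, assuming D1 and R2 bindings matching the names above; the worker name, schedule, database ID, and bucket name are illustrative placeholders:

```jsonc
// wrangler.jsonc — illustrative values only
{
  "name": "revalidate-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-09-23",
  "triggers": {
    "crons": ["0 * * * *"] // hourly, matching the 1-hour staleness window
  },
  "d1_databases": [
    { "binding": "DB", "database_name": "pages", "database_id": "<your-database-id>" }
  ],
  "r2_buckets": [
    { "binding": "BUCKET", "bucket_name": "rendered-pages" }
  ]
}
```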
What changes:
- Static assets deploy to Workers Static Assets (automatic CDN)
- Dynamic revalidation runs as a cron-triggered Worker
- Page data lives in D1, generated HTML caches in R2 or KV
Watch out for:
- No built-in ISR primitive like Vercel’s `revalidate`. You build the invalidation logic yourself.
- KV is faster for reads than R2 but has a 25 MB value limit. Use R2 for full HTML pages over that size.
## Pattern 2: API Servers with Hono
Migrating from: Express, Fastify, Koa, or any Node.js API server.
Cloudflare approach: Use Hono as the routing framework on Workers. Hono has the same middleware model as Express but is built for edge runtimes.
```ts
import { Hono } from "hono";
import { cors } from "hono/cors";
import { bearerAuth } from "hono/bearer-auth";

type Bindings = {
  DB: D1Database;
  WEBHOOK_QUEUE: Queue;
};

const app = new Hono<{ Bindings: Bindings }>();

app.use("/api/*", cors());
// In production, load the token from a secret binding rather than hardcoding it.
app.use("/api/admin/*", bearerAuth({ token: "secret" }));

app.get("/api/webhooks", async (c) => {
  const results = await c.env.DB.prepare(
    "SELECT * FROM webhooks ORDER BY created_at DESC LIMIT 50"
  ).all();
  return c.json(results);
});

app.post("/api/webhooks", async (c) => {
  const body = await c.req.json();
  await c.env.WEBHOOK_QUEUE.send(body);
  return c.json({ queued: true }, 202);
});

export default app;
```
Migration checklist:
| Express/Fastify | Hono on Workers |
|---|---|
| `app.get("/path", handler)` | Same syntax |
| `req.body` | `c.req.json()` (async) |
| `req.params.id` | `c.req.param("id")` |
| `req.query.page` | `c.req.query("page")` |
| `res.json(data)` | `return c.json(data)` |
| `process.env.DB_URL` | `c.env.DB` (binding) |
| `npm install pg` | Use D1 or Hyperdrive |
| Body parser middleware | Built-in (no middleware needed) |
| `express.static()` | Workers Static Assets |
What changes:
- No `process.env` - use Workers bindings instead
- No filesystem access - use R2 for file storage
- No raw TCP - use Hyperdrive for database connections
- No long-lived server - each request is a fresh invocation
Watch out for:
- Express middleware that uses `req`/`res` mutation patterns won’t port directly
- `node:fs`, `node:net`, and `node:child_process` are not available
- Enable the `nodejs_compat` compatibility flag for packages that need Node.js APIs
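The `nodejs_compat` flag from the list above is set in wrangler config. A minimal sketch (the compatibility date is illustrative):

```jsonc
// wrangler.jsonc
{
  "compatibility_date": "2024-09-23",
  "compatibility_flags": ["nodejs_compat"]
}
```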
## Pattern 3: Full-Stack Apps with the Vite Plugin
Migrating from: Next.js, Remix, SvelteKit, or any SSR framework.
Cloudflare approach: Use the `@cloudflare/vite-plugin` with React (or your framework of choice) for a unified build that deploys as a single Worker.
```ts
// vite.config.ts
import { cloudflare } from "@cloudflare/vite-plugin";
import react from "@vitejs/plugin-react";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [react(), cloudflare()],
});
```
The plugin reads `wrangler.jsonc` for all bindings. No duplicate config needed.
What changes:
- SSR runs on Workers (V8 isolate, not Node.js)
- API routes share the same Worker, accessed via bindings
- Static assets serve from Workers Static Assets
- No server-side Node modules (no `fs`, no `path.join` for files)
Watch out for:
- Server components that depend on Node.js APIs need alternatives
- The `@cloudflare/vite-plugin` reads `wrangler.jsonc` automatically. Do not duplicate bindings in `vite.config.ts`.
- Use `run_worker_first: ["/api/*"]` in the assets config to route API paths through the Worker
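The `run_worker_first` setting lives in the assets section of wrangler config. A sketch, assuming the client build lands in `./dist/client` (the directory name is illustrative):

```jsonc
// wrangler.jsonc
{
  "assets": {
    "directory": "./dist/client",
    "run_worker_first": ["/api/*"]
  }
}
```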
## Pattern 4: Database Migration

### Postgres to D1
When to use: Your data fits SQLite’s model (single-writer, modest size, relational queries).
```sql
-- Most Postgres SQL works in D1 with minor adjustments

-- Postgres:
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

-- D1 (SQLite):
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT UNIQUE NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);
```
Key differences:
| Feature | Postgres | D1 (SQLite) |
|---|---|---|
| Types | Strict typing | Type affinity (flexible) |
| Auto-increment | `SERIAL` | `INTEGER PRIMARY KEY AUTOINCREMENT` |
| Timestamps | `TIMESTAMP` | `TEXT` with `datetime()` |
| JSON | `JSONB` with operators | `json_extract()` function |
| Full-text search | `tsvector` | `fts5` virtual tables |
| Connection model | TCP connection pool | HTTP binding (no connection) |
| Max DB size | Unlimited | 10 GB (paid) |
| Transactions | Full ACID | Full ACID (single-writer) |
Gotcha: D1 is single-writer. All writes go through one primary. Reads scale across replicas. If your workload is write-heavy (>1K writes/sec sustained), D1 may not be the right fit.
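Because D1 stores timestamps as TEXT, application code has to write them in the exact format `datetime('now')` produces, or string comparisons against those defaults give wrong results. A hypothetical helper (`toSqliteDatetime` is not part of any SDK):

```typescript
// Hypothetical helper: format a JS Date the way SQLite's datetime('now')
// renders it (UTC, "YYYY-MM-DD HH:MM:SS"), so lexicographic comparison
// in D1 matches chronological order.
export function toSqliteDatetime(date: Date): string {
  return date.toISOString().slice(0, 19).replace("T", " ");
}
```

Use it when binding timestamp values into an INSERT or UPDATE, e.g. `stmt.bind(toSqliteDatetime(new Date()))`.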
### Postgres to Hyperdrive
When to use: You want to keep Postgres but connect from Workers. Hyperdrive pools connections and caches query results.
```ts
// Requires the `nodejs_compat` compatibility flag for the `pg` driver.
// Hyperdrive pools connections near your database and hands the Worker
// a pooled connection string via the binding.
import { Client } from "pg";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const client = new Client(env.HYPERDRIVE.connectionString);
    await client.connect();
    const result = await client.query("SELECT * FROM users LIMIT 10");
    await client.end();
    return Response.json(result.rows);
  },
};
```
Watch out for:
- Local dev (`wrangler dev`) connects directly, bypassing Hyperdrive, so cache behavior differs.
- Hyperdrive caches read queries by default; writes always pass through. Disable caching in the Hyperdrive configuration if you need immediate read-after-write consistency.
- Connection pooling reduces cold-start latency but does not eliminate it.
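The Hyperdrive binding used above is declared in wrangler config. A sketch; the id is a placeholder for the value returned by `wrangler hyperdrive create`:

```jsonc
// wrangler.jsonc
{
  "compatibility_flags": ["nodejs_compat"], // needed for the `pg` driver
  "hyperdrive": [
    { "binding": "HYPERDRIVE", "id": "<your-hyperdrive-config-id>" }
  ]
}
```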
## Pattern 5: File Storage (S3 to R2)
Migrating from: AWS S3 or any S3-compatible storage.
R2 is S3-compatible at the API level. Most S3 SDKs work with R2 by changing the endpoint.
```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Same SDK, different endpoint
const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: R2_ACCESS_KEY_ID,
    secretAccessKey: R2_SECRET_ACCESS_KEY,
  },
});

// Works exactly like S3
await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "uploads/photo.jpg",
  Body: fileBuffer,
}));
```
From a Worker, use the binding instead (no credentials needed):
```ts
// Direct R2 binding - simpler, no auth config
await env.BUCKET.put("uploads/photo.jpg", request.body, {
  httpMetadata: { contentType: "image/jpeg" },
});
```
Why R2 over S3:
- Zero egress fees (the main selling point)
- S3-compatible API (drop-in replacement for most uses)
- Direct Worker bindings (no credentials in environment)
- Automatic CDN via public bucket or Worker
Watch out for:
- R2 does not support S3 event notifications (use Workers + Queues instead)
- R2 lifecycle rules are more limited than S3’s
- No S3 Select equivalent; download the object and process in the Worker
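Since R2 lacks S3-style event notifications, a common substitute is to emit the event yourself from the Worker that performs the upload. A minimal sketch, assuming hypothetical `BUCKET` (R2) and `EVENTS` (Queues) binding names; the content-type map is an illustrative subset:

```typescript
// Map file extensions to content types (illustrative subset).
const EXTENSION_TYPES: Record<string, string> = {
  jpg: "image/jpeg",
  png: "image/png",
  pdf: "application/pdf",
};

export function contentTypeFor(key: string): string {
  const ext = key.split(".").pop()?.toLowerCase() ?? "";
  return EXTENSION_TYPES[ext] ?? "application/octet-stream";
}

// Structural types standing in for the real R2Bucket and Queue bindings.
interface Env {
  BUCKET: { put(key: string, value: ReadableStream | null, options?: unknown): Promise<unknown> };
  EVENTS: { send(message: unknown): Promise<void> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1); // e.g. "uploads/photo.jpg"
    await env.BUCKET.put(key, request.body, {
      httpMetadata: { contentType: contentTypeFor(key) },
    });
    // R2 emits no event, so publish the "object created" message ourselves.
    await env.EVENTS.send({ type: "object-created", key });
    return Response.json({ key }, { status: 201 });
  },
};
```

A queue consumer Worker can then react to uploads the way an S3 event handler would.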
## What Doesn’t Fit (Yet)
Some workloads are not a good match for Workers today:
| Workload | Why it doesn’t fit | Alternative |
|---|---|---|
| Video transcoding | CPU-intensive, exceeds 30s | Containers (beta) or external service |
| ML model inference (large) | Memory limit (128 MB), model size | Workers AI (managed) or Containers |
| WebSocket server with heavy state | DO single-threading bottleneck at scale | Shard across multiple DOs |
| Long-running background jobs (>15 min) | Workflow step timeout limits | Containers or external compute |
| Apps needing raw TCP | No socket API | Tunnels to your infrastructure |
| Write-heavy databases | D1 single-writer bottleneck | Hyperdrive + external Postgres |
Cloudflare Containers (currently in beta) will address the heavy compute and long-running process gaps. Until GA, use Workers for the request/response layer and offload compute to external services.
## Incremental Migration Strategy
The safest approach is to migrate one capability at a time:
- Start with static assets - deploy your frontend to Workers Static Assets
- Add an API route - put one endpoint on a Worker, proxy the rest to your existing backend via `fetch()`
- Move storage - migrate S3 to R2, or add D1 for new features
- Add async processing - use Queues to offload background work
- Cut over - once all routes are on Workers, decommission the old server
This pattern works because Workers can proxy to any origin. You do not need to migrate everything at once.
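The proxy step can be sketched as a Worker that serves migrated routes itself and forwards everything else to the old origin. `MIGRATED_PREFIXES`, the `ORIGIN_URL` binding, and the inline response are placeholders for your own routes and backend:

```typescript
// Paths already migrated to the Worker; everything else proxies through.
const MIGRATED_PREFIXES = ["/api/webhooks"];

export function shouldHandleLocally(path: string): boolean {
  return MIGRATED_PREFIXES.some((p) => path === p || path.startsWith(p + "/"));
}

interface Env {
  ORIGIN_URL: string; // e.g. "https://legacy.example.com"
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (shouldHandleLocally(url.pathname)) {
      // Migrated route: handle on the Worker.
      return Response.json({ served: "worker", path: url.pathname });
    }
    // Everything else: forward to the existing backend unchanged.
    const target = new URL(url.pathname + url.search, env.ORIGIN_URL);
    return fetch(new Request(target, request));
  },
};
```

As each route moves over, add its prefix to the list; when the list covers everything, the proxy branch is dead code and the old server can be retired.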