# Workers
A Worker is a JavaScript/TypeScript program that runs on Cloudflare’s edge in response to HTTP requests, cron triggers, or queue messages. Every Worker exports handlers for the event types it responds to.
## Module Syntax

Workers use ES module syntax exclusively. The older Service Worker syntax (`addEventListener("fetch", ...)`) is deprecated. Every Worker exports a default object with handler methods:

```ts
export default {
  // HTTP requests
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    return new Response("hello");
  },

  // Cron triggers
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
    ctx.waitUntil(doWork(env));
  },

  // Queue consumer
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      await processMessage(msg);
      msg.ack();
    }
  },
};
```
## Startup Phase vs Request Phase

Workers have two execution phases with different capabilities:

```ts
// STARTUP PHASE - runs once when the isolate is created.
// Can do synchronous initialization and import modules.
// Cannot use bindings (env is not available yet).
const router = new Hono();
const config = { version: "1.0" };

// REQUEST PHASE - runs per request.
// Can use bindings, make subrequests, and read/write storage.
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    // env.DB is available here, not at startup
    const data = await env.DB.prepare("SELECT 1").first();
    return Response.json(data);
  },
};
```
Gotcha: Global variables persist between requests served by the same isolate, but you cannot depend on this: the isolate may be evicted at any time. Use KV, D1, or Durable Objects for state that must persist.
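Per-isolate globals can still be useful as a best-effort cache, as long as a miss is always handled. A minimal sketch under that assumption (the `getConfig` helper and its `load` callback are hypothetical names; in a real Worker, `load` might wrap a KV or D1 read):

```typescript
// Hypothetical sketch: treat a module-level Map as a best-effort,
// per-isolate cache. Correctness must never depend on a hit.
const isolateCache = new Map<string, string>();

async function getConfig(
  key: string,
  load: (key: string) => Promise<string>, // e.g. a KV read in a real Worker
): Promise<string> {
  const cached = isolateCache.get(key);
  if (cached !== undefined) return cached; // fast path: same isolate, still warm
  const value = await load(key); // slow path: fresh or evicted isolate
  isolateCache.set(key, value);
  return value;
}
```

On a warm isolate the second request skips the slow path entirely; after eviction, the Map starts empty and `load` runs again, so behavior stays correct either way.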
## Resource Limits
| Resource | Free Plan | Paid Plan ($5/mo) |
|---|---|---|
| CPU time per request | 10ms | 30s (configurable up to 5min) |
| Memory | 128MB | 128MB |
| Subrequests per request | 50 | 1,000 |
| Worker size | 3MB compressed | 10MB compressed |
| Request body | 100MB | 100MB |
| Environment variables | 64 per Worker | 64 per Worker |
| Cron triggers | 3 per Worker | 3 per Worker |
Gotcha: CPU time is not wall-clock time. I/O waits (database queries, fetch calls, KV reads) don't count against the CPU limit. A Worker that makes 5 database queries taking 200ms each uses almost no CPU time; only the JavaScript execution between those calls counts. Wall-clock execution can therefore run far longer than 30s, as long as the CPU actually spent stays under the limit.
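The distinction can be sketched with simulated queries. Whether the five 200ms waits happen one after another (roughly a second of wall clock) or in parallel (roughly 200ms), almost none of that time is CPU, because the isolate is idle while the promises are pending. The `fakeQuery` helper below is hypothetical, standing in for real `env.DB` calls:

```typescript
// Hypothetical sketch: five simulated 200ms "database queries".
// The isolate is idle while each promise is pending, so only the
// JavaScript between awaits counts against the CPU limit.
const fakeQuery = (id: number): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(id * 2), 200));

async function sequential(): Promise<number[]> {
  const out: number[] = [];
  for (let i = 1; i <= 5; i++) out.push(await fakeQuery(i)); // five 200ms waits
  return out;
}

async function parallel(): Promise<number[]> {
  return Promise.all([1, 2, 3, 4, 5].map((i) => fakeQuery(i))); // one 200ms wait
}
```

Parallelizing does not change CPU usage, but it does cut wall-clock latency for the caller, which is usually the limit users actually feel.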
To increase CPU time beyond 30s on paid plans:

```jsonc
// wrangler.jsonc
{
  "limits": {
    "cpu_ms": 300000 // 5 minutes
  }
}
```
## Environment Bindings

Bindings connect Workers to Cloudflare services and configuration. They appear as properties on the `env` parameter.
### Typing Bindings with Hono

Hono provides a clean pattern for typed bindings:

```ts
import { Hono } from "hono";

// Define your binding types
type Bindings = {
  DB: D1Database;
  BUCKET: R2Bucket;
  CACHE: KVNamespace;
  WEBHOOK_SECRET: string; // Environment variable (secret)
  ENVIRONMENT: string;    // Environment variable (plain)
};

// Pass to Hono's generic parameter
const app = new Hono<{ Bindings: Bindings }>();

// Bindings are now typed on c.env
app.get("/users", async (c) => {
  // c.env.DB is typed as D1Database
  const users = await c.env.DB.prepare("SELECT * FROM users LIMIT 10").all();
  return c.json(users.results);
});

app.post("/upload/:key", async (c) => {
  const key = c.req.param("key");
  const body = await c.req.arrayBuffer();
  // c.env.BUCKET is typed as R2Bucket
  await c.env.BUCKET.put(key, body);
  return c.json({ key, size: body.byteLength }, 201);
});

export default app;
```
### Binding Types Reference
| Binding Type | Config Key | TypeScript Type | What It Does |
|---|---|---|---|
| D1 Database | d1_databases | D1Database | SQL queries against SQLite |
| R2 Bucket | r2_buckets | R2Bucket | Object storage (S3-compatible) |
| KV Namespace | kv_namespaces | KVNamespace | Global key-value store |
| Queue Producer | queues.producers | Queue | Send messages to a queue |
| Durable Object | durable_objects.bindings | DurableObjectNamespace | Stateful singleton actors |
| Service Binding | services | Fetcher | Call another Worker directly |
| AI | ai | Ai | Workers AI inference |
| Secrets | wrangler secret put | string | Encrypted environment variables |
| Variables | vars | string | Plain environment variables |
### Declaring Bindings in wrangler.jsonc

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  "compatibility_flags": ["nodejs_compat"],

  // Database
  "d1_databases": [
    { "binding": "DB", "database_name": "webhooks", "database_id": "abc-123" }
  ],

  // Object storage
  "r2_buckets": [
    { "binding": "BUCKET", "bucket_name": "payloads" }
  ],

  // Key-value
  "kv_namespaces": [
    { "binding": "CACHE", "id": "def-456" }
  ],

  // Queue
  "queues": {
    "producers": [
      { "binding": "DELIVERY_QUEUE", "queue": "webhook-delivery" }
    ]
  },

  // Scheduled triggers
  "triggers": {
    "crons": ["*/5 * * * *"]
  },

  // Plain variables
  "vars": {
    "ENVIRONMENT": "production"
  }
}
```
## `waitUntil()` for Background Work

`ctx.waitUntil()` lets you run work after the response is sent. The runtime keeps the isolate alive until the promise resolves.

```ts
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    // Respond immediately
    const response = new Response("accepted", { status: 202 });

    // Log analytics in the background (doesn't delay the response)
    ctx.waitUntil(
      env.DB.prepare("INSERT INTO logs (path, ts) VALUES (?, ?)")
        .bind(new URL(request.url).pathname, Date.now())
        .run()
    );

    return response;
  },
};
```
Use cases: logging, analytics, cache warming, sending notifications. The work must complete within the Worker’s CPU time limit.
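One caveat worth handling: if the promise passed to `waitUntil()` rejects, it cannot affect the already-sent response, so the failure is easy to lose entirely. A small defensive sketch (the `logInBackground` helper is a hypothetical name; the `waitUntil` shape matches `ExecutionContext`):

```typescript
// Hypothetical sketch: a rejection inside waitUntil() does not change the
// already-sent response, so attach a catch to keep failures visible.
function logInBackground(
  ctx: { waitUntil(p: Promise<unknown>): void },
  work: Promise<unknown>,
): void {
  ctx.waitUntil(
    work.catch((err) => {
      // In a real Worker this might go to console.error, an analytics
      // binding, or a dead-letter table instead.
      console.error("background task failed:", err);
    }),
  );
}
```

Attaching the `catch` before handing the promise to `waitUntil()` also avoids an unhandled-rejection warning in the runtime logs.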
## Multiple Handlers

A single Worker can handle HTTP requests, cron triggers, and queue messages:

```ts
import { Hono } from "hono";

type Bindings = {
  DB: D1Database;
  DELIVERY_QUEUE: Queue;
};

const app = new Hono<{ Bindings: Bindings }>();

app.post("/webhook/:source", async (c) => {
  const source = c.req.param("source");
  const payload = await c.req.json();
  await c.env.DELIVERY_QUEUE.send({ source, payload });
  return c.json({ queued: true }, 202);
});

export default {
  fetch: app.fetch,

  async scheduled(event: ScheduledEvent, env: Bindings, ctx: ExecutionContext) {
    // Runs on the cron schedule
    ctx.waitUntil(cleanupOldRecords(env.DB));
  },

  async queue(batch: MessageBatch, env: Bindings) {
    for (const msg of batch.messages) {
      await deliverWebhook(msg.body, env.DB);
      msg.ack();
    }
  },
};
```
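Note that if `deliverWebhook` throws in the loop above, the handler fails and every message not yet acked is redelivered. A minimal sketch of per-message error handling, assuming messages expose `ack()` and `retry()` as Workers queue messages do (`QueueMsg`, `consumeBatch`, and `deliver` are hypothetical names):

```typescript
// Hypothetical sketch: ack messages that succeed, retry ones that fail,
// so one bad message does not force redelivery of the whole batch.
type QueueMsg<T> = { body: T; ack(): void; retry(): void };

async function consumeBatch<T>(
  messages: QueueMsg<T>[],
  deliver: (body: T) => Promise<void>, // e.g. a webhook delivery function
): Promise<void> {
  for (const msg of messages) {
    try {
      await deliver(msg.body);
      msg.ack(); // done; will not be redelivered
    } catch {
      msg.retry(); // redeliver later, per the queue's retry settings
    }
  }
}
```

Inside the `queue` handler this would replace the bare loop, keeping successful deliveries acked even when a neighboring message in the batch fails.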