Gotchas
Aggregated sharp edges across the Cloudflare developer platform. These are the things that work fine in tutorials but cause problems in real deployments.
Runtime & Compute
CPU Time vs Wall Clock Time
Workers have a CPU time limit (10ms free, 30s paid), not a wall clock limit. Waiting on fetch(), D1, KV, or R2 does not count against CPU time. But CPU-intensive work (JSON parsing, crypto, loops) does.
```ts
// This is fine - fetch wait time doesn't count
const response = await fetch("https://api.example.com/data"); // 500ms wall clock, ~0ms CPU

// This will hit the limit fast
for (let i = 0; i < 10_000_000; i++) {
  // Pure computation burns CPU time
}
```
Gotcha: The 10ms free-tier CPU limit is per invocation, not per second. Even simple JSON parsing of a large payload can exceed it.
128 MB Memory Limit
Each Worker isolate gets 128 MB of memory. There is no way to increase this. Large JSON payloads, image processing, or accumulating data in memory will hit this wall.
Mitigations:
- Stream large responses instead of buffering them
- Use R2 for file processing (upload, then process in chunks)
- Offload heavy computation to Containers (when GA)
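The streaming mitigation can be sketched as follows. `countBytes` is a hypothetical helper; the point is that it touches one chunk at a time using the same `ReadableStream` API Workers expose, so memory use stays roughly constant regardless of payload size:

```typescript
// Process a body chunk by chunk instead of buffering it all in memory.
// Only one chunk is resident at a time, so memory stays roughly constant.
async function countBytes(stream: ReadableStream<Uint8Array>): Promise<number> {
  const reader = stream.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return total;
    total += value.byteLength;
  }
}

// In a Worker, prefer passing bodies through untouched:
//   const upstream = await fetch(url);
//   return new Response(upstream.body, upstream); // streams, never buffers
```

The same principle applies to R2: pipe `object.body` into the response rather than reading it into an ArrayBuffer.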
Worker Size Limit
- Free tier: 3 MB compressed
- Paid tier: 10 MB compressed
This is after gzip compression of your bundled Worker code plus dependencies. Large npm packages (e.g., puppeteer-core, heavy ORMs) may not fit.
Gotcha: Some older documentation says 25 MB. The actual limit is 10 MB on paid, 3 MB on free.
Subrequest Limits
Each Worker invocation can make a limited number of outbound fetch() calls:
- Free tier: 50 subrequests
- Paid tier: 10,000 subrequests (not 1,000 as some older docs say)
Raw TCP Socket Limits
Workers cannot accept inbound TCP connections, and outbound raw TCP is only available through the connect() API from cloudflare:sockets: there is no UDP, and outbound port 25 is blocked, so no direct SMTP. Drivers that expect Node's net module need the nodejs_compat flag, and connections are not pooled across invocations. fetch() (HTTP/HTTPS), WebSockets, and bindings (D1, R2, etc.) remain the primary paths.
Workarounds:
- Use Hyperdrive for Postgres connection pooling
- Use service bindings for inter-Worker communication
- Use Tunnels to reach TCP services on your infrastructure
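For the Hyperdrive route, the binding is declared in config; the ID below is a placeholder:

```jsonc
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-config-id>"
    }
  ]
}
```

Worker code then reads `env.HYPERDRIVE.connectionString` and hands it to a Postgres driver; the Worker never manages the TCP connection itself.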
Storage
KV: Eventual Consistency
KV writes propagate globally but not instantly. After a write, reads from a different region may return stale data for up to 60 seconds.
```ts
await env.KV.put("counter", "42");
const value = await env.KV.get("counter");
// value might still be "41" if read from a different edge location
```
When this matters:
- Rate limiting (use Durable Objects instead for strict limits)
- Session data (user writes in one region, reads in another)
- Feature flags (brief staleness is usually acceptable)
Gotcha: KV is eventually consistent globally but strongly consistent within the same location. If your reads and writes happen at the same edge, you will see consistent results, which makes this bug hard to reproduce in development.
D1: Rows Scanned, Not Rows Returned
D1 billing counts rows scanned during query execution, not rows returned. An unindexed query on a 1M-row table costs 1M rows read even if it returns 1 row.
See the Pricing page for detailed mitigation strategies (indexes, EXPLAIN QUERY PLAN, cursor pagination).
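A minimal illustration, assuming a hypothetical items table with an owner column:

```sql
-- Before: no index on owner, so this scans every row (1M rows billed)
EXPLAIN QUERY PLAN SELECT * FROM items WHERE owner = ?1;

-- Add an index matching the predicate
CREATE INDEX idx_items_owner ON items(owner);

-- After: the plan reports a SEARCH using idx_items_owner instead of a SCAN,
-- and D1 bills only the rows the index actually touches
EXPLAIN QUERY PLAN SELECT * FROM items WHERE owner = ?1;
```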
Durable Objects: Single-Threaded
Each Durable Object instance runs on a single thread. Requests can interleave at await points, but CPU-bound work blocks everything, and input gates hold back new events while storage operations are in flight. This is by design (single-writer consistency), but it means a slow handler delays every request queued behind it.
```ts
// This delays ALL other requests to this DO for 5 seconds
async fetch(request: Request): Promise<Response> {
  await someSlowOperation(); // 5 seconds
  return new Response("done");
}
```
Mitigations:
- Keep DO handlers fast (offload heavy work to Queues)
- Use the WebSocket hibernation API to release the isolate between messages
- Shard work across multiple DO instances if parallelism is needed
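Sharding can be as simple as deriving a stable instance name from a key. `pickShard` below is a hypothetical helper (the hash function and shard count are arbitrary choices, not a Cloudflare API):

```typescript
// Map a key deterministically onto one of `shards` Durable Object names.
// In a Worker you would then route with env.MY_DO.idFromName(pickShard(key, 8)).
function pickShard(key: string, shards: number): string {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) >>> 0; // simple, stable string hash
  }
  return `shard-${h % shards}`;
}
```

The tradeoff: sharding buys parallelism but gives up the single-writer view, so it only fits workloads you can partition cleanly by key.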
Durable Objects: SQLite API Differs from D1
DO SQLite uses synchronous this.state.storage.sql.exec(), not the D1-style prepare().bind().run() async pattern. The storage is local to the instance, so there is no network hop.
```ts
// D1 (async, network call)
const result = await env.DB.prepare("SELECT * FROM items WHERE id = ?")
  .bind(id)
  .first();

// DO SQLite (synchronous, local)
const result = this.state.storage.sql.exec(
  "SELECT * FROM items WHERE id = ?", id
).one();
```
Configuration
wrangler.toml vs wrangler.jsonc
Cloudflare is migrating from TOML to JSON with comments. Prefer wrangler.jsonc for new projects. Both work, but jsonc is the direction tooling is heading.
Gotcha: TOML and JSONC use different syntax for the same config. Cron triggers in TOML go in a `[triggers]` table; in JSONC they are `"triggers": { "crons": [...] }`. Do not mix syntaxes.
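The same cron trigger in both syntaxes, for comparison:

```toml
# wrangler.toml
[triggers]
crons = ["*/5 * * * *"]
```

```jsonc
// wrangler.jsonc
{
  "triggers": {
    "crons": ["*/5 * * * *"]
  }
}
```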
Compatibility Flags and Dates
Workers use a compatibility_date to opt into runtime behavior changes. An outdated date may silently use legacy behavior. An aggressive date may break existing code.
```jsonc
{
  "compatibility_date": "2024-09-23",
  "compatibility_flags": ["nodejs_compat"]
}
```
- `nodejs_compat` enables Node.js API polyfills (required for many npm packages)
- Check the compatibility flags list before updating dates
Gotcha: Some compatibility flags are tied to specific dates. Updating `compatibility_date` alone can change behavior even without adding new flags.
Testing
Vitest Version Requirement
The @cloudflare/vitest-pool-workers package requires Vitest 4.1+. The configuration uses the cloudflareTest plugin pattern, not the older pool-based config.
```ts
// vite.config.ts (correct - Vitest 4.1+)
import { defineConfig } from "vitest/config";
import { cloudflareTest } from "@cloudflare/vitest-pool-workers/config";

export default defineConfig({
  plugins: [cloudflareTest()],
});
```
Cron Trigger Local Testing
Use the special __scheduled endpoint to trigger cron handlers locally (run wrangler dev with the --test-scheduled flag to expose it). The cron expression must be URL-encoded.
```sh
# Trigger the cron handler locally
curl "http://localhost:8787/__scheduled?cron=*/5+*+*+*+*"
```
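The endpoint invokes your scheduled() handler. A minimal sketch (the Env interface is left empty for illustration):

```typescript
// Minimal scheduled handler; wrangler dev's /__scheduled endpoint calls this.
interface Env {}

const worker = {
  async scheduled(
    event: { cron: string; scheduledTime: number },
    env: Env,
  ): Promise<void> {
    // event.cron carries the matching expression, useful when one Worker
    // serves several cron schedules.
    if (event.cron === "*/5 * * * *") {
      // e.g. sweep expired rows every five minutes
    }
  },
};
// In a real Worker: export default worker;
```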
Hyperdrive Local Dev
wrangler dev connects directly to the database, bypassing Hyperdrive’s connection pool. Cache hit rates and latency will differ from production. You cannot test connection pooling behavior locally.
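If you want wrangler dev to hit a local database rather than your production one, point the binding at it (the connection string below is a placeholder):

```jsonc
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-config-id>",
      "localConnectionString": "postgres://user:pass@localhost:5432/mydb"
    }
  ]
}
```

This only swaps which database local dev talks to; pooling and caching behavior are still bypassed.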
Deployment
Queue Consumer Retries
If a queue consumer throws an unhandled error, the entire batch retries. Use per-message msg.ack() and msg.retry() for fine-grained control.
```ts
async queue(batch: MessageBatch, env: Env): Promise<void> {
  for (const msg of batch.messages) {
    try {
      await processMessage(msg.body);
      msg.ack();
    } catch (err) {
      msg.retry(); // Only this message retries
    }
  }
}
```
DO Migrations Required for New Classes
When adding a new Durable Object class backed by SQLite, you need both the binding and a migration entry; SQLite storage is enabled by listing the class under new_sqlite_classes. Without the migration entry, the deploy is rejected.

```jsonc
{
  "durable_objects": {
    "bindings": [
      { "name": "COUNTER", "class_name": "Counter" }
    ]
  },
  "migrations": [
    { "tag": "v1", "new_sqlite_classes": ["Counter"] }
  ]
}
```
Workflow Step Idempotency
Workflow steps re-execute the entire step function on retry, not just the failed line. Every step must be idempotent.
```ts
// Bad - sends duplicate email on retry
export class MyWorkflow extends WorkflowEntrypoint {
  async run(event: WorkflowEvent, step: WorkflowStep) {
    await step.do("notify", async () => {
      await sendEmail(event.payload.email); // May run twice
      await updateDatabase(event.payload.id);
    });
  }
}

// Better - use idempotency keys
export class MyWorkflow extends WorkflowEntrypoint {
  async run(event: WorkflowEvent, step: WorkflowStep) {
    await step.do("notify", async () => {
      await sendEmail(event.payload.email, {
        idempotencyKey: event.id,
      });
      await updateDatabase(event.payload.id);
    });
  }
}
```
WebSocket Hibernation API
Use this.state.acceptWebSocket(ws) with class methods, not ws.addEventListener(). The hibernation API requires the class method pattern to properly suspend the isolate between messages.
```ts
// Wrong - prevents hibernation
ws.addEventListener("message", (event) => { /* ... */ });

// Correct - enables hibernation
webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
  // Handle message
}
```
Networking
Static Assets Routing
When using Workers with static assets, API routes need explicit routing config. Without run_worker_first, the static asset handler intercepts everything.
```jsonc
{
  "assets": {
    "directory": "dist",
    "run_worker_first": ["/api/*"]
  }
}
```
Vite Plugin Reads wrangler.jsonc
The @cloudflare/vite-plugin reads bindings from wrangler.jsonc automatically. Do not duplicate config in vite.config.ts.
```ts
// vite.config.ts - this is all you need
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [cloudflare()],
});
```