Storage Landscape

Cloudflare offers four storage primitives, each optimized for different access patterns. Picking the right one is the most consequential architecture decision on the platform.

Comparison Table

| | D1 | R2 | KV | Durable Objects |
|---|---|---|---|---|
| Data model | Relational (SQLite) | Object/blob | Key-value | Single-actor state (SQLite or key-value) |
| Consistency | Strong (single leader) | Strong | Eventually consistent | Strong (single instance) |
| Read latency | ~5-30ms | ~10-50ms | ~10ms (cached at edge) | ~5ms (colocated) |
| Write latency | ~30ms | ~50ms | ~60s propagation | ~1ms (local) |
| Max value size | Row limits (SQLite) | 5TB per object | 25MB per value | Unlimited (SQLite) |
| Pricing unit | Rows read/written | Storage + operations | Reads/writes | Requests + duration |
| Best for | Structured data, queries, joins | Files, media, backups | Config, cache, flags | Real-time state, WebSockets, coordination |

When to Use What

D1 - Your Default Database

Use D1 when you need to query, filter, join, or aggregate structured data. It’s SQLite on Cloudflare’s network, so you get full SQL with zero connection management.

```ts
// D1: structured queries
const users = await env.DB.prepare(
  "SELECT * FROM users WHERE created_at > ? ORDER BY name LIMIT 20"
).bind(weekAgo).all();
```

Good for: user records, webhook logs, configuration tables, anything you’d put in Postgres.

Gotcha: D1 bills by rows read, not rows returned. A SELECT * that scans 100,000 rows to return 10 is billed for 100,000 row reads. Use indexes so queries seek instead of scanning.
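
A sketch of the fix, assuming a hypothetical `idx_users_created_at` index; D1 reports billed rows in the result metadata, so you can verify an index is actually being used:

```ts
// Hypothetical index so the query above seeks on created_at instead of
// scanning the whole users table.
await env.DB.exec(
  "CREATE INDEX IF NOT EXISTS idx_users_created_at ON users (created_at)"
);

const result = await env.DB.prepare(
  "SELECT * FROM users WHERE created_at > ? ORDER BY name LIMIT 20"
).bind(weekAgo).all();

// meta.rows_read is what you pay for; with the index in place it should
// be close to the rows returned, not the table size.
console.log(result.meta.rows_read);
```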

R2 - Object Storage

Use R2 for files, media, large payloads - anything binary or over a few KB. S3-compatible API, zero egress fees.

```ts
// R2: store and retrieve objects
await env.BUCKET.put("webhooks/2025/01/payload.json", JSON.stringify(data));
const obj = await env.BUCKET.get("webhooks/2025/01/payload.json");
const content = obj ? await obj.text() : null; // get() returns null for a missing key
```

Good for: uploaded files, large webhook payloads, exports, backups, static assets you manage programmatically.
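
Date-partitioned keys like the one above pay off later, because R2 can list objects by prefix. A small helper for building such keys (the name and layout are illustrative, not from the source project):

```typescript
// Build a date-partitioned R2 key such as "stripe/2025/01/1736942400000.json".
// R2 keys are flat strings; the "/" separators only matter for prefix listing.
function webhookKey(source: string, ts: Date): string {
  const year = ts.getUTCFullYear();
  const month = String(ts.getUTCMonth() + 1).padStart(2, "0");
  return `${source}/${year}/${month}/${ts.getTime()}.json`;
}

// A later cleanup or export job can then fetch one month at a time, e.g.:
// const listing = await env.BUCKET.list({ prefix: "stripe/2025/01/" });
```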

KV - Edge Cache

Use KV for data that’s read frequently, written rarely, and where staleness is acceptable. KV replicates globally and caches at every edge location, giving fast reads everywhere.

```ts
// KV: fast reads, eventual consistency on writes
const config = await env.CACHE.get("feature-flags", "json");
await env.CACHE.put("feature-flags", JSON.stringify(flags), {
  expirationTtl: 3600, // expire in 1 hour
});
```

Good for: feature flags, rate limit counters (approximate), cached API responses, configuration.

Gotcha: KV is eventually consistent. After a write, it can take up to 60 seconds for the new value to propagate to all edge locations. If you write and immediately read from a different location, you may get the old value. Never use KV as a primary database.
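
A related knob worth knowing, sketched in the same style as above: reads accept a `cacheTtl` option that controls how long an edge location may serve its locally cached copy before re-checking.

```ts
// cacheTtl: serve the locally cached value for up to 5 minutes, trading
// extra staleness for cheaper, faster repeat reads. The minimum is 60s.
const flags = await env.CACHE.get("feature-flags", {
  type: "json",
  cacheTtl: 300,
});
```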

Durable Objects - Stateful Actors

Use Durable Objects when you need strong consistency, real-time state, or coordination between clients. Each Durable Object is a single instance that handles requests sequentially - no race conditions.

```ts
// Durable Objects: strongly consistent, single-threaded
const id = env.COUNTER.idFromName("page-views");
const obj = env.COUNTER.get(id);
const response = await obj.fetch(request);
```

Good for: WebSocket servers, real-time dashboards, counters that must be exact, distributed locks, game state.
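
For completeness, a minimal sketch of the class behind a `COUNTER` binding like the one called above. The class name and `"count"` storage key are assumptions, not from the source project; `state` stands in for the `DurableObjectState` the runtime injects.

```typescript
// A minimal Durable Object: one instance, requests handled sequentially.
export class Counter {
  constructor(
    private state: {
      storage: {
        get(key: string): Promise<number | undefined>;
        put(key: string, value: number): Promise<void>;
      };
    }
  ) {}

  // A given instance processes requests one at a time, so this
  // read-modify-write cannot interleave with another request.
  async fetch(_request: unknown): Promise<Response> {
    const count = ((await this.state.storage.get("count")) ?? 0) + 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```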

Decision Flowchart

```mermaid
flowchart TD
    A{"What are you storing?"} -->|Structured data| B{"Need SQL queries?"}
    A -->|Files or blobs| C["R2"]
    A -->|Simple key-value| D{"Consistency needs?"}
    A -->|Real-time state| E["Durable Objects"]

    B -->|Yes| F["D1"]
    B -->|No| D

    D -->|"Eventual OK\n(read-heavy)"| G["KV"]
    D -->|Strong required| H{"Single actor?"}

    H -->|Yes| E
    H -->|No| F
```

Cost Comparison

All pricing below assumes the $5/mo Workers Paid plan; each service includes the free allotment listed before paid rates apply:

| Service | Free Included | Paid Rate |
|---|---|---|
| D1 | 5M rows read, 100K rows written, 5GB storage | $0.001/M rows read, $1.00/M rows written |
| R2 | 10M reads, 1M writes, 10GB storage | $0.36/M reads, $4.50/M writes, $0.015/GB/mo |
| KV | 100K reads, 1K writes, 1GB storage | $0.50/M reads, $5.00/M writes |
| Durable Objects | 1M requests, 400K GB-s | $0.15/M requests, $12.50/M GB-s |
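
To make the units concrete, a back-of-envelope helper using the KV paid rates from the table (a sketch only; it ignores the free allotment and storage charges):

```typescript
// Rough monthly KV operation cost at the paid rates above.
// Rates are dollars per million operations.
function kvMonthlyCost(reads: number, writes: number): number {
  const READ_RATE = 0.5; // $0.50/M reads
  const WRITE_RATE = 5.0; // $5.00/M writes
  return (reads / 1e6) * READ_RATE + (writes / 1e6) * WRITE_RATE;
}

// 50M reads and 1M writes a month:
kvMonthlyCost(50_000_000, 1_000_000); // $25 in reads + $5 in writes = $30/mo
```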

Combining Storage

Real applications use multiple storage services together. The Webhook Hub project demonstrates this:

```ts
import { Hono } from "hono";

type Bindings = {
  DB: D1Database;       // Webhook metadata, delivery logs
  BUCKET: R2Bucket;     // Large payloads (>1KB body)
  CACHE: KVNamespace;   // Rate limits, cached configs
};

const app = new Hono<{ Bindings: Bindings }>();

app.post("/webhook/:source", async (c) => {
  const source = c.req.param("source");
  const body = await c.req.text();

  // Check rate limit from KV (fast, approximate)
  const count = parseInt(await c.env.CACHE.get(`rate:${source}`) ?? "0");
  if (count > 100) return c.json({ error: "rate limited" }, 429);

  // Store large payloads in R2, metadata in D1
  let payloadRef: string;
  if (body.length > 1024) {
    const key = `${source}/${Date.now()}.json`;
    await c.env.BUCKET.put(key, body);
    payloadRef = `r2://${key}`;
  } else {
    payloadRef = body;
  }

  await c.env.DB.prepare(
    "INSERT INTO webhooks (source, payload_ref, received_at) VALUES (?, ?, ?)"
  ).bind(source, payloadRef, new Date().toISOString()).run();

  // Increment rate limit counter
  await c.env.CACHE.put(`rate:${source}`, String(count + 1), {
    expirationTtl: 60,
  });

  return c.json({ received: true }, 202);
});

export default app;
```

Pattern: D1 for metadata and queries, R2 for large objects, KV for hot data and rate limiting. Durable Objects enter the picture when you need real-time coordination (e.g., a WebSocket dashboard showing live delivery status).