R2 Files
Create an R2 bucket, bind it to your Worker, and build upload/download routes. The Webhook Hub uses R2 to offload large payloads (>1MB) from D1, storing just an R2 key reference in the database row.
Prerequisites: Storage Landscape, First Worker
Create the Bucket
npx wrangler r2 bucket create webhook-payloads
Add the binding to wrangler.jsonc:
{
"name": "webhook-hub",
"main": "src/index.ts",
"compatibility_date": "2025-01-01",
"compatibility_flags": ["nodejs_compat"],
"d1_databases": [
{
"binding": "DB",
"database_name": "webhook-hub-db",
"database_id": "<your-db-id>"
}
],
"r2_buckets": [
{
"binding": "BUCKET",
"bucket_name": "webhook-payloads"
}
]
}
Re-generate types:
npx wrangler types
Now c.env.BUCKET is typed as R2Bucket in your Hono app.
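After regenerating, wrangler writes a worker-configuration.d.ts whose Env interface mirrors your bindings. The real file contains more detail, but the shape is roughly this sketch:

```typescript
// Sketch of what `wrangler types` generates: one property per binding,
// typed with the runtime interface for that resource.
interface Env {
  DB: D1Database;
  BUCKET: R2Bucket;
}
```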
Upload and Download
Basic put/get operations:
import { Hono } from "hono";
const app = new Hono<{ Bindings: Env }>();
// Upload an object
app.put("/files/:key", async (c) => {
const key = c.req.param("key");
const body = await c.req.arrayBuffer();
await c.env.BUCKET.put(key, body, {
httpMetadata: {
contentType: c.req.header("Content-Type") ?? "application/octet-stream",
},
});
return c.json({ key, size: body.byteLength }, 201);
});
// Download an object
app.get("/files/:key", async (c) => {
const key = c.req.param("key");
const object = await c.env.BUCKET.get(key);
if (!object) {
return c.json({ error: "not found" }, 404);
}
const headers = new Headers();
object.writeHttpMetadata(headers);
headers.set("etag", object.httpEtag);
return new Response(object.body, { headers });
});
// Delete an object
app.delete("/files/:key", async (c) => {
const key = c.req.param("key");
await c.env.BUCKET.delete(key);
return c.json({ deleted: true });
});
// List objects with optional prefix
app.get("/files", async (c) => {
const prefix = c.req.query("prefix") ?? "";
const listed = await c.env.BUCKET.list({ prefix, limit: 100 });
return c.json({
objects: listed.objects.map((obj) => ({
key: obj.key,
size: obj.size,
uploaded: obj.uploaded,
})),
truncated: listed.truncated,
});
});
export default app;
Key points:
- put(key, body, options) - body can be an ArrayBuffer, ReadableStream, string, or Blob
- get(key) - returns an R2ObjectBody (with a .body stream) or null
- writeHttpMetadata(headers) - copies content-type, cache-control, etc. to response headers
- list({ prefix, limit }) - paginated listing; returns truncated: true if more results exist
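Because list() returns at most limit results per call, fetching everything under a prefix means looping on the cursor until truncated is false. A generic sketch (listAll is our helper, not part of the R2 API; Listed models just the fields we rely on from the binding's result):

```typescript
// Drain a paginated list() by following the cursor. `Listed` mirrors the
// objects/truncated/cursor fields of R2's list result.
type Listed<T> = { objects: T[]; truncated: boolean; cursor?: string };

async function listAll<T>(
  list: (opts: { prefix?: string; cursor?: string }) => Promise<Listed<T>>,
  prefix?: string,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await list({ prefix, cursor });
    all.push(...page.objects);
    // Only follow the cursor while R2 reports more results
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return all;
}
```

In a route you would pass the binding's method, e.g. `listAll((opts) => c.env.BUCKET.list(opts), "webhooks/")`. Watch memory on large buckets; process page-by-page when you don't need every object at once.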
Presigned URLs for Direct Browser Upload
For large file uploads, skip your Worker and let the browser upload directly to R2. Use the S3-compatible API with @aws-sdk/s3-request-presigner:
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Create a route that generates presigned URLs:
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
// Helper to create an S3 client pointed at R2
function createR2Client(env: Env) {
return new S3Client({
region: "auto",
endpoint: `https://${env.ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: env.R2_ACCESS_KEY_ID,
secretAccessKey: env.R2_SECRET_ACCESS_KEY,
},
});
}
// Generate a presigned upload URL
app.post("/presign/upload", async (c) => {
const { key, contentType } = await c.req.json<{
key: string;
contentType: string;
}>();
const client = createR2Client(c.env);
const command = new PutObjectCommand({
Bucket: "webhook-payloads",
Key: key,
ContentType: contentType,
});
const url = await getSignedUrl(client, command, { expiresIn: 3600 });
return c.json({ url, key, expiresIn: 3600 });
});
// Generate a presigned download URL
app.post("/presign/download", async (c) => {
const { key } = await c.req.json<{ key: string }>();
const client = createR2Client(c.env);
const command = new GetObjectCommand({
Bucket: "webhook-payloads",
Key: key,
});
const url = await getSignedUrl(client, command, { expiresIn: 3600 });
return c.json({ url, key, expiresIn: 3600 });
});
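As written, /presign/upload signs a PUT for whatever key the client sends, which lets any caller overwrite any object in the bucket. A minimal key validator you could run before signing (a sketch; the uploads/ prefix and length limit are our assumptions, not part of any API):

```typescript
// Reject keys that escape the uploads/ area or contain path tricks.
// Hypothetical policy: adjust the prefix and length cap to your app.
function isSafeUploadKey(key: string): boolean {
  if (key.length === 0 || key.length > 512) return false;
  if (!key.startsWith("uploads/")) return false;
  if (key.includes("..") || key.includes("//") || key.includes("\\")) return false;
  return true;
}
```

Call it at the top of both presign routes and return a 400 when it fails.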
Add the required secrets:
# Find your account ID in the Cloudflare dashboard URL
npx wrangler secret put ACCOUNT_ID
npx wrangler secret put R2_ACCESS_KEY_ID
npx wrangler secret put R2_SECRET_ACCESS_KEY
Create R2 API tokens in the Cloudflare dashboard under R2 > Manage R2 API Tokens.
The browser then uploads directly to R2 using the presigned URL:
// Client-side: upload directly to R2
const { url } = await fetch("/presign/upload", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ key: "uploads/photo.jpg", contentType: "image/jpeg" }),
}).then((r) => r.json());
await fetch(url, {
method: "PUT",
body: file, // File object from <input type="file">
headers: { "Content-Type": "image/jpeg" },
});
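One more prerequisite for the browser flow: the bucket needs a CORS policy, or the browser's preflight for the cross-origin PUT will fail. A minimal rule looks like this sketch (replace the origin with your app's; apply it in the dashboard under R2 > your bucket > Settings > CORS policy, or with `wrangler r2 bucket cors set` if your wrangler version supports it):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT", "GET"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```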
Multipart Upload for Large Files
For files over ~100MB, use a multipart upload to send the data in parts. R2 supports the S3 multipart upload protocol, and the Workers binding exposes the same capability directly, which is what this example uses:
// Multipart upload via the Workers R2 binding
app.post("/upload-large/:key", async (c) => {
const key = c.req.param("key");
const body = c.req.raw.body;
if (!body) {
return c.json({ error: "no body" }, 400);
}
// Create multipart upload
const upload = await c.env.BUCKET.createMultipartUpload(key, {
httpMetadata: {
contentType: c.req.header("Content-Type") ?? "application/octet-stream",
},
});
// Read the stream in 10MB chunks and upload each part
const PART_SIZE = 10 * 1024 * 1024; // 10MB parts (minimum allowed part size is 5MB)
const reader = body.getReader();
const parts: R2UploadedPart[] = [];
let buffer = new Uint8Array(0);
let partNumber = 1;
while (true) {
const { done, value } = await reader.read();
if (value) {
const newBuffer = new Uint8Array(buffer.length + value.length);
newBuffer.set(buffer);
newBuffer.set(value, buffer.length);
buffer = newBuffer;
}
// Flush full parts as they accumulate; on the final read, also flush the
// remainder (a single `if` here would drop data when the leftover buffer
// spans more than one part)
while (buffer.length >= PART_SIZE || (done && buffer.length > 0)) {
const chunk = buffer.slice(0, PART_SIZE);
buffer = buffer.slice(PART_SIZE);
const part = await upload.uploadPart(partNumber, chunk);
parts.push(part);
partNumber++;
}
if (done) break;
}
// Complete the upload
const result = await upload.complete(parts);
return c.json({ key, size: result.size, etag: result.etag }, 201);
});
Tip: The minimum part size is 5MB, and R2 requires every part in an upload to be the same size except the last one; 10MB parts are a good default. The maximum number of parts is 10,000, so 10MB parts cap a single object at ~100GB. Use larger parts for bigger files; R2's overall object size limit is roughly 5TB.
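The buffering loop above reduces to simple arithmetic: every part is partSize bytes except the last, which carries the remainder. Isolating that math in a pure helper (planParts is our name, not an R2 API) makes it easy to reason about:

```typescript
// Plan multipart part sizes: full parts of `partSize` bytes, plus a final
// smaller part for the remainder. Mirrors the streaming loop above.
const MB = 1024 * 1024;

function planParts(totalBytes: number, partSize = 10 * MB): number[] {
  if (totalBytes <= 0) return [];
  const sizes: number[] = [];
  for (let remaining = totalBytes; remaining > 0; remaining -= partSize) {
    sizes.push(Math.min(partSize, remaining));
  }
  return sizes;
}
```

For a 25MB body with 10MB parts this plans three parts: 10MB, 10MB, and a final 5MB, which is exactly what the route uploads.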
Lifecycle Policies
Auto-delete old objects to control storage costs. Add a lifecycle rule via wrangler:
npx wrangler r2 bucket lifecycle add webhook-payloads \
--expire-days 90 \
--prefix "webhooks/"
This deletes any object under webhooks/ that’s older than 90 days.
You can also set lifecycle rules in the Cloudflare dashboard under R2 > your bucket > Settings > Object lifecycle rules.
Webhook Hub: D1 + R2 Together
Store small payloads directly in D1, offload large ones to R2:
app.post("/webhook/:source", async (c) => {
const source = c.req.param("source");
const payload = await c.req.text();
const eventType = c.req.header("X-Event-Type") ?? "unknown";
let payloadRef: string;
if (payload.length > 1_000_000) {
// Large payload: store in R2, save the key in D1
const key = `webhooks/${source}/${Date.now()}.json`;
await c.env.BUCKET.put(key, payload, {
httpMetadata: { contentType: "application/json" },
});
payloadRef = `r2://${key}`;
} else {
// Small payload: store directly in D1
payloadRef = payload;
}
const result = await c.env.DB.prepare(
"INSERT INTO webhooks (source, event_type, payload) VALUES (?, ?, ?)"
)
.bind(source, eventType, payloadRef)
.run();
return c.json({ id: result.meta.last_row_id, source, stored_in: payload.length > 1_000_000 ? "r2" : "d1" }, 201);
});
// Retrieve a webhook, resolving R2 references
app.get("/webhooks/:id/payload", async (c) => {
const id = parseInt(c.req.param("id"));
const row = await c.env.DB.prepare(
"SELECT payload FROM webhooks WHERE id = ?"
)
.bind(id)
.first<{ payload: string }>();
if (!row) return c.json({ error: "not found" }, 404);
if (row.payload.startsWith("r2://")) {
const key = row.payload.slice(5); // Remove "r2://" prefix
const object = await c.env.BUCKET.get(key);
if (!object) return c.json({ error: "payload expired or missing" }, 404);
return new Response(object.body, {
headers: { "Content-Type": "application/json" },
});
}
return c.json(JSON.parse(row.payload));
});
Pattern: D1 holds the metadata and a payload column that’s either the raw data (small) or an r2:// reference (large). The retrieval route transparently resolves both.
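The convention is worth centralizing so the write and read paths can't drift apart. A small sketch of helpers capturing it (these names are ours, not part of any library):

```typescript
// Encode/decode the payload reference stored in the webhooks table:
// small payloads are stored verbatim, large ones as "r2://<key>".
const R2_PREFIX = "r2://";

function toR2Ref(key: string): string {
  return R2_PREFIX + key;
}

function isR2Ref(payload: string): boolean {
  return payload.startsWith(R2_PREFIX);
}

function fromR2Ref(ref: string): string {
  if (!isR2Ref(ref)) throw new Error("not an r2:// reference");
  return ref.slice(R2_PREFIX.length);
}
```

The webhook route would store toR2Ref(key), and the payload route would branch on isR2Ref(row.payload) and fetch fromR2Ref(row.payload) from the bucket.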
What’s Next
- KV Caching - add per-source rate limiting with KV
- Turnstile - protect the dashboard with bot verification
- Browser Rendering - generate screenshots and store them in R2