# Browser Rendering
Cloudflare Browser Rendering runs headless Chrome instances on Workers, accessible via the `@cloudflare/puppeteer` library. You can take screenshots, generate PDFs, scrape web pages, and extract structured data without managing browser infrastructure.
*Prerequisites: First Worker, R2 Files*
## How It Works
Browser Rendering provides a `BROWSER` binding in your Worker. When you call `puppeteer.launch()`, Cloudflare spins up a headless Chrome instance in the same region as your Worker, and you control it with the Puppeteer API.
```ts
import puppeteer from "@cloudflare/puppeteer";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const browser = await puppeteer.launch(env.BROWSER);
    const page = await browser.newPage();

    await page.goto("https://example.com");
    const screenshot = await page.screenshot();

    await browser.close();

    return new Response(screenshot, {
      headers: { "Content-Type": "image/png" },
    });
  },
};
```
## Setup
### 1. Add the browser binding

```jsonc
// wrangler.jsonc
{
  "browser": {
    "binding": "BROWSER"
  }
}
```
### 2. Install the Puppeteer package

```sh
npm install @cloudflare/puppeteer
```
### 3. Regenerate types

```sh
npx wrangler types
```
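After regenerating, the `Env` interface includes the binding. A sketch of roughly what `wrangler types` emits for the bindings used in this chapter (your output depends on your config):

```ts
// Illustrative only; generated by `npx wrangler types`, names may differ.
interface Env {
  BROWSER: Fetcher;   // the browser binding from wrangler.jsonc
  BUCKET: R2Bucket;   // R2 bucket used in later examples
}
```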
## Limits
| Limit | Value |
|---|---|
| Concurrent browser sessions | 2 per Worker (free), 2+ (paid) |
| Max execution time | 60 seconds |
| Max pages per session | No hard limit, but memory-bound |
| Browser keep-alive | Reuses sessions within ~60 seconds |
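The keep-alive row matters for concurrency limits: instead of `browser.close()`, you can `browser.disconnect()` and reattach to the same warm session on a later request. A sketch using the library's session APIs (`puppeteer.sessions` and `puppeteer.connect`); treat the reuse logic as an assumption to adapt, not a drop-in helper:

```ts
import puppeteer from "@cloudflare/puppeteer";

// Reuse an idle session if one exists; otherwise launch a new browser.
async function getBrowser(endpoint: Fetcher) {
  const sessions = await puppeteer.sessions(endpoint);
  // Sessions without a connectionId have no Worker attached and can be reused.
  const free = sessions.find((s) => !s.connectionId);
  if (free) {
    return puppeteer.connect(endpoint, free.sessionId);
  }
  return puppeteer.launch(endpoint);
}
```

When you finish with a reused browser, call `browser.disconnect()` rather than `browser.close()` so the session stays warm within the keep-alive window.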
## Screenshot Service
A practical API that screenshots any URL and returns the image:
```ts
import { Hono } from "hono";
import puppeteer from "@cloudflare/puppeteer";

const app = new Hono<{ Bindings: Env }>();

app.get("/screenshot", async (c) => {
  const url = c.req.query("url");
  if (!url) return c.json({ error: "url parameter required" }, 400);

  // Validate URL
  try {
    new URL(url);
  } catch {
    return c.json({ error: "invalid url" }, 400);
  }

  const width = parseInt(c.req.query("width") ?? "1280");
  const height = parseInt(c.req.query("height") ?? "720");

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.setViewport({ width, height });
  await page.goto(url, { waitUntil: "networkidle0" });

  const screenshot = await page.screenshot({ type: "png", fullPage: false });
  await browser.close();

  return new Response(screenshot, {
    headers: {
      "Content-Type": "image/png",
      "Cache-Control": "public, max-age=3600",
    },
  });
});

export default app;
```
Usage:

```sh
curl "https://your-worker.workers.dev/screenshot?url=https://example.com&width=1920&height=1080" \
  -o screenshot.png
```
## Screenshot and Store in R2
Combine Browser Rendering with R2 for a persistent screenshot archive:
```ts
app.post("/capture", async (c) => {
  const { url, key } = await c.req.json<{ url: string; key: string }>();

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 720 });
  await page.goto(url, { waitUntil: "networkidle0" });

  const screenshot = await page.screenshot({ type: "png" });
  await browser.close();

  // Store in R2
  const r2Key = `screenshots/${key}/${Date.now()}.png`;
  await c.env.BUCKET.put(r2Key, screenshot, {
    httpMetadata: { contentType: "image/png" },
    customMetadata: { sourceUrl: url, capturedAt: new Date().toISOString() },
  });

  return c.json({
    stored: true,
    key: r2Key,
    size: screenshot.byteLength,
  });
});
```
This gives you a screenshot API that archives results. Useful for monitoring dashboards, compliance snapshots, or visual regression testing.
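For the monitoring use case, the same capture logic pairs naturally with a Cron Trigger. A minimal sketch, assuming a hypothetical `MONITORED_URLS` list and the same `BROWSER` and `BUCKET` bindings (the schedule itself is defined under `triggers` in your Wrangler config):

```ts
import puppeteer from "@cloudflare/puppeteer";

// Hypothetical list of pages to snapshot on a schedule.
const MONITORED_URLS = ["https://example.com", "https://example.com/pricing"];

export default {
  async scheduled(event: ScheduledController, env: Env): Promise<void> {
    const browser = await puppeteer.launch(env.BROWSER);
    try {
      const page = await browser.newPage();
      await page.setViewport({ width: 1280, height: 720 });

      for (const url of MONITORED_URLS) {
        await page.goto(url, { waitUntil: "networkidle0" });
        const screenshot = await page.screenshot({ type: "png" });

        const r2Key = `screenshots/scheduled/${Date.now()}.png`;
        await env.BUCKET.put(r2Key, screenshot, {
          httpMetadata: { contentType: "image/png" },
          customMetadata: { sourceUrl: url },
        });
      }
    } finally {
      await browser.close();
    }
  },
};
```

Reusing one page across URLs keeps the run inside a single browser session instead of launching one per URL.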
## PDF Generation
Generate PDFs from HTML content or URLs:
```ts
app.get("/pdf", async (c) => {
  const url = c.req.query("url");
  if (!url) return c.json({ error: "url parameter required" }, 400);

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });

  const pdf = await page.pdf({
    format: "A4",
    printBackground: true,
    margin: { top: "1cm", bottom: "1cm", left: "1cm", right: "1cm" },
  });
  await browser.close();

  return new Response(pdf, {
    headers: {
      "Content-Type": "application/pdf",
      "Content-Disposition": 'attachment; filename="page.pdf"',
    },
  });
});
```
### PDF from HTML Template
Render dynamic HTML (invoices, reports) without a live URL:
```ts
app.post("/render-pdf", async (c) => {
  const { html } = await c.req.json<{ html: string }>();

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.setContent(html, { waitUntil: "networkidle0" });

  const pdf = await page.pdf({ format: "A4", printBackground: true });
  await browser.close();

  return new Response(pdf, {
    headers: { "Content-Type": "application/pdf" },
  });
});
```
## Web Scraping
Extract data from rendered pages (handles JavaScript-rendered content that `fetch()` alone can't see):
```ts
app.get("/scrape", async (c) => {
  const url = c.req.query("url");
  if (!url) return c.json({ error: "url parameter required" }, 400);

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();

  // Block unnecessary resources for faster loading
  await page.setRequestInterception(true);
  page.on("request", (req) => {
    const blocked = ["image", "stylesheet", "font", "media"];
    if (blocked.includes(req.resourceType())) {
      req.abort();
    } else {
      req.continue();
    }
  });

  await page.goto(url, { waitUntil: "domcontentloaded" });

  // Extract structured data
  const data = await page.evaluate(() => {
    return {
      title: document.title,
      description:
        document
          .querySelector('meta[name="description"]')
          ?.getAttribute("content") ?? null,
      headings: Array.from(document.querySelectorAll("h1, h2")).map((h) => ({
        level: h.tagName,
        text: h.textContent?.trim(),
      })),
      links: Array.from(document.querySelectorAll("a[href]"))
        .map((a) => ({
          text: a.textContent?.trim(),
          href: (a as HTMLAnchorElement).href,
        }))
        .slice(0, 50),
    };
  });

  await browser.close();
  return c.json(data);
});
```
## Pre-rendering for SEO
Use Browser Rendering to pre-render SPA pages for search engine crawlers:
```ts
app.get("*", async (c) => {
  const userAgent = c.req.header("User-Agent") ?? "";
  const isCrawler = /bot|crawl|spider|slurp|bing|yandex/i.test(userAgent);

  if (!isCrawler) {
    // Serve the SPA normally
    return c.env.ASSETS.fetch(c.req.raw);
  }

  // Pre-render for crawlers
  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.goto(c.req.url, { waitUntil: "networkidle0" });
  const html = await page.content();
  await browser.close();

  return new Response(html, {
    headers: { "Content-Type": "text/html" },
  });
});
```
**Gotcha:** Browser sessions are expensive compared to normal Worker requests. Each session spins up a Chrome instance. Use caching aggressively: cache screenshots in R2 or KV, cache pre-rendered HTML, and set appropriate `Cache-Control` headers. Don't render the same page on every request.
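One way to cache pre-rendered HTML is the Workers Cache API (`caches.default` is the edge cache). A sketch; the `/prerender` route and the `url` query parameter are illustrative, not part of the earlier handlers:

```ts
app.get("/prerender", async (c) => {
  const cache = caches.default;
  const cacheKey = new Request(c.req.url);

  // Serve a previously rendered copy if the edge cache has one.
  const cached = await cache.match(cacheKey);
  if (cached) return cached;

  const browser = await puppeteer.launch(c.env.BROWSER);
  const page = await browser.newPage();
  await page.goto(c.req.query("url") ?? "https://example.com", {
    waitUntil: "networkidle0",
  });
  const html = await page.content();
  await browser.close();

  const response = new Response(html, {
    headers: {
      "Content-Type": "text/html",
      // max-age controls how long the edge cache keeps the rendered copy.
      "Cache-Control": "public, max-age=3600",
    },
  });

  // Cache without blocking the response.
  c.executionCtx.waitUntil(cache.put(cacheKey, response.clone()));
  return response;
});
```

On a cache hit the request never touches a browser session, which is where the savings come from.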
## Cost Considerations
Browser Rendering is included in the Workers Paid plan ($5/mo) with limits on concurrent sessions. The main cost factor is execution time: each browser session consumes Worker CPU time. Optimize by:
- Setting `waitUntil: "domcontentloaded"` instead of `"networkidle0"` when you don't need JS to finish
- Blocking unnecessary resource types (images, fonts) when scraping
- Caching results in R2 or KV
- Closing browsers promptly with `browser.close()`
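The handlers above only call `browser.close()` on the happy path; if `page.goto` throws, the session lingers until it times out. A small wrapper with `try`/`finally` guarantees cleanup (a sketch; `withBrowser` is not part of the library):

```ts
// Runs `fn` with a freshly launched browser and closes it even if `fn` throws.
// `launch` is any function producing an object with an async close() method,
// e.g. () => puppeteer.launch(env.BROWSER).
async function withBrowser<B extends { close(): Promise<void> }, T>(
  launch: () => Promise<B>,
  fn: (browser: B) => Promise<T>
): Promise<T> {
  const browser = await launch();
  try {
    return await fn(browser);
  } finally {
    await browser.close();
  }
}
```

In a handler this looks like `return withBrowser(() => puppeteer.launch(c.env.BROWSER), async (browser) => { /* render */ })`, so every exit path releases the session.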