Hyperdrive
Hyperdrive solves the biggest performance problem with edge compute: connecting to traditional databases. Every Worker invocation that needs Postgres or MySQL pays the cost of a TCP+TLS handshake to a remote database. Hyperdrive eliminates this by maintaining persistent connection pools close to your database and caching query results at the edge.
Prerequisites: First Worker, Storage Landscape
The Problem
Without Hyperdrive, every Worker request to an external database looks like this:
- DNS lookup (~5ms)
- TCP handshake (~30-100ms depending on distance)
- TLS handshake (~30-100ms)
- Authentication (~10ms)
- Query execution (~5-50ms)
Steps 1-4 add roughly 75-215ms of latency before your query even runs. Workers are fast (~0ms cold start), but the database connection dominates response time.
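Summing the setup steps makes the overhead concrete. A quick back-of-the-envelope calculation, using the illustrative ranges listed above:

```typescript
// Per-request latency budget (ms) for a direct database connection,
// using the illustrative [low, high] ranges from the list above.
const steps: Record<string, [number, number]> = {
  dnsLookup: [5, 5],
  tcpHandshake: [30, 100],
  tlsHandshake: [30, 100],
  authentication: [10, 10],
};

// Total connection setup paid before the query itself runs.
const setup = Object.values(steps).reduce(
  ([lo, hi], [l, h]) => [lo + l, hi + h],
  [0, 0],
);
console.log(setup); // [ 75, 215 ]
```

Every request pays this setup cost unless connections are reused, which is exactly what Hyperdrive's pooling provides.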
What Hyperdrive Does
Hyperdrive maintains a pool of warm connections near your database’s region. When your Worker queries the database, Hyperdrive:
- Intercepts the connection string
- Routes the query through an existing warm connection
- Optionally caches read query results at the edge
- Returns the result to your Worker
You change one line of code (the connection string), and latency drops dramatically.
Performance Comparison
| Scenario | Direct Connection | With Hyperdrive | Improvement |
|---|---|---|---|
| Cold start (first query) | 150-300ms | 20-40ms | 5-10x |
| Warm query (same region) | 50-100ms | 5-15ms | 5-10x |
| Warm query (cross-region) | 100-200ms | 10-30ms | 5-7x |
| Cached read | 50-100ms | 1-5ms | 20-50x |
The biggest gains come from eliminating connection setup. Cached reads are nearly instant because they never reach the database.
Setup
1. Create a Hyperdrive config
```shell
npx wrangler hyperdrive create my-database \
  --connection-string="postgres://user:password@db.example.com:5432/mydb"
```
This returns a Hyperdrive ID. The connection string is encrypted and stored securely; it never appears in your Worker code.
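The connection string itself is just a URL. If you need to double-check the host, port, or database name before handing it to Wrangler, Node's built-in `URL` class can pick it apart (the credentials below are the placeholder values from the command above):

```typescript
// Parse a Postgres connection string with Node's WHATWG URL class.
// These are the placeholder credentials from the wrangler command above.
const conn = new URL("postgres://user:password@db.example.com:5432/mydb");

console.log(conn.hostname);          // "db.example.com"
console.log(conn.port);              // "5432"
console.log(conn.username);          // "user"
console.log(conn.pathname.slice(1)); // database name: "mydb"
```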
2. Add the binding to wrangler.jsonc
```jsonc
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id>"
    }
  ]
}
```
3. Use in your Worker
Replace your database connection string with env.HYPERDRIVE.connectionString:
```typescript
import { Hono } from "hono";
import postgres from "postgres";

const app = new Hono<{ Bindings: Env }>();

app.get("/users", async (c) => {
  // Hyperdrive provides a connection string that routes through the pool
  const sql = postgres(c.env.HYPERDRIVE.connectionString);
  const users = await sql`SELECT id, name, email FROM users LIMIT 50`;
  return c.json(users);
});

export default app;
```
That’s the entire change. The postgres driver connects to Hyperdrive’s pooled connection instead of directly to your database.
Supported Drivers
Hyperdrive works with any driver that accepts a connection string:
| Language | Driver | Notes |
|---|---|---|
| TypeScript | postgres (Postgres.js) | Recommended; works out of the box |
| TypeScript | pg | Works; use pg.Pool with the connection string |
| TypeScript | mysql2 | MySQL support |
| TypeScript | Drizzle ORM | Works with postgres or pg driver underneath |
| TypeScript | Kysely | Works with pg dialect |
Gotcha: Some ORMs (notably Prisma) manage their own connection pooling and may not work seamlessly with Hyperdrive. Prisma uses a custom protocol layer that bypasses standard connection strings in some configurations. Check the Cloudflare docs for your specific ORM before committing to Hyperdrive.
Query Caching
Hyperdrive can cache read queries at the edge. Caching is on by default; tune it on an existing config with:
```shell
npx wrangler hyperdrive update my-database \
  --caching-disabled=false \
  --max-age=60 \
  --stale-while-revalidate=15
```
Or set caching options when creating:
```shell
npx wrangler hyperdrive create my-database \
  --connection-string="postgres://..." \
  --max-age=60 \
  --stale-while-revalidate=15
```
Caching options:
| Option | Default | Description |
|---|---|---|
| `--caching-disabled` | `false` | Disable caching entirely |
| `--max-age` | `60` | Seconds before a cached response is stale |
| `--stale-while-revalidate` | `15` | Seconds to serve stale while refreshing |
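Assuming these options follow standard HTTP cache-control semantics (`max-age` for freshness, `stale-while-revalidate` for a background-refresh window), the lifecycle of a cached result looks roughly like this sketch:

```typescript
// Illustrative sketch of a cache entry's lifecycle under max-age=60 and
// stale-while-revalidate=15, assuming HTTP cache-control-style semantics.
function cacheState(ageSeconds: number, maxAge = 60, swr = 15): string {
  if (ageSeconds < maxAge) return "fresh";       // served from edge cache
  if (ageSeconds < maxAge + swr) return "stale"; // served stale, refreshed in background
  return "expired";                              // query goes back to the database
}

console.log(cacheState(30)); // "fresh"
console.log(cacheState(70)); // "stale"
console.log(cacheState(90)); // "expired"
```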
What Gets Cached
- Cached: `SELECT` queries (reads)
- Not cached: `INSERT`, `UPDATE`, `DELETE`, transactions, prepared statements with side effects
Caching is automatic and transparent. You do not need to change your queries.
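As a rough mental model (a simplification; Hyperdrive's actual query inspection is more thorough), the read/write split looks like:

```typescript
// Simplified sketch of the cacheable/non-cacheable split described above.
// Hyperdrive's real query inspection is more sophisticated; this only
// illustrates the rule of thumb: plain reads cache, everything else doesn't.
function looksCacheable(query: string): boolean {
  const q = query.trim().toUpperCase();
  if (!q.startsWith("SELECT")) return false; // writes and DDL never cache
  if (q.includes("FOR UPDATE")) return false; // locking reads hit the database
  return true;
}

console.log(looksCacheable("SELECT id FROM users"));        // true
console.log(looksCacheable("UPDATE users SET name = 'x'")); // false
```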
Bypassing Cache
For queries that must always be fresh, use a transaction (even a read-only one):
```typescript
const sql = postgres(c.env.HYPERDRIVE.connectionString);

// This bypasses the cache because it's in a transaction
const freshData = await sql.begin(async (tx) => {
  return tx`SELECT * FROM accounts WHERE id = ${id}`;
});
```
When to Use Hyperdrive vs D1
| | Hyperdrive | D1 |
|---|---|---|
| Use case | Existing Postgres/MySQL database | New project, no existing database |
| Database location | Your infrastructure (AWS RDS, Supabase, Neon, etc.) | Cloudflare-managed |
| Connection | Over the internet (pooled) | Native binding (no network hop) |
| SQL dialect | Postgres or MySQL | SQLite |
| Migrations | Your existing tooling | wrangler d1 migrations |
| Pricing | Hyperdrive free tier + your database costs | D1 pricing (rows read/written) |
Use Hyperdrive when you have an existing Postgres/MySQL database and want to query it from Workers without rewriting everything for D1.
Use D1 when you’re building from scratch and want the simplest, fastest option with no external infrastructure.
Configuration Management
List, update, and delete Hyperdrive configs:
```shell
# List all configs
npx wrangler hyperdrive list

# Update connection string
npx wrangler hyperdrive update my-database \
  --connection-string="postgres://new-user:new-pass@new-host:5432/mydb"

# Delete a config
npx wrangler hyperdrive delete my-database
```
Local Development
Hyperdrive works in local development with wrangler dev. Locally, the Worker connects directly to your database (bypassing the pool), so you get the same query behavior without needing the Hyperdrive service.
```shell
npx wrangler dev
```
Gotcha: Local dev connects directly, so latency characteristics differ from production. If you’re optimizing query patterns for Hyperdrive caching, test against a deployed Worker to see realistic cache hit rates.
Practical Example: Migrating an Existing API
If you have a Node.js API on a VM connecting to Postgres, here’s the migration path:
```typescript
// Before: direct connection from a VM
const sql = postgres("postgres://user:pass@db.example.com:5432/mydb");

// After: same driver, Hyperdrive connection string
const sql = postgres(c.env.HYPERDRIVE.connectionString);
```
The query code stays identical. Only the connection string source changes. Your existing Drizzle/Kysely schema definitions, queries, and migrations all work unchanged.