Hyperdrive

Hyperdrive solves the biggest performance problem with edge compute: connecting to traditional databases. Every Worker invocation that needs Postgres or MySQL pays the cost of a TCP+TLS handshake to a remote database. Hyperdrive eliminates this by maintaining persistent connection pools close to your database and caching query results at the edge.

Prerequisites: First Worker, Storage Landscape

The Problem

Without Hyperdrive, every Worker request to an external database looks like this:

  1. DNS lookup (~5ms)
  2. TCP handshake (~30-100ms depending on distance)
  3. TLS handshake (~30-100ms)
  4. Authentication (~10ms)
  5. Query execution (~5-50ms)

Steps 1-4 add roughly 75-215ms of latency before your query even runs. Workers are fast (~0ms cold start), but the database connection dominates response time.
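To make the breakdown concrete, here is a quick back-of-the-envelope calculation using the midpoints of the estimates above (illustrative numbers only, not measurements):

```typescript
// Rough latency budget for a direct database connection from a Worker.
// Each value is the midpoint of the estimated range above.
const steps = {
  dnsLookup: 5,       // ~5ms
  tcpHandshake: 65,   // midpoint of 30-100ms
  tlsHandshake: 65,   // midpoint of 30-100ms
  authentication: 10, // ~10ms
  queryExecution: 27, // midpoint of 5-50ms (rounded down)
};

// Steps 1-4: everything that happens before the query runs
const setupCost =
  steps.dnsLookup + steps.tcpHandshake + steps.tlsHandshake + steps.authentication;
const total = setupCost + steps.queryExecution;

console.log(`connection setup: ${setupCost}ms of ${total}ms total`);
```

Under these midpoint assumptions, setup accounts for 145ms of a 172ms request: the connection, not the query, is the bottleneck Hyperdrive removes.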

What Hyperdrive Does

Hyperdrive maintains a pool of warm connections near your database’s region. When your Worker queries the database, Hyperdrive:

  1. Intercepts the connection string
  2. Routes the query through an existing warm connection
  3. Optionally caches read query results at the edge
  4. Returns the result to your Worker

You change one line of code (the connection string), and latency drops dramatically.

Performance Comparison

| Scenario                   | Direct Connection | With Hyperdrive | Improvement |
|----------------------------|-------------------|-----------------|-------------|
| Cold start (first query)   | 150-300ms         | 20-40ms         | 5-10x       |
| Warm query (same region)   | 50-100ms          | 5-15ms          | 5-10x       |
| Warm query (cross-region)  | 100-200ms         | 10-30ms         | 5-7x        |
| Cached read                | 50-100ms          | 1-5ms           | 20-50x      |

The biggest gains come from eliminating connection setup. Cached reads are nearly instant because they never reach the database.

Setup

1. Create a Hyperdrive config

npx wrangler hyperdrive create my-database \
  --connection-string="postgres://user:password@db.example.com:5432/mydb"

This returns a Hyperdrive ID. The connection string is encrypted and stored securely; it never appears in your Worker code.

2. Add the binding to wrangler.jsonc

{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id>"
    }
  ]
}

3. Use in your Worker

Replace your database connection string with env.HYPERDRIVE.connectionString:

import { Hono } from "hono";
import postgres from "postgres";

const app = new Hono<{ Bindings: Env }>();

app.get("/users", async (c) => {
  // Hyperdrive provides a connection string that routes through the pool
  const sql = postgres(c.env.HYPERDRIVE.connectionString);

  const users = await sql`SELECT id, name, email FROM users LIMIT 50`;
  return c.json(users);
});

export default app;

That’s the entire change. The postgres driver connects through Hyperdrive’s pool instead of directly to your database.

Supported Drivers

Hyperdrive works with any driver that accepts a connection string:

| Language   | Driver                 | Notes                                          |
|------------|------------------------|------------------------------------------------|
| TypeScript | postgres (Postgres.js) | Recommended; works out of the box              |
| TypeScript | pg                     | Works; use pg.Pool with the connection string  |
| TypeScript | mysql2                 | MySQL support                                  |
| TypeScript | Drizzle ORM            | Works with postgres or pg driver underneath    |
| TypeScript | Kysely                 | Works with pg dialect                          |

Gotcha: Some ORMs (notably Prisma) manage their own connection pooling and may not work seamlessly with Hyperdrive. Prisma uses a custom protocol layer that bypasses standard connection strings in some configurations. Check the Cloudflare docs for your specific ORM before committing to Hyperdrive.

Query Caching

Hyperdrive can cache read queries at the edge. Caching is on by default; tune it in the Hyperdrive config:

npx wrangler hyperdrive update my-database \
  --caching-disabled=false \
  --max-age=60 \
  --stale-while-revalidate=15

Or set caching options when creating:

npx wrangler hyperdrive create my-database \
  --connection-string="postgres://..." \
  --max-age=60 \
  --stale-while-revalidate=15

Caching options:

| Option                     | Default | Description                                    |
|----------------------------|---------|------------------------------------------------|
| --caching-disabled         | false   | Disable caching entirely                       |
| --max-age                  | 60      | Seconds before a cached response is stale      |
| --stale-while-revalidate   | 15      | Seconds to serve stale results while refreshing |

What Gets Cached

  • Cached: SELECT queries (reads)
  • Not cached: INSERT, UPDATE, DELETE, transactions, prepared statements with side effects

Caching is automatic and transparent. You do not need to change your queries.
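As a rough mental model (this is a hypothetical sketch, not Hyperdrive’s actual implementation), the rules above amount to a check like:

```typescript
// Hypothetical illustration of the caching rules above.
// Hyperdrive's real logic is internal; this only mirrors the documented behavior.
function isCacheable(query: string, inTransaction: boolean): boolean {
  if (inTransaction) return false; // transactions always reach the database
  const stmt = query.trim().toUpperCase();
  return stmt.startsWith("SELECT"); // only plain reads are cache candidates
}

isCacheable("SELECT * FROM users", false);        // true: a plain read
isCacheable("UPDATE users SET name = 'x'", false); // false: a write
isCacheable("SELECT * FROM accounts", true);       // false: inside a transaction
```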

Bypassing Cache

For queries that must always be fresh, use a transaction (even a read-only one):

const sql = postgres(c.env.HYPERDRIVE.connectionString);

// This bypasses the cache because it's in a transaction
const freshData = await sql.begin(async (tx) => {
  return tx`SELECT * FROM accounts WHERE id = ${id}`;
});

When to Use Hyperdrive vs D1

|                   | Hyperdrive                                        | D1                                  |
|-------------------|---------------------------------------------------|-------------------------------------|
| Use case          | Existing Postgres/MySQL database                  | New project, no existing database   |
| Database location | Your infrastructure (AWS RDS, Supabase, Neon, etc.) | Cloudflare-managed               |
| Connection        | Over the internet (pooled)                        | Native binding (no network hop)     |
| SQL dialect       | Postgres or MySQL                                 | SQLite                              |
| Migrations        | Your existing tooling                             | wrangler d1 migrations              |
| Pricing           | Hyperdrive free tier + your database costs        | D1 pricing (rows read/written)      |

Use Hyperdrive when you have an existing Postgres/MySQL database and want to query it from Workers without rewriting everything for D1.

Use D1 when you’re building from scratch and want the simplest, fastest option with no external infrastructure.

Configuration Management

List, update, and delete Hyperdrive configs:

# List all configs
npx wrangler hyperdrive list

# Update connection string
npx wrangler hyperdrive update my-database \
  --connection-string="postgres://new-user:new-pass@new-host:5432/mydb"

# Delete a config
npx wrangler hyperdrive delete my-database

Local Development

Hyperdrive works in local development with wrangler dev. Locally, the Worker connects directly to your database (bypassing the pool), so you get the same query behavior without needing the Hyperdrive service.

npx wrangler dev

Gotcha: Local dev connects directly, so latency characteristics differ from production. If you’re optimizing query patterns for Hyperdrive caching, test against a deployed Worker to see realistic cache hit rates.

Practical Example: Migrating an Existing API

If you have a Node.js API on a VM connecting to Postgres, here’s the migration path:

// Before: direct connection from a VM
const sql = postgres("postgres://user:pass@db.example.com:5432/mydb");

// After: same driver, Hyperdrive connection string
const sql = postgres(c.env.HYPERDRIVE.connectionString);

The query code stays identical. Only the connection string source changes. Your existing Drizzle/Kysely schema definitions, queries, and migrations all work unchanged.