Transports
How MCP servers communicate with clients. The transport determines the connection model, scalability characteristics, and deployment options.
Transport Types
| Transport | Use case | Protocol | Statefulness |
|---|---|---|---|
| stdio | Local development, CLI tools | stdin/stdout | Per-process |
| sse | Legacy remote servers | Server-Sent Events over HTTP | Stateful sessions |
| streamable-http | Production remote servers | Standard HTTP with optional streaming | Stateless |
stdio (Local)
The default transport. FastMCP runs as a subprocess, communicating through stdin/stdout:
```python
from fastmcp import FastMCP

mcp = FastMCP("MyServer")
mcp.run(transport="stdio")  # default
```
When to use: Local development, integration with Claude Desktop, CLI tool wrapping. The client spawns the server as a subprocess.
Client configuration (Claude Desktop):
```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["server.py"]
    }
  }
}
```
Characteristics:
- No network setup needed
- One server per client connection
- Process lifecycle tied to client
- Fastest option (no network overhead)
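Under the hood, the stdio transport exchanges newline-delimited JSON-RPC 2.0 messages over the pipe: each message is one line, and messages must not contain embedded newlines. A minimal framing sketch; the `frame_message` helper and the field values (protocol version, client info) are illustrative, not part of any SDK:

```python
import json

def frame_message(method: str, params: dict, msg_id: int) -> bytes:
    # Serialize one JSON-RPC 2.0 request as a single newline-terminated
    # line, the framing the stdio transport expects. Illustrative helper.
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")

# Example: the initialize request that opens every MCP session.
line = frame_message(
    "initialize",
    {
        "protocolVersion": "2025-03-26",  # one published spec revision
        "capabilities": {},
        "clientInfo": {"name": "demo", "version": "0.1"},
    },
    1,
)
decoded = json.loads(line.decode("utf-8"))
print(decoded["method"])  # initialize
```

The client writes such lines to the server's stdin and reads responses, one per line, from its stdout.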
SSE (Legacy Remote)
Server-Sent Events over HTTP. The original remote transport:
```python
mcp.run(transport="sse", host="0.0.0.0", port=8000)
```
When to use: Existing deployments that already use SSE. For new servers, prefer streamable HTTP.
Characteristics:
- Long-lived HTTP connections
- Stateful sessions (server maintains connection state)
- Difficult to load balance (requires sticky sessions)
- May be blocked by firewalls or proxies that don’t support long-lived connections
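For reference, SSE delivers each server-to-client message as a text event frame: `event:` and `data:` lines terminated by a blank line. A sketch of that wire format; the helper and the JSON payload are illustrative:

```python
def sse_frame(data: str, event: str = "message") -> str:
    # Format one Server-Sent Events frame. The blank line (second "\n")
    # marks the end of the event. Illustrative helper.
    return f"event: {event}\ndata: {data}\n\n"

frame = sse_frame('{"jsonrpc": "2.0", "method": "ping"}')
print(frame)
```

The long-lived connection is simply a stream of these frames, which is why intermediaries that buffer or time out idle responses can break it.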
Streamable HTTP (New Standard)
The recommended transport for production remote servers:
```python
mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)
```
Client configuration:
```json
{
  "mcpServers": {
    "my-server": {
      "transport": "streamable-http",
      "url": "https://my-server.example.com/v1"
    }
  }
}
```
Characteristics:
- Standard HTTP request-response with optional chunked streaming
- Stateless by default, works with any load balancer
- Firewall-friendly (standard HTTPS)
- Built-in resumability for interrupted connections
- Supports progressive response streaming
Choosing a Transport
```
Is the server local (same machine as client)?
├── Yes → stdio
└── No (remote server)
    ├── New server → streamable-http
    └── Existing SSE server → keep sse, migrate when convenient
```
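The decision tree above can be captured in a few lines; `choose_transport` is an illustrative helper, not part of FastMCP:

```python
def choose_transport(is_local: bool, has_existing_sse: bool = False) -> str:
    # Illustrative helper mirroring the decision tree above.
    if is_local:
        return "stdio"            # same machine: no network needed
    if has_existing_sse:
        return "sse"              # keep legacy deployments running
    return "streamable-http"      # default for new remote servers

print(choose_transport(True))                           # stdio
print(choose_transport(False))                          # streamable-http
print(choose_transport(False, has_existing_sse=True))   # sse
```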
Scalability Considerations
stdio
```
Client ──subprocess──> Server
```
One server per client. Scales by spawning more processes. No shared state between clients.
SSE
```
Client ──long-lived HTTP──> Server (stateful)
                              └── Session state in memory
```
Requires sticky sessions for load balancing. Session state lives in server memory. Horizontal scaling requires session affinity.
Streamable HTTP
```
Client ──HTTP request──> Load Balancer ──> Server 1
                                       ──> Server 2
                                       ──> Server 3
```
Standard load balancing works. No session affinity needed. State (if any) lives in external storage (Redis, database).
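Because any replica may handle any request, per-session state must live outside the process. A sketch of the pattern, with a plain dict standing in for Redis or a database; the `SessionStore` class and `handle_request` function are illustrative:

```python
class SessionStore:
    # External session state keyed by a session ID carried on each request.
    # In production this would be Redis or a database; a dict stands in here.
    def __init__(self):
        self._store = {}

    def get(self, session_id: str) -> dict:
        return self._store.setdefault(session_id, {})

    def put(self, session_id: str, state: dict) -> None:
        self._store[session_id] = state

store = SessionStore()  # shared by all server replicas in a real deployment

def handle_request(session_id: str, key: str, value: str) -> dict:
    # Any replica can serve this request: it reads state from the store,
    # mutates it, and writes it back. No in-process session memory.
    state = store.get(session_id)
    state[key] = value
    store.put(session_id, state)
    return state

print(handle_request("abc123", "cursor", "page-2"))  # {'cursor': 'page-2'}
```

This is what makes session affinity unnecessary: the load balancer can route each request anywhere.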
Server Capability Discovery
Clients discover what a server supports through the initialize handshake:
```json
{
  "capabilities": {
    "tools": { "listChanged": true },
    "resources": { "subscribe": true },
    "prompts": { "listChanged": true },
    "tasks": true,
    "elicitation": true
  }
}
```
Clients adapt their behavior based on the declared capabilities. A server that doesn't declare the tasks capability won't receive task-related requests.
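A client can gate features on the capabilities returned from the handshake. A sketch using a payload shaped like the example above; the `supports` helper is illustrative, not an SDK function:

```python
import json

# Capabilities as they might arrive in an initialize result (illustrative).
init_result = json.loads("""{
  "capabilities": {
    "tools": { "listChanged": true },
    "resources": { "subscribe": true },
    "prompts": { "listChanged": true }
  }
}""")

def supports(capabilities: dict, *path: str) -> bool:
    # Walk a capability path; any missing key means "not supported".
    node = capabilities
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return bool(node)

caps = init_result["capabilities"]
print(supports(caps, "resources", "subscribe"))  # True
print(supports(caps, "tasks"))                   # False
```

Treating an absent key as "unsupported" is the safe default: the client simply never sends requests the server didn't opt into.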
Next
- Architecture - server class, providers, lifecycle
- MCP in 2026 - spec evolution, governance, enterprise features