Client SDK

FastMCP includes a full MCP client for connecting to any MCP server (not just FastMCP ones). Useful for testing, orchestration, and building MCP-aware applications.

Basic Usage

from fastmcp import Client

# Connect to a FastMCP instance (in-memory, for testing)
async with Client(mcp_server) as client:
    tools = await client.list_tools()
    result = await client.call_tool("tool_name", {"arg": "value"})
    resource = await client.read_resource("scheme://uri")
    prompt = await client.get_prompt("prompt_name", arguments={})
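Tool results come back as a list of MCP content blocks (text, image, etc.). A small helper for pulling out the text parts, written against duck-typed blocks so it works regardless of client library version; the block shape here is an assumption modeled on `mcp.types.TextContent` (a `type` field and a `text` field):

```python
from types import SimpleNamespace

def extract_text(blocks) -> str:
    """Join the text of all text-type content blocks from a tool result."""
    return "\n".join(b.text for b in blocks if getattr(b, "type", None) == "text")

# Stand-in blocks shaped like mcp.types.TextContent, for illustration:
demo = [SimpleNamespace(type="text", text="42"), SimpleNamespace(type="image", data=b"")]
print(extract_text(demo))  # → 42
```

Depending on the client version, the blocks may be the return value of `call_tool` itself or live under a `.content` attribute on the result object.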

Transport Options

from fastmcp import Client
from fastmcp.client.transports import (
    PythonStdioTransport,
    NodeStdioTransport,
    SSETransport,
    StreamableHttpTransport,
    MCPConfigTransport,
)

# Python subprocess (stdio) - runs the script with a Python interpreter
transport = PythonStdioTransport(script_path="server.py")

# Node.js subprocess (stdio)
transport = NodeStdioTransport(script_path="server.js")

# SSE (remote HTTP; legacy transport, superseded by streamable HTTP)
transport = SSETransport(url="http://server:8000/sse")

# Streamable HTTP
transport = StreamableHttpTransport(url="http://server:8000/streamable")

# From an mcp_config.json-style config (a dict of named servers)
import json
with open("path/to/mcp_config.json") as f:
    transport = MCPConfigTransport(json.load(f))

async with Client(transport) as client:
    # Use normally
    tools = await client.list_tools()
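Explicit transports are optional in many cases: `Client` can infer the transport from what you hand it (an in-memory server instance, a script path, or a URL). A simplified, dependency-free sketch of that kind of dispatch, using illustrative string labels rather than the real transport classes:

```python
# Simplified sketch of transport inference by target type. The real
# logic lives inside fastmcp's Client; the labels here are illustrative.
def pick_transport(target: str) -> str:
    if target.startswith(("http://", "https://")):
        # URL endpoints use an HTTP-based transport (SSE or streamable HTTP)
        return "http"
    if target.endswith(".py"):
        return "python-stdio"
    if target.endswith(".js"):
        return "node-stdio"
    raise ValueError(f"cannot infer transport for {target!r}")

print(pick_transport("server.py"))          # → python-stdio
print(pick_transport("http://host/mcp"))    # → http
```

In practice this means `Client("server.py")` or `Client("http://host/mcp")` usually works without constructing a transport by hand.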

Handlers

from mcp.types import CreateMessageRequestParams, CreateMessageResult, TextContent

# Exact handler signatures vary by FastMCP version; check the docs for
# your installed release.

# Log handler - receive log messages from the server
async def on_log(message):
    print(f"[{message.level}] {message.data}")

# Progress handler - track long-running operations
async def on_progress(progress: float, total: float | None, message: str | None):
    print(f"{progress}/{total}")

# Sampling handler - the server asks the client's LLM to generate text
async def on_sample(params: CreateMessageRequestParams) -> CreateMessageResult:
    # Route to your LLM of choice (call_llm is your own function)
    response = await call_llm(params.messages)
    return CreateMessageResult(
        role="assistant",
        content=TextContent(type="text", text=response),
        model="gpt-4",
    )

async with Client(
    server,
    log_handler=on_log,
    progress_handler=on_progress,
    sampling_handler=on_sample,
) as client:
    result = await client.call_tool("tool_that_samples", {})

Use Cases for MyLocalGPT

  1. Gateway client: MyLocalGPT’s Go core can spawn Python MCP servers as subprocesses, then use the Client SDK (or a Go equivalent) to call tools
  2. Server-to-server: One MCP server calling another (via ProxyProvider or direct Client usage)
  3. Testing: Validate all MCP servers in the ecosystem before deployment
  4. Orchestration: Build multi-step workflows that call multiple MCP tools in sequence
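Use case 4 can be sketched as a sequential pipeline where each tool's result feeds the next call. The `client` is anything with an async `call_tool(name, args)` method (a fastmcp `Client` in practice); the `"input"` argument name and the stub client are illustrative assumptions:

```python
import asyncio

async def run_pipeline(client, steps):
    """Call each (tool_name, args) step in order, threading the previous
    result into the next call under a hypothetical "input" argument."""
    result = None
    for name, args in steps:
        if result is not None:
            args = {**args, "input": result}
        result = await client.call_tool(name, args)
    return result

# Stub client for a dry run without any MCP server:
class StubClient:
    async def call_tool(self, name, args):
        return f"{name}({args.get('input', args)})"

out = asyncio.run(run_pipeline(StubClient(), [("fetch", {"url": "x"}), ("summarize", {})]))
print(out)  # → summarize(fetch({'url': 'x'}))
```

The same loop works unchanged against a real client, which is what makes the stub-based test useful before wiring it into MyLocalGPT's gateway.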