SDK Comparison: Claude vs OpenAI vs Copilot

How the three major agentic SDKs compare in architecture, abstractions, and use cases.

Overview

| | Claude Agent SDK | OpenAI Agents SDK | Copilot SDK |
| --- | --- | --- | --- |
| Released | 2025 | March 2025 | Jan 2026 (preview) |
| Languages | Python, TypeScript | Python, TypeScript | Node.js, Python, Go, .NET |
| Model lock-in | Claude models only | Provider-agnostic | GitHub Models (multi-model) |
| Primary use | Build agents with Claude | Build multi-agent systems | Embed Copilot in any app |
| Open source | Yes | Yes | Yes |

Claude Agent SDK

Anthropic’s SDK that exposes Claude Code’s full agentic engine as a library. Available in Python (claude-agent-sdk) and TypeScript (@anthropic-ai/claude-agent-sdk).

Two levels

  1. Client SDK (anthropic) - you manage the tool loop yourself
  2. Agent SDK (claude-agent-sdk) - Claude manages the loop, you get streaming messages
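
The difference can be sketched in plain Python. With the level-1 client SDK, your code owns the loop that executes tool calls and feeds results back. In the sketch below, `send` stands in for a `client.messages.create()` call; the message and block shapes mirror the Anthropic Messages API, and the tool itself is hypothetical.

```python
def run_tool_loop(send, user_prompt, tools):
    """Level-1 style: keep calling the model until it stops asking for tools."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        response = send(messages)
        if response["stop_reason"] != "tool_use":
            return response  # model produced a final answer
        # Execute every requested tool and feed the results back.
        results = []
        for block in response["content"]:
            if block["type"] == "tool_use":
                output = tools[block["name"]](**block["input"])
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block["id"],
                    "content": output,
                })
        messages.append({"role": "assistant", "content": response["content"]})
        messages.append({"role": "user", "content": results})
```

With the level-2 Agent SDK, this entire loop lives inside `query()` and you only consume the message stream.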

Core abstractions (Agent SDK)

```python
from claude_agent_sdk import query, ClaudeAgentOptions

# query() is the main entry point - an async iterator of messages
async for message in query(
    prompt="Fix the bug in auth.py",
    options=ClaudeAgentOptions(
        model="claude-sonnet-4-6",
        permission_mode="default",
    ),
):
    print(message)
```

Unlike other SDKs where you define an Agent object, the Claude Agent SDK’s core is query() - an async iterator that gives you the full agentic loop with built-in tools (Read, Write, Edit, Bash, Grep, Glob, Agent, etc.).

Key features

| Feature | Description |
| --- | --- |
| Built-in tools | Read, Write, Edit, Bash, Glob, Grep, WebSearch, WebFetch, Agent |
| Custom tools | In-process MCP servers via create_sdk_mcp_server() + @tool |
| Subagents | AgentDefinition with isolated context, restricted tools, model overrides |
| Hooks | PreToolUse, PostToolUse, Stop, SessionStart, SessionEnd |
| Sessions | Capture session_id, resume with full context |
| Extended thinking | Internal reasoning blocks for complex problems |
| Computer use | GUI automation via screenshots + mouse/keyboard |
| MCP native | stdio, HTTP, SSE transports with wildcard permissions |
| Providers | Anthropic, Bedrock, Vertex AI, Azure AI Foundry |
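
As a sketch of the hook idea (the dict shapes here are illustrative, not the SDK's exact hook protocol): a PreToolUse hook sees a pending tool call and can veto it before anything runs.

```python
def pre_tool_use(tool_name, tool_input):
    """Illustrative PreToolUse-style guardrail: block dangerous Bash calls."""
    command = tool_input.get("command", "")
    if tool_name == "Bash" and "rm -rf" in command:
        return {"decision": "block", "reason": "destructive command"}
    return {"decision": "allow"}
```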

Advanced Tool Use (Beta)

Three API features that improve tool use at scale:

  • Tool Search Tool - auto-selects relevant tools from large sets (85% token reduction)
  • Programmatic Tool Calling - Claude writes Python to orchestrate tools (37% token savings)
  • Tool Use Examples - input_examples field on tool definitions (accuracy improved from 72% to 90%)
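
A sketch of what an input_examples field might look like on a tool definition, assuming the shape described in the beta docs (each example is an input object conforming to the tool's schema; the tool itself is hypothetical):

```python
ticket_tool = {
    "name": "create_ticket",
    "description": "File a support ticket",
    "input_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string"},
        },
        "required": ["title"],
    },
    # Worked examples teach the model how to fill the schema correctly.
    "input_examples": [
        {"title": "Login page 500s", "priority": "high"},
        {"title": "Typo in docs"},
    ],
}
```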

Unique strengths

  • query() gives you Claude Code’s full engine, not just an API wrapper
  • Extended thinking with interleaved think-act-think patterns reduces hallucination
  • Computer use enables GUI automation without APIs
  • Deep MCP integration (Anthropic created MCP)
  • 200K token context + automatic summarization for long sessions
  • Hooks provide guardrails without wrapping the entire SDK

OpenAI Agents SDK

OpenAI’s framework for multi-agent orchestration, evolved from its experimental “Swarm” project.

Core abstractions

```python
from agents import Agent, Runner, handoff, trace

# Define the specialists first so the triage agent can hand off to them
billing_agent = Agent(
    name="Billing",
    instructions="Handle billing inquiries.",
    tools=[lookup_invoice, process_refund],
)

technical_agent = Agent(
    name="Technical",
    instructions="Handle technical issues.",
)

triage_agent = Agent(
    name="Triage",
    instructions="Route customer requests to the right specialist.",
    handoffs=[billing_agent, technical_agent],
)

# Run with tracing
with trace("customer-support"):
    result = Runner.run_sync(triage_agent, "I need a refund")
```

Five primitives

| Primitive | What it does |
| --- | --- |
| Agent | LLM + instructions + tools + handoffs |
| Handoff | Transfer control from one agent to another |
| Guardrail | Input/output validation (runs in parallel) |
| Session | Conversation state management |
| Tracing | Observability for debugging and evaluation |

Handoffs - the key differentiator

OpenAI’s SDK is built around handoffs. Agents don’t just use tools; they delegate to other agents:

```python
# The agent can hand off to specialists
triage = Agent(
    name="triage",
    handoffs=[
        handoff(billing, tool_description_override="Route billing questions"),
        handoff(technical, tool_description_override="Route technical issues"),
    ],
)
```

The triage agent decides at runtime which specialist to invoke. The handoff transfers the full conversation context.

Guardrails

Validation that runs in parallel with agent execution:

```python
from agents import Agent, GuardrailFunctionOutput, input_guardrail

@input_guardrail
async def no_pii(context, agent, user_input) -> GuardrailFunctionOutput:
    """Block requests containing PII."""
    return GuardrailFunctionOutput(
        output_info=None,
        tripwire_triggered=contains_pii(user_input),
    )

agent = Agent(
    name="support",
    input_guardrails=[no_pii],
    tools=[...],
)
```

Guardrails run concurrently with the LLM call. If a guardrail trips, the run is aborted immediately rather than waiting for the LLM to finish.
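
The parallel behavior can be sketched with plain asyncio (call_llm and check_pii are toy stand-ins, not SDK functions): the model call starts immediately, and a tripped guardrail cancels it.

```python
import asyncio
import contextlib

async def call_llm(text):
    await asyncio.sleep(0.2)           # pretend the model is slow
    return f"answer to: {text}"

async def check_pii(text):
    await asyncio.sleep(0.01)          # the check finishes much sooner
    return "ssn" in text.lower()       # True means the guardrail trips

async def run_with_guardrail(user_input):
    llm = asyncio.create_task(call_llm(user_input))
    if await check_pii(user_input):
        llm.cancel()                   # abort without waiting for the model
        with contextlib.suppress(asyncio.CancelledError):
            await llm
        return "blocked: PII detected"
    return await llm
```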

Tracing

Built-in observability:

```python
with trace("customer-flow"):
    result = Runner.run_sync(agent, user_input)
    # Traces include: LLM calls, tool executions,
    # handoffs, guardrail checks, timing data
```
Traces can be exported to OpenAI’s dashboard or any compatible backend.

Provider-agnostic

Despite the name, the SDK works with non-OpenAI models:

```python
from agents.extensions.models.litellm_model import LitellmModel

agent = Agent(
    name="claude-agent",
    model=LitellmModel(model="anthropic/claude-sonnet-4-6"),
    ...
)
```

GitHub Copilot SDK

GitHub’s SDK for embedding Copilot’s agentic engine into any application. Released January 2026 in technical preview.

Core capabilities

  • Production-grade execution loop - the same engine that powers Copilot CLI and agent mode
  • Multi-language - Node.js, Python, Go, .NET from day one
  • Multi-model routing - use different models for different tasks
  • MCP integration - connect to any MCP server for tools
  • Streaming - real-time response streaming

Architecture

```
Your Application
    |
    v
Copilot SDK
    |-- Agent loop (plan, execute, observe)
    |-- Tool system (built-in + custom + MCP)
    |-- Model router (GPT-4.1, GPT-5 mini, Claude, etc.)
    |
    v
GitHub Models API (or bring your own)
```
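
The router layer reduces to a mapping from task type to model; the route names and model IDs below are placeholders for illustration, not a real Copilot SDK API.

```python
MODEL_ROUTES = {
    "quick-edit": "gpt-5-mini",          # cheap model for small changes
    "deep-review": "claude-sonnet-4-6",  # stronger model for analysis
}
DEFAULT_MODEL = "gpt-4.1"

def pick_model(task_kind):
    """Choose a model per task; fall back to the default route."""
    return MODEL_ROUTES.get(task_kind, DEFAULT_MODEL)
```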

Key differentiator: multi-language from day one

Most agent SDKs start Python-first. Copilot SDK launched with four languages, making it accessible to backend teams regardless of their stack:

```go
// Go example
agent := copilot.NewAgent(copilot.AgentConfig{
    Model: "gpt-4.1",
    Tools: []copilot.Tool{readFile, editFile, runTests},
})

result, err := agent.Run(ctx, "Add error handling to the API endpoints")
```

Integration with Microsoft Agent Framework

The Copilot SDK integrates with the broader Microsoft Agent Framework, which provides:

  • Function calling and streaming
  • Multi-turn conversations
  • Shell command execution
  • File operations
  • URL fetching
  • MCP server integration

Side-by-Side: Building a Code Review Agent

Claude Agent SDK

```python
# Agent SDK style - Claude manages the loop
async for msg in query(
    prompt=f"Review this diff for security and quality issues:\n{diff}",
    options=ClaudeAgentOptions(
        model="claude-sonnet-4-6",
        agents={"security": security_agent, "quality": quality_agent},  # subagents
    ),
):
    if msg.type == "text":
        print(msg.text, end="")
```

OpenAI Agents SDK

```python
security_reviewer = Agent(name="Security", tools=[scan_vulnerabilities])
quality_reviewer = Agent(name="Quality", tools=[check_patterns])
coordinator = Agent(
    name="Coordinator",
    handoffs=[security_reviewer, quality_reviewer],
    instructions="Delegate to specialists, then synthesize findings.",
)
result = Runner.run_sync(coordinator, f"Review this diff:\n{diff}")
```

Copilot SDK (Node.js)

```javascript
const agent = new CopilotAgent({
  model: "gpt-4.1",
  tools: [readFile, searchCode, createComment],
  instructions: "Review code for security and quality issues.",
});
const result = await agent.run(`Review this diff:\n${diff}`);
```

When to Use What

| Scenario | Best SDK |
| --- | --- |
| Need extended thinking for hard reasoning | Claude Agent SDK |
| Building multi-agent systems with handoffs | OpenAI Agents SDK |
| Embedding agentic capabilities in Go/.NET apps | Copilot SDK |
| Need computer use (GUI automation) | Claude Agent SDK |
| Want built-in observability/tracing | OpenAI Agents SDK |
| Already in GitHub ecosystem | Copilot SDK |
| Need provider-agnostic agents | OpenAI Agents SDK or Copilot SDK |
| Need the largest context window | Claude Agent SDK (200K) |

Skills as Open Standard

Since December 2025, Skills (SKILL.md files) are an open standard that works across Claude Code, Cursor, Gemini CLI, Codex CLI, and Antigravity IDE. This means workflow definitions are portable regardless of which SDK or tool you use.
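
A minimal SKILL.md sketch, assuming the open standard's YAML-frontmatter shape (the skill name, description, and steps here are illustrative):

```markdown
---
name: release-notes
description: Draft release notes by summarizing merged pull requests
---

# Release Notes

1. List pull requests merged since the last tag.
2. Group them by area (features, fixes, docs).
3. Draft a changelog entry in the project's house style.
```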

Organization-level skills (workspace-wide deployment, centralized management) shipped in January 2026.

See Skills as Open Standard for details.

Common Patterns Across All SDKs

Despite different APIs, all three share:

  1. Agent = model + tools + instructions - the fundamental abstraction
  2. Tool definitions as JSON schemas - same format, different wrappers
  3. Agentic loop - run tools, feed results back, repeat
  4. Streaming support - real-time token and event streaming
  5. MCP compatibility - all three can use MCP servers for tools
  6. Skills compatibility - all three can use SKILL.md for workflow definitions
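
Pattern 2 in concrete terms: whatever the wrapper, a tool definition bottoms out in a name, a description, and a JSON Schema for its input. The field names below follow the common shape; individual SDKs spell them slightly differently.

```python
lookup_invoice = {
    "name": "lookup_invoice",
    "description": "Fetch an invoice by its ID",
    "input_schema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}
```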