# Quickstart: Build an Agent with the OpenAI Agents SDK

Build a multi-agent system with handoffs, guardrails, and tracing.
## Prerequisites

```bash
pip install openai-agents
export OPENAI_API_KEY=sk-...
```
## Minimal Agent

```python
from agents import Agent, Runner

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
)

result = Runner.run_sync(agent, "What is the capital of France?")
print(result.final_output)
```
## Agent with Tools

```python
from agents import Agent, Runner, function_tool

@function_tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Your search implementation
    return f"Results for: {query}"

@function_tool
def read_file(path: str) -> str:
    """Read a file from disk."""
    with open(path) as f:
        return f.read()

agent = Agent(
    name="researcher",
    instructions="Research topics using web search and local files.",
    tools=[search_web, read_file],
)

result = Runner.run_sync(agent, "Summarize the README.md in this project")
print(result.final_output)
```
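Under the hood, `function_tool` derives a JSON schema for the model from the function's signature and docstring. As a rough illustration of that idea (a simplified stdlib sketch, not the SDK's actual implementation, which handles many more types and docstring formats):

```python
import inspect
from typing import get_type_hints

def tool_schema(fn) -> dict:
    """Build a minimal JSON-schema-like description from a function signature."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = get_type_hints(fn)
    properties = {
        name: {"type": type_map.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

schema = tool_schema(search_web)
# schema["name"] == "search_web"; the query parameter maps to {"type": "string"}
```

This is why descriptive docstrings and precise type hints matter: they are the only documentation the model sees when deciding whether and how to call a tool.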
## Multi-Agent with Handoffs

The headline feature: agents that delegate to each other.

```python
from agents import Agent, Runner, handoff

# Specialist agents
security_agent = Agent(
    name="Security Reviewer",
    instructions="""Review code for security vulnerabilities.
    Focus on: injection, auth issues, data exposure.
    Be specific about line numbers and fixes.""",
    tools=[read_file, search_web],
)

performance_agent = Agent(
    name="Performance Reviewer",
    instructions="""Review code for performance issues.
    Focus on: N+1 queries, memory leaks, blocking calls.
    Suggest concrete optimizations.""",
    tools=[read_file],
)

# Coordinator that routes to specialists; handoff() customizes the
# description the coordinator's model sees for each delegation
coordinator = Agent(
    name="Review Coordinator",
    instructions="""You coordinate code reviews.
    Delegate security concerns to the Security Reviewer.
    Delegate performance concerns to the Performance Reviewer.
    Synthesize findings into a final report.""",
    handoffs=[
        handoff(security_agent, tool_description_override="Route security-related review tasks"),
        handoff(performance_agent, tool_description_override="Route performance-related review tasks"),
    ],
)

result = Runner.run_sync(
    coordinator,
    "Review this code:\n```python\n"
    "def get_user(id):\n"
    "    return db.execute(f'SELECT * FROM users WHERE id={id}')\n"
    "```"
)
print(result.final_output)
```
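For reference, the snippet under review interpolates user input straight into SQL, which is the injection the Security Reviewer should flag. The standard fix is a parameterized query; a minimal `sqlite3` sketch (the table and data here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")

def get_user(user_id: int):
    # The ? placeholder lets the driver bind the value safely,
    # instead of f-string interpolation into the SQL text
    return db.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()

print(get_user(1))  # (1, 'alice')
```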
## With Guardrails

Input validation that runs in parallel with the agent:

```python
import re

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    RunContextWrapper,
    Runner,
    input_guardrail,
)

@input_guardrail
async def block_pii(
    ctx: RunContextWrapper[None], agent: Agent, input: str
) -> GuardrailFunctionOutput:
    """Trip the guardrail if the input contains an email address or SSN."""
    has_email = bool(re.search(r'\b[\w.-]+@[\w.-]+\.\w+\b', input))
    has_ssn = bool(re.search(r'\b\d{3}-\d{2}-\d{4}\b', input))
    return GuardrailFunctionOutput(
        output_info={"has_email": has_email, "has_ssn": has_ssn},
        tripwire_triggered=has_email or has_ssn,
    )

agent = Agent(
    name="support",
    instructions="Help customers with their accounts.",
    tools=[lookup_account, process_refund],  # assumed @function_tool helpers defined elsewhere
    input_guardrails=[block_pii],
)

# A tripped guardrail raises before the agent responds
try:
    result = Runner.run_sync(agent, "Refund john@example.com")
except InputGuardrailTripwireTriggered:
    print("PII detected in input. Please remove personal information.")
```
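The detection regexes can be exercised without running the agent at all, which is handy for unit-testing guardrail logic in isolation:

```python
import re

EMAIL = re.compile(r'\b[\w.-]+@[\w.-]+\.\w+\b')
SSN = re.compile(r'\b\d{3}-\d{2}-\d{4}\b')

def contains_pii(text: str) -> bool:
    """Return True if the text contains an email address or SSN."""
    return bool(EMAIL.search(text) or SSN.search(text))

print(contains_pii("Refund john@example.com"))  # True
print(contains_pii("My SSN is 123-45-6789"))    # True
print(contains_pii("Refund order #4521"))       # False
```

Keeping the regexes in plain functions like this means the same checks can be reused across agents and tested with ordinary assertions.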
## With Tracing

Built-in observability:

```python
from agents import Agent, Runner, trace

agent = Agent(
    name="assistant",
    instructions="Help with coding tasks.",
    tools=[read_file, edit_file, run_tests],  # edit_file and run_tests assumed defined elsewhere
)

# Everything inside the trace block is recorded under one workflow
with trace("fix-bug-123"):
    result = Runner.run_sync(agent, "Fix the failing test in test_auth.py")

# Traces include:
# - LLM calls (model, tokens, latency)
# - Tool executions (input, output, duration)
# - Handoffs (from agent, to agent)
# - Guardrail checks (pass/fail)
```
## Async Execution

For production use:

```python
import asyncio

from openai.types.responses import ResponseTextDeltaEvent

from agents import Agent, Runner

async def main():
    agent = Agent(
        name="assistant",
        instructions="You are a helpful assistant.",
        tools=[search_web],
    )

    # Streaming: run_streamed returns a result whose events you iterate
    result = Runner.run_streamed(agent, "Research MCP protocol")
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(
            event.data, ResponseTextDeltaEvent
        ):
            print(event.data.delta, end="", flush=True)
        elif event.type == "run_item_stream_event" and event.item.type == "tool_call_item":
            print("\n[Calling a tool...]")

asyncio.run(main())
```
## Using Non-OpenAI Models

The SDK is provider-agnostic. Non-OpenAI providers are available through the optional LiteLLM integration (`pip install "openai-agents[litellm]"`), and Azure OpenAI works through an Azure-configured OpenAI client:

```python
from openai import AsyncAzureOpenAI

from agents import Agent, OpenAIChatCompletionsModel
from agents.extensions.models.litellm_model import LitellmModel

# Use Claude via LiteLLM (reads ANTHROPIC_API_KEY from the environment)
claude_agent = Agent(
    name="claude-assistant",
    model=LitellmModel(model="anthropic/claude-sonnet-4-6"),
    instructions="You are a helpful assistant.",
)

# Use Azure OpenAI via an Azure client pointed at your deployment
azure_client = AsyncAzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_version="2024-06-01",
)
azure_agent = Agent(
    name="azure-assistant",
    model=OpenAIChatCompletionsModel(model="my-deployment", openai_client=azure_client),
    instructions="You are a helpful assistant.",
)
```
## What’s Next

- Multi-Agent Orchestration - advanced handoff patterns
- SDK Comparison - how this compares to Claude/Copilot
- The Agentic Loop - understand what’s happening under the hood