Skill Patterns

Five proven patterns for structuring skill logic, plus testing and distribution strategies. Most production skills combine 2-3 patterns.

Pattern 1: Sequential Workflow Orchestration

Use when: Multi-step processes must happen in a specific order.

# Step 1: Create Account
Call MCP tool: `create_customer`
Parameters: name, email, company

# Step 2: Setup Payment
Call MCP tool: `setup_payment_method`
Wait for: payment method verification

# Step 3: Create Subscription
Call MCP tool: `create_subscription`
Parameters: plan_id, customer_id (from Step 1)

Key techniques:

  • Explicit step ordering with dependencies
  • Validation at each stage before proceeding
  • Rollback instructions for failures
  • Data passing between steps (e.g., “customer_id from Step 1”)

Pattern 2: Multi-MCP Coordination

Use when: Workflows span multiple services.

# Phase 1: Design Export (Figma MCP)
Export design assets, generate specs, create manifest

# Phase 2: Asset Storage (Drive MCP)
Create project folder, upload assets, generate links

# Phase 3: Task Creation (Linear MCP)
Create dev tasks, attach asset links, assign team

# Phase 4: Notification (Slack MCP)
Post handoff summary to #engineering

Key techniques:

  • Clear phase separation with named MCP sources
  • Data passing between phases (links from Phase 2 feed Phase 3)
  • Validation before moving to next phase
  • Centralized error handling
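The four phases can be sketched as one function that threads data forward. The `figma`, `drive`, `linear`, and `slack` objects are hypothetical clients, one per MCP server, and the method names are illustrative rather than real MCP tool names.

```python
# Sketch of Pattern 2: one hypothetical client object per MCP server.
def design_handoff(figma, drive, linear, slack, project):
    # Phase 1: export design assets from Figma
    assets = figma.export_assets(project)

    # Phase 2: store assets in Drive; links produced here feed Phase 3
    folder = drive.create_folder(project)
    links = [drive.upload(folder, asset) for asset in assets]
    if not links:
        # Validate before moving to the next phase
        raise RuntimeError("no assets uploaded; aborting handoff")

    # Phase 3: create Linear tasks with the asset links attached
    tasks = [linear.create_task(project, link) for link in links]

    # Phase 4: post the handoff summary
    slack.post("#engineering", f"Handoff ready: {len(tasks)} tasks created")
    return tasks
```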

Pattern 3: Iterative Refinement

Use when: Output quality improves with iteration.

# Initial Draft
Fetch data, generate first draft, save to temp file

# Quality Check
Run validation: `scripts/check_report.py`
Identify: missing sections, formatting issues, data errors

# Refinement Loop
Address issues, regenerate sections, re-validate
Repeat until quality threshold met

# Finalization
Apply formatting, generate summary, save final version

Key techniques:

  • Explicit quality criteria (not “make it better”)
  • Validation scripts for deterministic checks
  • Know when to stop (threshold or max iterations)
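The refinement loop reduces to a few lines once the quality criteria are explicit. A minimal sketch, assuming the caller supplies a `check` function returning a numeric score plus a list of issues, and an `improve` function that addresses them:

```python
# Sketch of Pattern 3: explicit quality score, hard iteration cap.
def refine(draft, check, improve, threshold=0.9, max_iters=5):
    """check(draft) -> (score, issues); improve(draft, issues) -> new draft."""
    for _ in range(max_iters):
        score, issues = check(draft)
        if score >= threshold:   # know when to stop: quality threshold...
            return draft
        draft = improve(draft, issues)
    return draft                 # ...or the maximum number of iterations
```

The cap matters as much as the threshold: without `max_iters`, a validator that can never be satisfied loops forever.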

Pattern 4: Context-Aware Tool Selection

Use when: Same outcome, different tools depending on context.

# Decision Tree
1. Check file type and size
2. Route to appropriate handler:
   - Large files (>10MB): cloud storage MCP
   - Collaborative docs: Notion MCP
   - Code files: GitHub MCP
   - Temporary: local storage

# Explain the choice
Tell the user why that handler was selected

Key techniques:

  • Clear decision criteria
  • Fallback options for each branch
  • Transparency about choices made
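The decision tree maps naturally onto a routing function. The thresholds and handler names come from the pattern above; the function itself, and returning a reason string so the choice can be explained to the user, are illustrative.

```python
# Sketch of Pattern 4: route by context, return the reason for transparency.
def pick_handler(path, size_bytes, collaborative=False, temporary=False):
    if size_bytes > 10 * 1024 * 1024:
        return "cloud-storage-mcp", "file exceeds 10MB"
    if collaborative:
        return "notion-mcp", "collaborative document"
    if path.endswith((".py", ".ts", ".go", ".rs")):
        return "github-mcp", "code file"
    if temporary:
        return "local", "temporary file"
    return "local", "no specialized handler matched (fallback)"
```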

Pattern 5: Domain-Specific Intelligence

Use when: The skill adds specialized knowledge beyond tool access.

# Before Processing (Compliance Check)
Fetch transaction details via MCP
Apply compliance rules: sanctions, jurisdiction, risk level
Document compliance decision

# Processing
IF compliance passed: process transaction
ELSE: flag for review, create compliance case

# Audit Trail
Log all checks, record decisions, generate report

Key techniques:

  • Domain expertise embedded in decision logic
  • Compliance/validation before action
  • Comprehensive audit trail
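A compliance gate like the one above can be expressed as rules evaluated before the action, with every decision logged. The rule functions here (sanctions, risk level) are hypothetical placeholders for real compliance logic.

```python
# Sketch of Pattern 5: compliance checks run BEFORE the action, and every
# decision lands in the audit trail. Rule functions are hypothetical.
def process_transaction(tx, rules, audit_log):
    # Apply every compliance rule; collect the names of the ones that fail
    failed = [rule.__name__ for rule in rules if not rule(tx)]
    decision = "processed" if not failed else "flagged_for_review"
    # Audit trail: record the checks and the decision before returning
    audit_log.append({"tx": tx["id"], "failed_checks": failed,
                      "decision": decision})
    return decision
```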

Combining Patterns

| Skill | Patterns Used |
| --- | --- |
| orchestrate-plan | Sequential (#1) + Iterative (#3) + Domain (#5) |
| pr-review | Multi-MCP (#2) + Domain (#5) |
| swarm | Multi-MCP (#2) + Context-aware (#4) |
| design-arena | Multi-MCP (#2) + Iterative (#3) |

Testing Skills

Three levels of testing rigor, from quick iteration to systematic evaluation.

Level 1: Manual Testing

Fast iteration, no setup. Run 10-20 queries and track activation rate. Target: 90%+ on relevant queries.

Should trigger:
- "Help me set up a new ProjectHub workspace"
- "I need to create a project in ProjectHub"

Should NOT trigger:
- "What's the weather?"
- "Help me write Python code"

Tip: Ask Claude directly: “When would you use the [skill name] skill?” Claude quotes the description back, showing you what it sees.
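Tracking the activation rate from those manual runs takes only a tally. Each result pairs "should this query trigger the skill?" with "did it actually trigger?":

```python
# Manual-testing tally: compare intended vs. actual activations.
def activation_rate(results):
    """results: list of (should_trigger, did_trigger) pairs."""
    correct = sum(1 for want, got in results if want == got)
    return correct / len(results)
```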

Level 2: Scripted Testing

Automate test cases for repeatable validation. Write test prompts in a file, run them sequentially, check outputs against expected patterns.
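A minimal harness for this level pairs each prompt with a regex the output should match. `run_skill` is a hypothetical callable standing in for however your setup invokes Claude with the skill loaded.

```python
import re

# Sketch of a Level 2 harness: run each prompt, check the output against
# an expected pattern, and report the prompts that failed.
def run_suite(cases, run_skill):
    failures = []
    for prompt, expected_pattern in cases:
        output = run_skill(prompt)
        if not re.search(expected_pattern, output):
            failures.append(prompt)
    return failures
```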

Level 3: Programmatic Testing (API)

Build evaluation suites using the /v1/skills endpoint and Messages API with container.skills parameter.
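A hedged sketch of the request body such a suite would send: the `container.skills` parameter comes from the text above, but the exact skill-entry fields (`type`, `skill_id`) and the model name are assumptions to verify against the current Messages API reference before use.

```python
# Assumed request shape for skill evaluation via the Messages API.
# Field names inside container.skills are illustrative; check the API docs.
def build_eval_request(skill_id, prompt, model="claude-sonnet-4-5"):
    return {
        "model": model,
        "max_tokens": 1024,
        "container": {"skills": [{"type": "custom", "skill_id": skill_id}]},
        "messages": [{"role": "user", "content": prompt}],
    }
```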

Iteration Signals

| Signal | Symptom | Fix |
| --- | --- | --- |
| Undertriggering | Users manually invoke it, support questions | Add more trigger phrases to the description |
| Overtriggering | Loads on unrelated tasks | Add “Do NOT use for…” negative triggers |
| Wrong output | Inconsistent results, user corrections | Improve instructions, add examples |

Skill Creator Tools

Since March 2026, Anthropic has provided Skill Creator tools for measuring skill behavior over time: tracking activation rates, output quality, and user satisfaction metrics.


Distribution

Individual Users

  1. Download the skill folder
  2. Place in .claude/skills/ for Claude Code
  3. Or upload via Claude.ai > Settings > Capabilities > Skills

Organization-Wide

Admins deploy skills workspace-wide (shipped January 2026). Automatic updates, centralized management.

API / Programmatic

  • /v1/skills endpoint for listing and managing
  • container.skills parameter in Messages API
  • Works with Claude Agent SDK

Host skills on GitHub with a clear README and installation instructions:

# Installing the [Your Service] Skill

1. Clone: `git clone https://github.com/yourcompany/skills`
2. Copy to `.claude/skills/` for Claude Code
3. Test: Ask Claude "[trigger phrase]"

Positioning

Focus on outcomes, not implementation:

Good: "Set up complete project workspaces in seconds instead of 30 minutes"
Bad:  "A folder containing YAML frontmatter and Markdown instructions"
