
Claude Agent SDK

by Anthropic · Updated Jun 15, 2025

Anthropic's production-grade agent runtime with deep MCP integration, computer use capabilities, and a developer-first design philosophy. It provides an agentic loop that automatically handles tool calls, supports multi-turn conversations, and includes built-in guardrails for safe agent behavior.

Architecture Overview

The SDK wraps the Anthropic Messages API in an agentic loop that iterates until the model produces a stop response. Each iteration sends the conversation history (including tool results) back to the model, which decides whether to call more tools or return a final answer. MCP servers are integrated as first-class tool providers via a client-server transport layer.
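
The control flow described above can be sketched independently of the API client. The stub below is purely illustrative: fake_model stands in for client.messages.create, and the message shapes are simplified (the real API returns tool results inside user messages as tool_result blocks, not via a "tool" role):

```python
from dataclasses import dataclass, field

@dataclass
class FakeResponse:
    stop_reason: str
    content: list = field(default_factory=list)

def fake_model(messages):
    """Stub model: requests a tool on the first turn, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return FakeResponse("tool_use", [{"name": "get_time", "input": {}}])
    return FakeResponse("end_turn", ["It is noon."])

def agent_loop(user_message, tool_impls):
    messages = [{"role": "user", "content": user_message}]
    while True:
        response = fake_model(messages)
        # Stop iterating once the model produces a final answer
        if response.stop_reason == "end_turn":
            return response.content[0]
        # Otherwise run each requested tool and feed the result back
        for call in response.content:
            result = tool_impls[call["name"]](**call["input"])
            messages.append({"role": "tool", "content": result})

print(agent_loop("What time is it?", {"get_time": lambda: "12:00"}))
# → It is noon.
```

The key property the sketch shows is that termination is decided by the model (via stop_reason), not by the caller.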

When to Use Claude Agent SDK

  • Production agent systems with tool use
  • Computer use automation (browser, desktop)
  • MCP-native applications and integrations
  • Multi-turn conversational assistants
  • Code generation and analysis pipelines

Strengths & Weaknesses

Strengths

  • Deep MCP integration as a first-class feature
  • Computer use support for GUI automation
  • Strong safety features and content filtering
  • Clean, idiomatic APIs in both Python and TypeScript
  • Built-in support for streaming responses
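
Streamed output arrives as a sequence of typed events rather than one response object. The event names below mirror Anthropic's streaming event types; the accumulation logic is a simplified sketch over a simulated stream:

```python
def collect_text(events):
    """Concatenate text deltas from a stream of (event_type, payload) pairs."""
    parts = []
    for event_type, payload in events:
        if event_type == "content_block_delta":
            parts.append(payload)
    return "".join(parts)

# Simulated event stream (in practice these arrive incrementally over SSE)
stream = [
    ("message_start", None),
    ("content_block_delta", "Hello, "),
    ("content_block_delta", "world."),
    ("message_stop", None),
]
print(collect_text(stream))  # → Hello, world.
```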

Weaknesses

  • Primarily optimized for Anthropic Claude models
  • Newer ecosystem compared to LangChain/LlamaIndex
  • Community tooling still catching up to more established frameworks

Quick Start

python
import anthropic

client = anthropic.Anthropic()

# Define tools for the agent
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"],
        },
    }
]

def execute_tool(name: str, tool_input: dict) -> str:
    # Dispatch to your real tool implementations here (stub result shown)
    if name == "get_weather":
        return f"Sunny, 18°C in {tool_input['location']}"
    raise ValueError(f"Unknown tool: {name}")

def run_agent(user_message: str):
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )

        # If the model stops, return the final text
        if response.stop_reason == "end_turn":
            return "".join(
                block.text for block in response.content if block.type == "text"
            )

        # Echo the assistant turn (including its tool_use blocks) back once,
        # then answer every tool call in a single user message
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": execute_tool(block.name, block.input),
                })
        messages.append({"role": "user", "content": tool_results})

result = run_agent("What's the weather in San Francisco?")
print(result)
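
Before dispatching a tool call, it can help to check the model-supplied input against the declared input_schema locally. The helper below is a minimal sketch that covers only required fields (not full JSON Schema validation) and would sit in front of execute_tool:

```python
def validate_tool_input(schema: dict, payload: dict) -> dict:
    """Reject payloads missing fields the schema marks as required."""
    missing = [key for key in schema.get("required", []) if key not in payload]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return payload

weather_schema = {
    "type": "object",
    "properties": {"location": {"type": "string", "description": "City name"}},
    "required": ["location"],
}

validate_tool_input(weather_schema, {"location": "San Francisco"})  # passes
```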

Features at a Glance

Developer: Anthropic
Language: Python, TypeScript
License: MIT
GitHub Stars: 10k+
MCP Support: Yes
Multi-Agent: Yes

Notable Users

Anthropic, Notion, DuckDuckGo, Sourcegraph
