Beginner · 5 min read · Guide 4 of 12 · Updated Jun 15, 2025

Your First Agent in 5 Minutes

Build a working AI agent from scratch in under 5 minutes using the OpenAI Agents SDK or Anthropic SDK.

Prerequisites

  • Python 3.10+ installed
  • An OpenAI or Anthropic API key
  • pip package manager

What you will learn

  • How to go from zero to a running agent in under 5 minutes
  • The minimal code needed for a tool-using agent
  • How to observe the agent reasoning and acting

Quick Setup

Open your terminal and run these three commands:

# Install the SDK
pip install openai-agents

# Set your key
export OPENAI_API_KEY="sk-..."

# Create your project file
touch quick_agent.py

That is all the setup you need. Now let us write the agent.
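Before writing any agent code, you can sanity-check your environment with a short script. This is a hedged sketch: it only checks the Python version and the environment variable set above, nothing SDK-specific.

```python
import os
import sys

def check_env() -> list:
    """Return a list of setup problems; empty means everything looks fine."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python 3.10+ required")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems

if __name__ == "__main__":
    # Print each problem, or a success message if there are none
    for line in check_env() or ["Environment looks good"]:
        print(line)
```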

The 20-Line Agent

Copy this into quick_agent.py:

from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # In production, call a real weather API
    return f"It is 22C and sunny in {city}."

@function_tool
def get_time(timezone: str) -> str:
    """Get the current time in a timezone (e.g. 'Asia/Tokyo', 'US/Eastern')."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    try:
        now = datetime.now(ZoneInfo(timezone))
        return now.strftime("%Y-%m-%d %H:%M:%S %Z")
    except KeyError:
        return f"Unknown timezone: {timezone}. Use format like 'Asia/Tokyo'."

agent = Agent(
    name="TravelHelper",
    instructions="You help users plan travel. Use tools to get weather and time info.",
    tools=[get_weather, get_time],
)

result = Runner.run_sync(agent, "I am flying to Tokyo tomorrow. What is the weather and time there?")
print(result.final_output)

Run It

Execute your agent:

python quick_agent.py

You will see output like:

Based on the current information:
- Weather in Tokyo: It is 22C and sunny
- Current time in Tokyo: 2025-06-15 08:30:00 JST

Tomorrow looks like a great day to fly to Tokyo! The weather
is pleasant and sunny. Remember to adjust for the time
difference from your current timezone.

The agent called both tools, combined the results, and gave a coherent answer. All in 20 lines of code.

What Just Happened?

Let us break down the execution step by step:

  1. Agent receives your message — "I am flying to Tokyo tomorrow..."
  2. Model decides to use tools — The LLM reads the system prompt and available tools, then decides it needs weather and time data.
  3. Tool calls execute — The SDK calls get_weather("Tokyo") and get_time("Asia/Tokyo") and feeds results back to the model.
  4. Model generates final response — With the tool results in context, the LLM writes a helpful answer.
  5. Runner returns — result.final_output contains the final text.

This is the ReAct pattern (Reason + Act) in action. The model reasons about what tools to call, acts by calling them, observes the results, and then reasons again to produce the final output.
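The loop at the heart of both SDKs can be sketched in a few lines of plain Python. Everything here is illustrative, not part of either SDK: the model is faked so the sketch runs offline, and the tool registry is just a dict.

```python
def fake_model(messages, tools):
    """Stand-in for the LLM: requests one tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("get_weather", {"city": "Tokyo"})}
    return {"final": "It is 22C and sunny in Tokyo - great flying weather."}

def run_agent(user_message, tools):
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = fake_model(messages, tools)
        if "final" in reply:                 # Reason: model decides it is done
            return reply["final"]
        name, args = reply["tool_call"]      # Act: execute the requested tool
        result = tools[name](**args)
        messages.append({"role": "tool", "content": result})  # Observe

tools = {"get_weather": lambda city: f"It is 22C and sunny in {city}."}
print(run_agent("Weather in Tokyo?", tools))
```

The real SDKs do the same thing with a real model call in place of fake_model, plus retries, streaming, and schema validation around the edges.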

Anthropic Alternative

Prefer Claude? Here is the same agent with the Anthropic SDK:

import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The city name"}
            },
            "required": ["city"]
        }
    }
]

def process_tool_call(tool_name, tool_input):
    if tool_name == "get_weather":
        return f"It is 22C and sunny in {tool_input['city']}."
    return f"Unknown tool: {tool_name}"

messages = [{"role": "user", "content": "What is the weather in Tokyo?"}]

# Agent loop
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a helpful travel assistant.",
    tools=tools,
    messages=messages,
)

# Handle tool use
while response.stop_reason == "tool_use":
    tool_block = next(b for b in response.content if b.type == "tool_use")
    tool_result = process_tool_call(tool_block.name, tool_block.input)

    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{"type": "tool_result", "tool_use_id": tool_block.id, "content": tool_result}]
    })

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a helpful travel assistant.",
        tools=tools,
        messages=messages,
    )

print(response.content[0].text)

This approach gives you full control over the agent loop. You can see exactly when and why the model calls each tool.

Common Mistakes to Avoid

  • Not installing the package in a virtual environment, leading to dependency conflicts
  • Hardcoding API keys in source files instead of using environment variables
  • Forgetting to handle the case where a tool call fails or returns an error
  • Using synchronous Runner.run_sync in an async context — use Runner.run instead
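For the tool-failure point, one common pattern is to catch exceptions inside the tool and return the error as text, so the model can apologize or retry instead of the whole run crashing. This is a hedged sketch: safe_tool and the simulated outage are illustrative, not part of either SDK.

```python
def safe_tool(fn):
    """Wrap a tool so failures become text the model can see, not crashes."""
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            return f"Tool error: {exc}"
    return wrapper

@safe_tool
def get_weather(city: str) -> str:
    # Simulate an API outage instead of calling a real weather service
    raise ConnectionError("weather service unreachable")

print(get_weather("Tokyo"))  # prints "Tool error: weather service unreachable"
```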
