Building MCP Servers
Create Model Context Protocol servers that expose tools and resources to Claude and other MCP-compatible clients.
Prerequisites
1. Completed the Getting Started guide
2. Python 3.10+ or Node.js 18+
3. Basic understanding of client-server architecture
What you will learn
- What the Model Context Protocol (MCP) is and why it matters
- How to set up an MCP server project from scratch
- How to define tools that LLMs can call
- How to expose resources for context injection
- How to test your server locally and connect it to Claude Desktop
What Is MCP?
The Model Context Protocol (MCP) is an open standard created by Anthropic that defines how LLM applications connect to external data sources and tools. Think of it as a USB-C port for AI — a universal interface that any model can use to interact with any tool.
Before MCP, every framework had its own tool definition format. With MCP, you write your tool server once and it works with Claude Desktop, VS Code Copilot, Cursor, and any other MCP-compatible client.
An MCP server exposes three types of capabilities:
- Tools — Functions the LLM can call (e.g., search a database, create a file)
- Resources — Data the LLM can read for context (e.g., documentation, configuration)
- Prompts — Pre-defined prompt templates the user can invoke
Project Setup (Python)
Let us build an MCP server that provides tools for managing a task list. Start by setting up the project:
# Create project directory
mkdir mcp-task-server && cd mcp-task-server
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install the MCP SDK (the [cli] extra adds the `mcp` command used for local testing)
pip install "mcp[cli]"
# Create the server file
touch server.py
Alternatively, for TypeScript:
mkdir mcp-task-server && cd mcp-task-server
npm init -y
npm install @modelcontextprotocol/sdk
touch server.ts
Defining Tools
Tools are functions that the LLM can invoke. Each tool has a name, description, and input schema. Here is our task management server with three tools:
from mcp.server.fastmcp import FastMCP

# Create the server
mcp = FastMCP("Task Manager")

# In-memory task store
tasks: dict[str, dict] = {}
task_counter = 0

@mcp.tool()
def add_task(title: str, description: str = "") -> str:
    """Add a new task to the task list."""
    global task_counter
    task_counter += 1
    task_id = f"task-{task_counter}"
    tasks[task_id] = {
        "id": task_id,
        "title": title,
        "description": description,
        "status": "pending",
    }
    return f"Created task {task_id}: {title}"

@mcp.tool()
def list_tasks() -> str:
    """List all tasks with their current status."""
    if not tasks:
        return "No tasks found."
    lines = []
    for t in tasks.values():
        lines.append(f"[{t['status']}] {t['id']}: {t['title']}")
    return "\n".join(lines)

@mcp.tool()
def complete_task(task_id: str) -> str:
    """Mark a task as completed."""
    if task_id not in tasks:
        return f"Task {task_id} not found."
    tasks[task_id]["status"] = "completed"
    return f"Task {task_id} marked as completed."
The @mcp.tool() decorator automatically generates the JSON schema from your function's type hints and docstring. No manual schema definition needed.
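To make that concrete, here is a simplified, illustrative sketch of how type hints and a docstring can be turned into a tool schema. This is not FastMCP's actual implementation, just the idea behind it:

```python
import inspect
from typing import get_type_hints

# Rough mapping from Python annotations to JSON Schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a minimal JSON-schema-style description from a function."""
    hints = get_type_hints(fn)
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "inputSchema": {"type": "object", "properties": props, "required": required},
    }

def add_task(title: str, description: str = "") -> str:
    """Add a new task to the task list."""
    ...

print(tool_schema(add_task)["inputSchema"]["required"])  # → ['title']
```

Because `description` has a default value, only `title` ends up in the `required` list, which is exactly the signal the LLM needs when deciding which arguments it must supply.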
Defining Resources
Resources provide contextual data that the LLM can read. They are identified by URIs:
@mcp.resource("tasks://summary")
def get_task_summary() -> str:
    """Get a summary of all tasks."""
    total = len(tasks)
    completed = sum(1 for t in tasks.values() if t["status"] == "completed")
    pending = total - completed
    return f"Task Summary: {total} total, {completed} completed, {pending} pending"

@mcp.resource("tasks://details/{task_id}")
def get_task_details(task_id: str) -> str:
    """Get detailed information about a specific task."""
    if task_id not in tasks:
        return f"Task {task_id} not found."
    t = tasks[task_id]
    return (
        f"ID: {t['id']}\n"
        f"Title: {t['title']}\n"
        f"Description: {t['description']}\n"
        f"Status: {t['status']}"
    )
Resources are read-only. They give the LLM context about the current state of your system without requiring a tool call; the client can fetch them proactively and supply them to the model as context.
Running and Testing Locally
Add the entry point at the bottom of server.py:
if __name__ == "__main__":
    mcp.run(transport="stdio")
Test it locally with the MCP Inspector, launched via the SDK's dev command:
# Run the MCP Inspector
mcp dev server.py
This opens a web UI where you can see all your tools and resources, invoke them interactively, and inspect the JSON-RPC messages being exchanged.
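For orientation when reading those messages: a `tools/call` exchange for `add_task` looks roughly like this on the wire (JSON-RPC 2.0; the `id` and argument values below are illustrative). The request from the client:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "add_task",
    "arguments": { "title": "Write docs", "description": "Draft the README" }
  }
}
```

and the server's response, wrapping the tool's return value as text content:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{ "type": "text", "text": "Created task task-1: Write docs" }],
    "isError": false
  }
}
```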
The SDK's CLI can also register the server with Claude Desktop for you:
# Install the server into Claude Desktop
mcp install server.py
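Because FastMCP tools are plain Python functions underneath the decorator, the task logic can also be exercised directly with ordinary Python, no client required. A minimal standalone sketch (decorators omitted so it runs outside the MCP runtime):

```python
# Standalone copy of the task-store logic from server.py, with the
# @mcp.tool() decorators omitted so it runs without the MCP runtime.
tasks: dict[str, dict] = {}
task_counter = 0

def add_task(title: str, description: str = "") -> str:
    """Add a new task to the task list."""
    global task_counter
    task_counter += 1
    task_id = f"task-{task_counter}"
    tasks[task_id] = {"id": task_id, "title": title,
                      "description": description, "status": "pending"}
    return f"Created task {task_id}: {title}"

def complete_task(task_id: str) -> str:
    """Mark a task as completed."""
    if task_id not in tasks:
        return f"Task {task_id} not found."
    tasks[task_id]["status"] = "completed"
    return f"Task {task_id} marked as completed."

print(add_task("Write docs"))      # → Created task task-1: Write docs
print(complete_task("task-1"))     # → Task task-1 marked as completed.
print(complete_task("task-99"))    # → Task task-99 not found.
```

Testing the functions this way catches logic bugs before any JSON-RPC plumbing is involved.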
Connecting to Claude Desktop
To use your MCP server with Claude Desktop, add it to your Claude Desktop configuration file:
// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
// %APPDATA%\Claude\claude_desktop_config.json (Windows)
{
"mcpServers": {
"task-manager": {
"command": "python",
"args": ["/absolute/path/to/server.py"]
}
}
}
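Note that `"command": "python"` must resolve to an interpreter that actually has the `mcp` package installed. Pointing at the virtual environment's interpreter avoids a common "module not found" failure on launch (paths below are illustrative):

```json
{
  "mcpServers": {
    "task-manager": {
      "command": "/absolute/path/to/mcp-task-server/venv/bin/python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```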
Restart Claude Desktop and you will see the task manager tools available in the tools menu. Claude can now add, list, and complete tasks using your server.
For TypeScript MCP servers, the configuration looks like:
{
"mcpServers": {
"task-manager": {
"command": "npx",
"args": ["tsx", "/absolute/path/to/server.ts"]
}
}
}
Common Mistakes to Avoid
- Using relative paths in the Claude Desktop config — always use absolute paths
- Forgetting to include descriptive docstrings on tools — the LLM uses these to decide when to call your tool
- Not validating tool inputs — always check for missing or invalid parameters
- Making tools too broad — a tool that does 10 things is harder for the LLM to use correctly than 10 focused tools
- Not testing with the MCP Inspector before connecting to a client
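The input-validation point can be sketched as a small guard at the top of a tool body. The regex and messages here are illustrative, and the function is shown undecorated so it runs standalone; in server.py it would carry `@mcp.tool()`:

```python
import re

# Illustrative guard: only accept ids of the form "task-<number>".
TASK_ID_RE = re.compile(r"^task-\d+$")
tasks: dict[str, dict] = {"task-1": {"id": "task-1", "status": "pending"}}

def complete_task(task_id: str) -> str:
    """Mark a task as completed, rejecting malformed ids up front."""
    if not isinstance(task_id, str) or not TASK_ID_RE.match(task_id):
        # Return an actionable message instead of raising: the model can
        # read it and retry with a corrected argument.
        return f"Invalid task id {task_id!r}: expected the form 'task-<n>'."
    if task_id not in tasks:
        return f"Task {task_id} not found."
    tasks[task_id]["status"] = "completed"
    return f"Task {task_id} marked as completed."

print(complete_task("oops"))  # → Invalid task id 'oops': expected the form 'task-<n>'.
```

Returning an error string (rather than raising) keeps the failure inside the tool-result channel, where the LLM can read it and self-correct.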
Recommended Next Steps
- Model Context Protocol. The open standard that lets LLM applications seamlessly connect to any external data source or tool.
- Tool-Augmented Generation (pattern). Agents iteratively use tools based on reasoning to augment their generation capabilities.