Agent Teams
Role-based multi-agent collaboration where each agent has a defined expertise, persona, and goal — working together through structured communication patterns to deliver complex outputs.
function AgentTeam(task, team_config):
    // Initialize agents with roles
    agents = {}
    for role in team_config.roles:
        agents[role.name] = Agent(
            system_prompt = role.prompt,
            tools = role.tools,
            model = role.model
        )

    // Execute based on team process type
    shared_memory = SharedMemory()
    if team_config.process == "sequential":
        result = task
        for role in team_config.execution_order:
            result = agents[role].execute(result, shared_memory)
            shared_memory.update(role, result)
        return result
    elif team_config.process == "iterative":
        draft = agents["creator"].execute(task)
        for round in 1..max_rounds:
            feedback = agents["critic"].review(draft)
            if feedback.approved:
                return draft
            draft = agents["creator"].revise(draft, feedback)
        return draft

When to Use
- Complex tasks requiring multiple distinct areas of expertise (research + writing + review)
- Content creation pipelines where quality depends on specialized editing and review
- Business processes that mirror human team workflows
- When you want to leverage different models for different roles (fast model for research, capable model for writing)
- Tasks where iterative refinement through creator-critic loops significantly improves quality
When NOT to Use
- Simple tasks that a single well-prompted agent can handle
- When the task doesn't naturally decompose into distinct roles
- Real-time applications where multi-agent overhead is unacceptable
- When you cannot clearly define each agent's role, goals, and success criteria
- Budget-constrained scenarios — agent teams multiply token usage
Role-Based Agent Design
The Agent Teams pattern structures multi-agent systems around well-defined roles — each agent has a specific job title, area of expertise, set of tools, and behavioral guidelines. Rather than generic "Agent 1" and "Agent 2," you have a Researcher, a Writer, a Reviewer, and an Editor, each configured to excel at their specific function.
This pattern is directly inspired by human team dynamics. A content creation team has a researcher who gathers information, a writer who drafts the content, a reviewer who provides feedback, and an editor who polishes the final output. Each person's expertise and perspective are different, and the quality of the output depends on the composition and coordination of the team.
The key elements of role-based design are:
- Role definition: Each agent has a system prompt that establishes its expertise, perspective, communication style, and goals. A Researcher is instructed to be thorough, cite sources, and flag uncertainty. A Reviewer is instructed to be critical, identify gaps, and suggest improvements. These role definitions are the most important design decision in the pattern.
- Specialized toolsets: Each role gets access to tools relevant to its function. The Researcher gets search and retrieval tools. The Writer gets formatting and generation tools. The Analyst gets data processing and visualization tools. Restricting tools by role prevents confusion and reduces errors.
- Goal alignment: Each agent has a clear objective that contributes to the team's overall goal. Individual goals should be complementary, not competing. The Writer's goal is to produce clear prose; the Reviewer's goal is to ensure accuracy and completeness. These goals create a productive tension that improves the final output.
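The three elements above can be captured as plain configuration objects before any framework enters the picture. A minimal sketch; the `RoleConfig` dataclass and the example role names, prompts, and model labels are illustrative, not taken from any specific library:

```python
from dataclasses import dataclass, field

@dataclass
class RoleConfig:
    """Declarative definition of one agent role in the team."""
    name: str
    system_prompt: str                       # expertise, perspective, goals
    tools: list[str] = field(default_factory=list)  # restricted by role
    model: str = "default"                   # fast vs. capable model per role

# Complementary, non-competing goals are encoded in the prompts
researcher = RoleConfig(
    name="researcher",
    system_prompt="Be thorough, cite sources, and flag uncertainty.",
    tools=["search", "retrieval"],
    model="fast-model",
)
reviewer = RoleConfig(
    name="reviewer",
    system_prompt="Be critical, identify gaps, and suggest improvements.",
    tools=[],                                # no tools: pure evaluation role
    model="capable-model",
)
```

Keeping role definitions as data like this makes it easy to swap models per role or audit which tools each role can reach.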
Team Composition Strategies
How you compose your team — which roles to include, how many agents, and what the interaction structure looks like — depends on the task:
Sequential pipeline teams: Agents work in a fixed order, each building on the previous agent's output. Researcher produces findings, Writer drafts based on findings, Reviewer critiques the draft, Writer revises based on feedback. This is predictable and easy to debug but doesn't allow for parallel work.
Parallel specialist teams: Multiple agents work simultaneously on different aspects of the same task, and a coordinator merges their outputs. For market analysis: an Industry Analyst, a Financial Analyst, and a Competitive Analyst work in parallel, and a Synthesizer combines their findings. This is faster but requires careful integration.
Iterative refinement teams: Agents work in loops — a Creator produces output, a Critic evaluates it, the Creator revises based on criticism, and the cycle repeats until quality thresholds are met. This produces the highest quality output but takes the most time and tokens.
Adversarial teams: Two sub-teams argue opposing positions (bull vs. bear, prosecution vs. defense), and a Judge evaluates their arguments. This is a specialized form of peer collaboration designed to stress-test ideas and produce well-balanced analysis.
Team size matters. Research in organizational psychology suggests that team effectiveness peaks at 4-6 members for most tasks. Beyond that, coordination overhead outweighs the benefits of additional perspectives. The same principle applies to agent teams — start with the minimum viable team and add roles only when you can demonstrate they improve output quality.
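A parallel specialist team can be sketched with stub agents and a thread pool standing in for concurrent LLM calls. All names here are illustrative; the stub functions would be real agent invocations in practice:

```python
from concurrent.futures import ThreadPoolExecutor

def industry_analyst(task: str) -> str:
    return f"[industry view of {task}]"

def financial_analyst(task: str) -> str:
    return f"[financial view of {task}]"

def competitive_analyst(task: str) -> str:
    return f"[competitive view of {task}]"

def synthesizer(findings: list[str]) -> str:
    # Coordinator merges the specialists' outputs into one report
    return "\n".join(findings)

def parallel_team(task: str) -> str:
    specialists = [industry_analyst, financial_analyst, competitive_analyst]
    # Specialists run concurrently; results come back in a fixed order,
    # which keeps the synthesis step deterministic
    with ThreadPoolExecutor(max_workers=len(specialists)) as pool:
        findings = list(pool.map(lambda agent: agent(task), specialists))
    return synthesizer(findings)
```

The integration risk mentioned above lives entirely in `synthesizer`: with real agents it would itself be an LLM call that reconciles overlapping or conflicting findings.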
Handoff Patterns
How work transfers between agents — the handoff — is a critical design point. A well-designed handoff ensures the receiving agent has all the context it needs; a poor handoff leads to information loss and rework.
Explicit handoff with summary: When one agent completes its work and passes control to the next, it provides a structured summary: what was accomplished, key findings, outstanding questions, and specific instructions for the next agent. This is the most reliable pattern and reduces the receiving agent's need to re-read earlier context.
Artifact-based handoff: Instead of passing messages, agents pass structured artifacts — a research brief, a draft document, a data table, a code file. The artifact is the interface between agents. Each role produces a well-defined artifact type and consumes artifacts from upstream roles. This pattern is especially clean for pipeline teams.
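An explicit handoff summary can itself be treated as a structured artifact, so nothing is left implicit in the transfer. The schema below is one possible illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class HandoffSummary:
    """Structured artifact passed from one agent to the next."""
    from_role: str
    to_role: str
    accomplished: str                        # what was done
    key_findings: list[str]
    open_questions: list[str] = field(default_factory=list)
    instructions: str = ""                   # guidance for the receiver

    def as_prompt(self) -> str:
        """Render the handoff as context for the receiving agent."""
        findings = "\n".join(f"- {f}" for f in self.key_findings)
        questions = "\n".join(f"- {q}" for q in self.open_questions)
        return (
            f"Handoff from {self.from_role} to {self.to_role}:\n"
            f"Accomplished: {self.accomplished}\n"
            f"Key findings:\n{findings}\n"
            f"Open questions:\n{questions}\n"
            f"Instructions: {self.instructions}"
        )
```

Because the receiving agent sees a rendered summary rather than the full upstream transcript, it also keeps context windows small.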
Framework-native handoffs: The OpenAI Agents SDK provides a built-in handoff() mechanism where an agent can transfer control to another agent, optionally passing context and instructions. CrewAI's delegation feature allows agents to ask other crew members for help on specific sub-problems. These framework-native mechanisms handle the plumbing of context transfer and control flow.
Conditional handoffs: The decision of which agent to hand off to depends on the current state. After analysis, if the findings are quantitative, hand off to the Data Visualization agent; if qualitative, hand off to the Writer. This routing logic can be implemented with LLM-based decisions, rule-based branching, or a combination.
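The rule-based variant of this routing can be a plain function over the current state; the role names and the `finding_type` field are hypothetical:

```python
def route_handoff(analysis: dict) -> str:
    """Pick the next agent based on the current state.

    Pure rule-based branching; an LLM-based classifier could
    replace or back up these rules for ambiguous cases.
    """
    if analysis.get("finding_type") == "quantitative":
        return "data_visualization"
    if analysis.get("finding_type") == "qualitative":
        return "writer"
    return "coordinator"  # fall back when the state is ambiguous
```

In graph-based frameworks this function is exactly what a conditional edge evaluates to decide the next node.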
Framework Implementations
Agent teams are the primary abstraction in several popular frameworks:
CrewAI is designed specifically for this pattern. You define agents with role, goal, and backstory attributes, organize them into a Crew with defined tasks, and specify the process type (sequential or hierarchical). CrewAI handles the communication between agents, task assignment, and result collection. Its high-level API makes it the fastest path to a working agent team but offers less control over the interaction dynamics.
Microsoft AutoGen / AG2 models agent teams as multi-agent conversations. You define AssistantAgent and UserProxyAgent instances, configure their system prompts and capabilities, and initiate a group chat. The framework manages turn-taking and message routing. AutoGen's GroupChat with a GroupChatManager is particularly effective for debate and collaborative problem-solving scenarios.
LangGraph gives you lower-level control. You model each agent as a node in a state graph, define edges for handoffs, and implement custom routing logic. This requires more code but allows you to design exactly the interaction pattern you need — including hybrid approaches that combine elements of pipeline, parallel, and iterative team structures.
OpenAI Agents SDK supports agent teams through its handoff and transfer mechanisms. You define multiple agents with different instructions and tools, and agents can hand off to each other based on the conversation state. The guardrails system lets you add quality checks at each handoff point.