LangChain
The foundational framework for building LLM-powered applications with a massive integration ecosystem. LangChain provides composable abstractions for prompt templates, output parsers, chains, retrievers, and tool integration. While LangGraph is now the recommended path for agents, LangChain remains the backbone for model integrations, tool definitions, and the broader ecosystem.
Architecture Overview
LangChain uses a modular architecture built on core abstractions: Models (chat models, LLMs, embeddings), Prompts (templates, few-shot selectors), Output Parsers (JSON, Pydantic, etc.), Retrievers (vector stores, keyword search), and LCEL (the LangChain Expression Language) for composing components into chains. The Runnable protocol provides a unified interface for all components, with built-in streaming, batching, and async support.
When to Use LangChain
- LLM-powered applications with rich integrations
- Chain-based workflows combining multiple LLM calls
- Tool and API integration via a standardized interface
- RAG applications with diverse retriever backends
- Prototyping with rapid model and tool swapping
Strengths & Weaknesses
Strengths
- Largest ecosystem with 700+ integrations
- Comprehensive model provider support (OpenAI, Anthropic, Google, etc.)
- Well-documented with extensive tutorials and cookbooks
- LCEL provides composable, streaming-first chain building
- Both Python and TypeScript implementations
Weaknesses
- Abstraction overhead can make debugging difficult
- Rapid API changes across versions cause migration pain
- For agent use cases, LangGraph is now the recommended approach
- Large dependency footprint for the full package
Quick Start
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Create a simple chain with LCEL
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{input}"),
])
model = ChatOpenAI(model="gpt-4o")
output_parser = StrOutputParser()

# Compose with the LCEL pipe operator
chain = prompt | model | output_parser

result = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "input": "Hello, how are you?",
})
print(result)
```
Features at a Glance
| Feature | Detail |
| --- | --- |
| Developer | LangChain Inc. |
| Language | Python, TypeScript |
| License | MIT |
| GitHub Stars | 100k+ |
| MCP Support | Yes |
| Multi-Agent | No |
Explore Related Content
Tool Use & Function Calling
How agents interact with external tools, APIs, and services to take action in the real world.
Guide: Choosing Your Stack
Pick the right framework and tools for your specific use case with a clear decision matrix.
Pattern: ReAct Pattern
Reasoning and Acting — the agent thinks step-by-step, then acts on its reasoning in iterative loops.