Multi-Agent AI with MCP: How to Use CrewAI and LangChain with MCP Servers
Nikhil Tiwari
TL;DR
- MCP + agent frameworks = agents that can use any MCP server as tools
- CrewAI has native MCP support — add servers directly to agents with `mcps=[...]`
- LangChain uses `langchain-mcp-adapters` to convert MCP tools for LangGraph agents
- Both support stdio, HTTP, and SSE transports for connecting to MCP servers
MCP gives AI agents access to tools through a standard protocol. But if you want multiple agents collaborating — a researcher agent, a coder agent, a reviewer agent — you need an orchestration framework. That's where CrewAI and LangChain/LangGraph come in.
This guide shows how to wire MCP servers into both frameworks with working code.
Why Combine MCP with Agent Frameworks?
| MCP alone | MCP + agent framework |
|---|---|
| One agent, one or more tools | Multiple specialised agents collaborating |
| Simple request → tool → response | Complex workflows with loops, handoffs, memory |
| Tools defined by MCP servers | Tools from MCP + custom functions + other sources |
| Works with Claude, Cursor, ChatGPT natively | Works in your own backend code |
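The "simple request → tool → response" loop in the left column is just a JSON-RPC exchange under the hood. A sketch of what those wire messages look like (the `search` tool and its arguments are invented for illustration; the `tools/call` method name comes from the MCP specification):

```python
import json

# A hypothetical tools/call request an MCP host sends to a server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "AI agent frameworks"},
    },
}

# The matching response: tool output comes back as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Top result: ..."}]},
}

print(json.dumps(request, indent=2))
```

An agent framework sits on top of exactly this exchange: it decides which agent issues the request, and what happens to the response next.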
Option 1: CrewAI + MCP
CrewAI is a Python framework for orchestrating teams of AI agents. It has native MCP support — you can add MCP servers directly to agents.
Install
```bash
pip install crewai 'crewai-tools[mcp]'
```
Quick Setup: String-Based MCP References
The simplest way — pass server URLs directly:
```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Find and analyse information from the web",
    mcps=[
        "https://mcp.exa.ai/mcp?api_key=YOUR_KEY",  # Web search
    ],
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, accurate content based on research",
)

research_task = Task(
    description="Research the latest trends in AI agent frameworks",
    agent=researcher,
    expected_output="A summary of key trends with sources",
)

writing_task = Task(
    description="Write a blog post based on the research",
    agent=writer,
    expected_output="A 500-word blog post",
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
)

result = crew.kickoff()
print(result)
```
Structured Setup: Multiple Transports
For more control, use transport-specific classes:
```python
from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP

# Local stdio server (runs as a subprocess)
filesystem = MCPServerStdio(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/workspace"],
)

# Remote HTTP server
search = MCPServerHTTP(
    url="https://mcp.exa.ai/mcp",
    headers={"Authorization": "Bearer YOUR_KEY"},
)

agent = Agent(
    role="Full-Stack Assistant",
    goal="Help with file operations and web research",
    mcps=[filesystem, search],
)
```
Multiple Servers with MCPServerAdapter
To connect several MCP servers and aggregate their tools:
```python
from crewai import Agent
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters

server_params_list = [
    {"url": "http://localhost:8001/mcp", "transport": "streamable-http"},
    StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    ),
]

with MCPServerAdapter(server_params_list) as tools:
    agent = Agent(
        role="Multi-Tool Agent",
        goal="Use all available tools to complete tasks",
        tools=tools,  # All MCP tools aggregated
    )
```
Option 2: LangChain/LangGraph + MCP
LangChain's official langchain-mcp-adapters library converts MCP tools into LangChain-compatible tools that work with LangGraph agents.
Install
```bash
pip install langchain-mcp-adapters langchain-openai langgraph
```
Basic Example: MCP Server + LangGraph Agent
First, create an MCP server (e.g. math_server.py):
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
Then connect it to a LangGraph agent:
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4.1")

async def main():
    server = StdioServerParameters(
        command="python",
        args=["math_server.py"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Convert MCP tools to LangChain tools
            tools = await load_mcp_tools(session)

            # Create a LangGraph agent with MCP tools
            agent = create_react_agent(model, tools)
            response = await agent.ainvoke(
                {"messages": [{"role": "user", "content": "What is (3 + 5) x 12?"}]}
            )
            print(response["messages"][-1].content)

asyncio.run(main())
```
Multiple MCP Servers with LangChain
```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4.1")

async def main():
    async with MultiServerMCPClient({
        "math": {
            "command": "python",
            "args": ["math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "streamable_http",
        },
    }) as client:
        tools = client.get_tools()
        agent = create_react_agent(model, tools)
        result = await agent.ainvoke(
            {"messages": [{"role": "user", "content": "Add 5+3, then check weather in London"}]}
        )
        print(result["messages"][-1].content)

asyncio.run(main())
```
CrewAI vs LangChain for MCP: Which to Choose?
| | CrewAI | LangChain / LangGraph |
|---|---|---|
| MCP support | Native (`mcps=[...]` on agents) | Via `langchain-mcp-adapters` |
| Multi-agent | Built-in (Crew, roles, tasks) | Via LangGraph (graph-based workflows) |
| Best for | Team-of-agents with defined roles | Complex workflows, loops, conditional logic |
| Setup | Simpler, more declarative | More flexible, more code |
| Transports | stdio, HTTP, SSE | stdio, HTTP (via adapters) |
Practical Multi-Agent Example
A team of agents that uses MCP servers to research a topic, write code, and review it:
```python
from crewai import Agent, Task, Crew
from crewai.mcp import MCPServerHTTP

# Connect to GitHub MCP for code access
github = MCPServerHTTP(
    url="https://api.githubcopilot.com/mcp/",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
)

researcher = Agent(
    role="Tech Researcher",
    goal="Find relevant code examples and documentation",
    mcps=[github],
)

developer = Agent(
    role="Developer",
    goal="Write clean, tested code based on research",
)

reviewer = Agent(
    role="Code Reviewer",
    goal="Review code for bugs, security issues, and best practices",
    mcps=[github],
)

crew = Crew(
    agents=[researcher, developer, reviewer],
    tasks=[
        Task(
            description="Research MCP server implementations on GitHub",
            agent=researcher,
            expected_output="Summary of patterns found",
        ),
        Task(
            description="Write a Python MCP server based on the research",
            agent=developer,
            expected_output="Complete server.py code",
        ),
        Task(
            description="Review the code and suggest improvements",
            agent=reviewer,
            expected_output="Code review with suggestions",
        ),
    ],
)

result = crew.kickoff()
```
Key Resources
| Resource | Link |
|---|---|
| CrewAI MCP docs | docs.crewai.com/mcp/overview |
| CrewAI multiple servers | docs.crewai.com/mcp/multiple-servers |
| langchain-mcp-adapters | github.com/langchain-ai/langchain-mcp-adapters |
| LangChain MCP docs | docs.langchain.com |
| OpenAI Agents SDK + MCP | openai.github.io/openai-agents-python/mcp |
Test MCP Servers Before Connecting to Agents
Verify your MCP server works correctly with MCP Playground
Related Content
- MCP vs Function Calling vs APIs: When to Use Each
- Build Your First MCP Server with Python and FastMCP
- ChatGPT MCP: OpenAI Agents SDK Setup Guide
- What Is the Model Context Protocol (MCP)?
Frequently Asked Questions

Do I need CrewAI or LangChain to use MCP?
No. MCP works on its own with hosts like Claude, Cursor, and ChatGPT. Reach for an agent framework only when you need multiple agents collaborating, or workflows with loops, handoffs, and memory.

Can I use the same MCP server with both CrewAI and LangChain?
Yes. MCP is a standard protocol, so any compliant server works with both: point a CrewAI agent at it via mcps=[...] and a LangGraph agent at it via langchain-mcp-adapters, with no changes to the server itself.

Which is better for beginners — CrewAI or LangChain?
CrewAI is usually the easier start: it is more declarative, has native MCP support, and maps cleanly onto a team-of-agents mental model. LangGraph offers more control for complex, conditional workflows at the cost of more code.
Written by Nikhil Tiwari
15+ years in product development. AI enthusiast building developer tools that make complex technologies accessible to everyone.