ChatGPT MCP Support: Complete Setup Guide with OpenAI Agents SDK
TL;DR
- OpenAI added MCP support via the OpenAI Agents SDK and ChatGPT Apps
- 4 transport options: Hosted MCP tools, Streamable HTTP, SSE (legacy), stdio
- Hosted MCP tools = easiest: OpenAI calls the MCP server for you (no local process)
- ChatGPT Apps (beta) let workspace admins add MCP servers directly inside ChatGPT
- Install: `pip install openai-agents`
OpenAI has embraced the Model Context Protocol. You can now connect MCP servers to ChatGPT and build agents that use any MCP-compatible tool — from filesystem access to databases to GitHub. This guide covers every way to do it.
Two Ways OpenAI Supports MCP
| Approach | What it is | Who it's for |
|---|---|---|
| OpenAI Agents SDK | Python/JS SDK for building agents that use MCP servers as tools | Developers building custom agents and apps |
| ChatGPT Apps (beta) | Add MCP servers directly inside ChatGPT via workspace settings | Business/Enterprise/Edu teams who want MCP in ChatGPT UI |
Part 1: OpenAI Agents SDK + MCP
The OpenAI Agents SDK (Python) is OpenAI's production-ready framework for building agentic applications. It has first-class MCP support with 4 transport options.
Install

```bash
pip install openai-agents
```
The 4 MCP Transport Options
| Transport | Class | Best for |
|---|---|---|
| Hosted MCP Tools | `HostedMCPTool` | Publicly reachable servers — OpenAI calls them for you |
| Streamable HTTP | `MCPServerStreamableHttp` | Your own HTTP servers (local or remote) |
| stdio | `MCPServerStdio` | Local servers that run as subprocesses |
| SSE (legacy) | `MCPServerSse` | Older servers using Server-Sent Events |
Option A: Hosted MCP Tools (Easiest)
With hosted tools, OpenAI's Responses API calls the remote MCP server on your behalf. No local process, no connection management. Best for publicly reachable servers.
```python
import asyncio

from agents import Agent, HostedMCPTool, Runner

async def main():
    agent = Agent(
        name="Assistant",
        tools=[
            HostedMCPTool(
                tool_config={
                    "type": "mcp",
                    "server_label": "gitmcp",
                    "server_url": "https://gitmcp.io/openai/codex",
                    "require_approval": "never",
                }
            )
        ],
    )

    result = await Runner.run(agent, "What language is the Codex repo written in?")
    print(result.final_output)

asyncio.run(main())
```
Key points:
- Add `HostedMCPTool` to `tools` (not `mcp_servers`)
- The model discovers and invokes tools automatically
- `require_approval`: set to `"always"` for sensitive operations, or provide an `on_approval_request` callback
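The decision logic behind an approval callback can be sketched in plain Python. Note that the request and response shapes below are simplified stand-ins for illustration, not the SDK's actual callback types, and the tool names are hypothetical:

```python
# Sketch of approval-callback logic: auto-approve read-only tools,
# reject anything that mutates state. Shapes are simplified stand-ins,
# not the Agents SDK's real request/result types.
READ_ONLY_TOOLS = {"search", "fetch", "list_files"}

def on_approval_request(request: dict) -> dict:
    """Approve read-only tools automatically; reject everything else."""
    tool_name = request.get("tool_name", "")
    if tool_name in READ_ONLY_TOOLS:
        return {"approve": True}
    return {"approve": False, "reason": f"'{tool_name}' requires manual review"}

print(on_approval_request({"tool_name": "search"}))       # approved
print(on_approval_request({"tool_name": "delete_repo"}))  # rejected with reason
```

An allowlist like this keeps sensitive operations behind a human gate while letting harmless lookups flow through without interruption.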
Option B: Streamable HTTP
For servers you run yourself (local or remote). You manage the connection; the SDK manages the protocol.
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async def main():
    async with MCPServerStreamableHttp(
        name="My MCP Server",
        params={
            "url": "http://localhost:8000/mcp",
            "headers": {"Authorization": "Bearer YOUR_TOKEN"},
        },
        cache_tools_list=True,
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use MCP tools to answer questions.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "What tools are available?")
        print(result.final_output)

asyncio.run(main())
```
Option C: stdio (Local Servers)
Spawn a local MCP server as a subprocess. Great for quick prototyping with existing MCP servers like the filesystem server.
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    async with MCPServerStdio(
        name="Filesystem Server",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp/demo"],
        },
    ) as server:
        agent = Agent(
            name="File Assistant",
            instructions="Help the user with file operations.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "List the files available to you.")
        print(result.final_output)

asyncio.run(main())
```
Multi-Server Setup
Agents can use multiple MCP servers at once. Use `MCPServerManager` for clean lifecycle management:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerManager, MCPServerStreamableHttp

async def main():
    servers = [
        MCPServerStreamableHttp(name="calendar", params={"url": "http://localhost:8000/mcp"}),
        MCPServerStreamableHttp(name="docs", params={"url": "http://localhost:8001/mcp"}),
    ]

    async with MCPServerManager(servers) as manager:
        agent = Agent(
            name="Assistant",
            mcp_servers=manager.active_servers,
        )
        result = await Runner.run(agent, "What meetings do I have today?")
        print(result.final_output)

asyncio.run(main())
```
`MCPServerManager` drops failed servers by default (`drop_failed_servers=True`) and tracks them in `manager.failed_servers`.
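The drop-failed-servers behavior described above follows a general pattern worth understanding on its own. A minimal sketch, using stub objects rather than the SDK's real server types:

```python
# Generic sketch of the "drop failed servers" pattern: attempt to connect
# each server, keep the successes, and track failures separately instead
# of aborting the whole run. Server is a stand-in, not an SDK type.
class Server:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def connect(self) -> None:
        if not self.healthy:
            raise ConnectionError(self.name)

def partition_servers(servers):
    """Connect each server; return (active, failed) lists."""
    active, failed = [], []
    for server in servers:
        try:
            server.connect()
            active.append(server)
        except ConnectionError:
            failed.append(server)
    return active, failed

active, failed = partition_servers([Server("calendar"), Server("docs", healthy=False)])
print([s.name for s in active], [s.name for s in failed])  # ['calendar'] ['docs']
```

The payoff is graceful degradation: one unreachable server costs you its tools, not the entire agent run.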
Part 2: ChatGPT Apps (MCP in the ChatGPT UI)
If you want MCP tools directly inside ChatGPT (not code), OpenAI offers ChatGPT Apps in beta for Business, Enterprise, and Edu plans.
How to Set Up
1. Go to chatgpt.com/admin/ca → Workspace Settings → Permissions & Roles → enable Connected Data Developer mode.
2. Workspace Settings → Apps → Create. Provide your MCP server endpoint URL and metadata.
3. Set up OAuth or API key authentication for your MCP server.
4. Once deployed, ChatGPT can call your MCP server's tools, execute workflows, and display results.
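For the API-key option in step 3, the server-side check can be as simple as validating a bearer token on each incoming request. A stdlib-only sketch; the header format and key source here are illustrative assumptions, not ChatGPT requirements:

```python
import hmac

# Sketch of API-key auth for an MCP server: accept only requests that
# carry "Authorization: Bearer <key>". In production, load the key from
# a secret store rather than hard-coding it as done here for the demo.
EXPECTED_KEY = "s3cret-demo-key"

def is_authorized(headers: dict) -> bool:
    """Return True if the request carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(supplied, EXPECTED_KEY)

print(is_authorized({"Authorization": "Bearer s3cret-demo-key"}))  # True
print(is_authorized({"Authorization": "Bearer wrong-key"}))        # False
```

You would run this check in middleware before dispatching any MCP request; OAuth is the better choice when per-user identity matters.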
Availability: ChatGPT Apps with full MCP (read + write) are in beta for Business, Enterprise, and Edu plans on chatgpt.com. Not available on free/Plus plans yet.
Part 3: Build an MCP Server for ChatGPT
Want to build your own MCP server that works with ChatGPT? The OpenAI Apps SDK documentation at developers.openai.com/apps-sdk/build/mcp-server covers the full flow. Here's the quick version:
```bash
# Install dependencies
pip install "mcp[cli]" fastapi uvicorn
```
Create your server using the MCP Python SDK's built-in FastMCP class:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-chatgpt-app")

@mcp.tool()
def lookup_order(order_id: str) -> dict:
    """Look up an order by ID."""
    # Your business logic here
    return {"order_id": order_id, "status": "shipped", "eta": "2 days"}

@mcp.tool()
def search_products(query: str, limit: int = 5) -> list:
    """Search the product catalog."""
    # Your search logic here
    return [{"name": f"Product matching '{query}'", "price": 29.99}]

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```
Then deploy it (any hosting that exposes an HTTP endpoint works), configure it as a ChatGPT App, and ChatGPT can call your tools.
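Under the hood, clients invoke your tools with MCP's JSON-RPC 2.0 messages. A sketch of the `tools/call` envelope a client would send to the `lookup_order` tool above (message construction only, no network; the order ID is made up):

```python
import json

# Build the JSON-RPC 2.0 envelope MCP uses for a tool invocation.
# Per the MCP spec, tool calls use method "tools/call" with the tool
# name and its arguments in params.
def tools_call_request(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = tools_call_request("lookup_order", {"order_id": "A-1001"})
print(msg)
```

Seeing the wire format makes debugging easier: if a tool call fails in ChatGPT, you can replay the same envelope against your endpoint directly.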
OpenAI Agents SDK vs Claude Desktop: MCP Comparison
| Feature | OpenAI Agents SDK | Claude Desktop |
|---|---|---|
| MCP transports | Hosted, Streamable HTTP, stdio, SSE | stdio (local only) |
| Remote servers | Yes (Hosted + HTTP) | No (stdio only) |
| Multi-server | Yes (`MCPServerManager`) | Yes (config file) |
| Approval flows | Built-in (`require_approval`) | Per-tool prompts |
| Language | Python + JavaScript SDKs | Config-based (any server language) |
| Setup | Code-first (agent scripts) | Config-first (JSON config) |
Useful Links
| Resource | URL |
|---|---|
| OpenAI Agents SDK (Python) | github.com/openai/openai-agents-python |
| Agents SDK MCP docs | openai.github.io/openai-agents-python/mcp/ |
| OpenAI Apps SDK | developers.openai.com/apps-sdk/ |
| ChatGPT Developer Mode | help.openai.com |
| MCP Examples (GitHub) | examples/mcp |
| Agents SDK JS | github.com/openai/openai-agents-js |
Test Remote MCP Servers in the Browser
Built an MCP server? Test it with MCP Playground before connecting to ChatGPT.
Related Content
- What Is the Model Context Protocol (MCP)?
- Build Your First MCP Server with Python and FastMCP
- How to Set Up MCP in Claude Desktop
- Awesome MCP Servers List
Frequently Asked Questions
Can I use MCP with ChatGPT on a free or Plus plan?
Not yet. ChatGPT Apps with full MCP support are in beta for Business, Enterprise, and Edu plans only. Free and Plus users can still use MCP through the OpenAI Agents SDK in their own code.
What's the difference between HostedMCPTool and MCPServerStreamableHttp?
With HostedMCPTool, OpenAI's Responses API calls the remote MCP server on your behalf; you add it to the agent's tools list and manage no connection. With MCPServerStreamableHttp, your own process opens and manages the connection (added via mcp_servers), which is what you want for local or authenticated servers.
Does ChatGPT support stdio MCP servers?
No. ChatGPT Apps connect to HTTP endpoints, so stdio servers can't be added directly. Run them with MCPServerStdio in your own Python scripts, or use Claude Desktop / Cursor, which natively support stdio.
Can I use the same MCP server with both ChatGPT and Claude?
Yes. MCP is an open protocol, so a server exposing a Streamable HTTP endpoint works with both ChatGPT Apps and the Agents SDK, and the same codebase can also run over stdio for Claude Desktop.
Nikhil Tiwari
15+ years of experience in product development, AI enthusiast, and passionate about building innovative solutions that bridge the gap between technology and real-world applications. Specializes in creating developer tools and platforms that make complex technologies accessible to everyone.