The Docker MCP server gives AI models structured access to your local Docker environment — list running containers, inspect images, read logs, execute commands inside containers and manage Docker Compose services. It runs via Docker Desktop or the Docker CLI with no extra infrastructure.
Bring your own
Claude Sonnet 4.5
MCP Playground runs 30+ models on the same workflow: switch anytime, or use Compare mode to run several in parallel and balance quality vs. cost.
No token required
How models use it and what it is built for.
Docker's MCP integration exposes the Docker Engine API as MCP tools. Models can list all containers (running or stopped), inspect container configuration and resource usage, read stdout/stderr logs, run one-off commands inside a container, pull images, inspect image layers and manage Docker Compose stacks. It connects through the Docker socket on your machine, so Docker Desktop (or Docker Engine on Linux) must be running. The server is part of Docker's official MCP Catalog and is open source. It requires no API key — just a running Docker installation.
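Exact launch details vary by MCP client and server version, but most clients register a local server with a command entry in their configuration file. As an illustration only — the command and arguments below are assumptions, not the documented invocation; check your client's docs and the Docker MCP Catalog for the current form:

```json
{
  "mcpServers": {
    "docker": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}
```

Because the server talks to the local Docker socket, no URL or API key appears in the config — the only prerequisite is that Docker is running.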
Typical tools an AI model can call. Exact names vary by version.
Copy any of these into MCP Agent Studio after connecting.
List all running containers and how long they have been up.
Show me the last 50 lines of logs from the "api" container.
Which containers are using more than 500MB of memory right now?
Run "npm test" inside the "app" container and show me the output.
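Under the hood, a prompt like the memory question above resolves to container stats fetched from the Docker Engine API, which the agent then filters and summarizes. A minimal sketch of that filtering step, using hypothetical sample data rather than a live Docker socket:

```python
# Illustrative only: filter docker-stats-style records for containers
# above a memory threshold. The sample data is made up; a real agent
# would read these figures from the Docker Engine API (the same data
# `docker stats --no-stream` shows).

MB = 1024 * 1024

def containers_over(stats, threshold_bytes):
    """Return (name, mem_bytes) pairs for containers above the threshold."""
    return [(s["name"], s["mem_usage"]) for s in stats
            if s["mem_usage"] > threshold_bytes]

sample = [
    {"name": "api",   "mem_usage": 712 * MB},  # hypothetical figures
    {"name": "redis", "mem_usage": 48 * MB},
    {"name": "app",   "mem_usage": 533 * MB},
]

for name, mem in containers_over(sample, 500 * MB):
    print(f"{name}: {mem // MB} MB")
```

The value the MCP layer adds is that the model performs this fetch-filter-explain chain from a one-line natural-language request.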
This is not a single-model product: you get the same MCP connection with 30+ models (Claude, GPT, Gemini, DeepSeek, open-weight, and more), you can switch mid-conversation, and you can open Compare mode to run the same prompt against multiple models at once. The card above is a suggested starting point for this server — not the only choice.
Default pick for Docker
Claude Sonnet 4.5
Claude Sonnet 4.5 handles multi-step DevOps reasoning and log analysis well. Use Haiku for simple container listing; Opus for complex Compose debugging.
Open MCP Agent Studio with the connection pre-filled. Add your model provider key, pick any of 30+ models, and start chatting — no install required.
Open Agent Studio
Common questions about connecting, scoping and using it safely.
It is an official open-source MCP server from Docker that lets AI models interact with your local Docker environment. Agents can list containers, read logs, run commands inside containers and manage images — all from a natural language chat.
Either works. The server connects to the Docker socket, which Docker Desktop, Docker Engine (Linux) and Docker CLI all expose. On macOS, Docker Desktop is the easiest path.
Yes. The server itself is free and open source. Docker Desktop is free for personal use and small businesses. AI model usage is billed by your model provider.
Yes — start, stop, restart and remove tools are available. Always confirm before the agent takes a destructive action. For read-only monitoring workflows, you can restrict access at the Docker socket level.
The CLI requires you to know the exact commands. Docker MCP lets you describe what you want in plain English and the AI agent figures out the right Docker API calls, chains multiple operations, and explains the results in a readable format.
GitHub
Drive GitHub repos, PRs and issues with an AI agent.
Cloudflare
Drive Workers, DNS, R2, D1 and the rest of the Cloudflare API with AI.
Vercel
List deployments, read logs, manage env keys and roll back from natural language.
Playwright
Give AI models real browser control — navigate, click, fill forms and screenshot any page.