Sentry provides a remote MCP at mcp.sentry.dev/mcp (streamable HTTP) that lets a model work with the same issue and event data you use in the Sentry console: searching issues, opening stack traces, and reasoning about which release or deploy introduced a regression. Create an org-scoped token with the read scopes you need, then connect an MCP client and ask questions in natural language instead of hand-building queries.
https://mcp.sentry.dev/mcp
Claude Sonnet 4.5
MCP Playground runs 30+ models on the same workflow: switch anytime, or use Compare mode to run several in parallel and balance quality vs. cost.
Sentry org auth token (Settings → Account or Organization → Auth Tokens) with at least `project:read` and `event:read` for the projects you will query.
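If your MCP client asks for raw connection details rather than a preset, it helps to know that the streamable HTTP transport is ordinary JSON-RPC over HTTPS POST, with your Sentry token sent as a bearer header. A minimal sketch, assuming the endpoint above and a `SENTRY_AUTH_TOKEN` environment variable (the helper name and protocol version string are illustrative, not from Sentry's docs):

```python
import json
import os
import urllib.request

MCP_URL = "https://mcp.sentry.dev/mcp"  # Sentry's hosted remote MCP endpoint


def build_initialize_request(token: str) -> urllib.request.Request:
    """Build (but do not send) a JSON-RPC `initialize` call for the
    streamable HTTP transport, authorised with a Sentry org token."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed MCP protocol revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return urllib.request.Request(
        MCP_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {token}",  # read-scoped org token
        },
        method="POST",
    )


req = build_initialize_request(os.environ.get("SENTRY_AUTH_TOKEN", "sntrys_example"))
# urllib.request.urlopen(req)  # uncomment to actually connect
```

In practice your MCP client handles this handshake for you; the sketch only shows where the token ends up, which is why a read-scoped token is all the connection ever needs.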
How models use it and what it is built for.
Sentry’s MCP is aimed at on-call engineers, engineering leads, and support engineers who need quick answers: what new errors showed up in the last release, which issue is the user-facing 500, or whether two crash groups share the same root cause. The server mirrors Sentry’s permission model, so a token that can only read a single project cannot expose the rest of the org. Pair it with the GitHub, Linear, or Jira MCP servers when you need to jump from a stack trace to a PR and a tracked follow-up, all in one session.
Typical tools an AI model can call. Exact names vary by version.
Copy any of these into MCP Agent Studio after connecting.
What new errors appeared in production after the 14:00 UTC deploy?
Summarise the top 5 issues by event volume in the web client for the last 24 hours.
Which issue matches this Sentry event URL, and which release first showed it?
Are these two Sentry issues duplicates of the same line in `checkout.py`?
This is not a single-model product: you get the same MCP connection with 30+ models (Claude, GPT, Gemini, DeepSeek, open-weight, and more), you can switch mid-conversation, and you can open Compare mode to run the same prompt against multiple models at once. The card above is a suggested starting point for this server — not the only choice.
Default pick for Sentry
Claude Sonnet 4.5
Claude Sonnet 4.5 handles stack traces, release correlation, and long error threads well. For simple “get issue 1234” fetches, Haiku 4.5 is often enough.
Open MCP Agent Studio with the connection pre-filled. Add your token, pick any of 30+ models, and start chatting — no install required.
Try Sentry in Agent Studio
Common questions about connecting, scoping and using it safely.
It is a hosted remote MCP server that Sentry provides so AI tools can list and read issues, events, and related metadata in your organisation, using the same security boundaries as the REST and UI APIs, instead of you pasting error bodies into a chat by hand.
Create a personal or organisation access token in Sentry with the minimum scopes: typically `org:read`, `project:read`, and `event:read` for error triage. Pass it as a bearer token to your MCP client, or set the Authorization header however your client's configuration requires.
The template and defaults in most clients focus on read-only triage. If Sentry ever exposes write tools, treat them with the same caution as write APIs: prefer read-only tokens until you explicitly need and trust automated mutations.
The hosted mcp.sentry.dev endpoint is built for Sentry’s SaaS. Self-hosted and EU-specific hosts may use different base URLs; follow your organisation’s Sentry or contractual guidance before pointing an agent at non-default regions.
Use Claude Sonnet 4.5 for multi-issue comparisons and “what changed in this release” questions, where the model has to juggle time ranges, tags, and stack frames. You can A/B the same prompt against Haiku 4.5 in Agent Studio to see where premium reasoning pays off.
Vercel
List deployments, read logs, manage env keys and roll back from natural language.
GitHub
Drive GitHub repos, PRs and issues with an AI agent.
Linear
Triage issues, run cycles and update your product backlog with AI.
Jira & Confluence
Search Jira and Confluence, move tickets, and keep docs aligned with in-flight work.