Linear hosts an official MCP server for issues, projects, teams and comments. It speaks streamable HTTP at mcp.linear.app/mcp and SSE at mcp.linear.app/sse. Authenticate with a Linear personal API key from Settings → API, paste the URL and token into any MCP client, and your backlog becomes a live agent surface.
https://mcp.linear.app/sse
Claude Sonnet 4.5
MCP Playground runs 30+ models on the same workflow: switch anytime, or use Compare mode to run several in parallel and balance quality vs. cost.
Linear personal API key (Settings → API → Personal API keys). Scopes should match the teams and actions you need.
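Most MCP clients accept these connection details as a JSON config fragment. A sketch, assuming the common `mcpServers` shape; exact keys vary by client, and the `linear` entry name and token placeholder are illustrative:

```json
{
  "mcpServers": {
    "linear": {
      "url": "https://mcp.linear.app/sse",
      "headers": {
        "Authorization": "Bearer <your-linear-api-key>"
      }
    }
  }
}
```

Clients that prefer streamable HTTP can point at https://mcp.linear.app/mcp instead.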
How models use it and what it is built for.
The Linear MCP server wraps Linear’s product-management API: listing teams, cycles, issues and projects; fetching a single issue or document; creating or updating work via the server’s save/create tools (exact tool names vary by version); and posting comments. Models can answer “what is blocked in the current cycle?”, “summarise what shipped last week”, or “create a P1 bug for the checkout failure in ENG”. Because Linear hosts the server, you do not self-host a bridge; you only need API credentials with the right scopes. Agent Studio and other MCP clients connect to the same mcp.linear.app endpoints over streamable HTTP or SSE.
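Under the hood, every MCP tool call is a JSON-RPC 2.0 message. A minimal sketch of the request body a client POSTs to the streamable HTTP endpoint; the `create_issue` tool name and its argument fields are illustrative, since exact tool names and schemas vary by server version (check `tools/list` for the real ones):

```python
import json

LINEAR_MCP_URL = "https://mcp.linear.app/mcp"  # streamable HTTP endpoint

def tool_call_request(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 'tools/call' message as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments -- discover the real schema via tools/list.
payload = tool_call_request(
    "create_issue",
    {"teamKey": "ENG", "title": "Investigate 500s on /checkout", "priority": 1},
)
body = json.dumps(payload)
```

Your MCP client builds and sends these messages for you; the sketch only shows what travels over the wire.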
Typical tools an AI model can call. Exact names vary by version.
Copy any of these into MCP Agent Studio after connecting.
What issues are in progress in the current cycle for my team?
List unassigned P1 bugs in the backlog and suggest who might own them.
Summarise everything my team closed in the last 7 days with links to issues.
Create a new issue in ENG titled "Investigate 500s on /checkout" and set it to High priority.
This is not a single-model product: you get the same MCP connection with 30+ models (Claude, GPT, Gemini, DeepSeek, open-weight, and more), you can switch mid-conversation, and you can open Compare mode to run the same prompt against multiple models at once. The card above is a suggested starting point for this server — not the only choice.
Default pick for Linear
Claude Sonnet 4.5
Claude Sonnet 4.5 handles multi-step backlog reasoning and state changes reliably. The Linear template defaults to the lighter Claude Haiku 4.5 for quick triage; compare both in Agent Studio before standardising.
Open MCP Agent Studio with the connection pre-filled. Add your token, pick any of 30+ models, and start chatting — no install required.
Try Linear in Agent Studio
Common questions about connecting, scoping and using it safely.
It is the official remote MCP service Linear hosts for its product. It exposes your workspace’s issues, teams, cycles, projects and comments as tools so an AI can read the backlog, update work and add comments the same way the Linear API does.
Create a personal API key under Linear → Settings → API, with permissions for the teams and objects you want the agent to touch. In MCP clients that do not use OAuth, pass that key as a bearer token when connecting to the hosted URL.
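A minimal sketch of attaching the key as a bearer token, using only the Python standard library. The body follows the MCP `initialize` handshake; the protocol version shown and the key placeholder are assumptions, and the send line is left commented so nothing hits the network:

```python
import json
import urllib.request

API_KEY = "<your-linear-api-key>"  # placeholder -- create one under Settings → API

# First message of the MCP handshake (JSON-RPC 2.0 'initialize').
init_message = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed spec revision; use your client's
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

request = urllib.request.Request(
    "https://mcp.linear.app/mcp",
    data=json.dumps(init_message).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {API_KEY}",  # the personal API key goes here
    },
    method="POST",
)

# urllib.request.urlopen(request)  # uncomment to actually send the handshake
```

OAuth-capable clients skip this entirely and negotiate a token for you; the bearer header is only needed for key-based clients.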
Yes, if the API key can write issues in those teams. Start with a key scoped to a test team or a single project, review proposed issue updates before you approve runs in production, and use read-only or narrower keys until you trust the workflow.
Yes. Any MCP client that can reach the hosted URL with your token works — for example Claude Desktop, Cursor, or MCP Agent Studio in the browser. Use HTTP at mcp.linear.app/mcp or SSE at mcp.linear.app/sse depending on what your client supports.
Use Claude Sonnet 4.5 for planning, summarisation, and writing issue descriptions. Claude Haiku 4.5 is enough for short triage. Run two models on the same thread in Agent Studio to compare answers before you standardise.
GitHub
Drive GitHub repos, PRs and issues with an AI agent.
Slack
Search workspace, read threads, post updates and work with Slack from chat.
Notion
Search, summarise and edit your Notion workspace with AI.
Jira & Confluence
Search Jira and Confluence, move tickets, and keep docs aligned with in-flight work.