Guide · Feb 10, 2026 · 11 min read

MCP vs Function Calling vs REST APIs: When to Use Each for AI Agents


Nikhil Tiwari


TL;DR

  • REST APIs — low-level HTTP communication. You build the integration manually.
  • Function calling — an LLM capability where the model outputs structured JSON to request tool execution. Tied to a specific provider.
  • MCP — an open protocol that standardises tool discovery, auth, and execution across any AI client. Provider-agnostic.
  • They operate at different layers and are often complementary, not competing.

If you're building AI agents, you've probably asked: should I use MCP, function calling, or just call APIs directly? The short answer is that they solve different problems at different layers. This guide breaks down each approach so you can pick the right one — or combine them.

Quick Comparison

|  | REST APIs | Function Calling | MCP |
|---|---|---|---|
| What it is | HTTP request-response pattern for web services | LLM feature: model outputs structured JSON to invoke functions | Open protocol for connecting AI clients to tools and data |
| Layer | Transport (HTTP) | Model capability | Application protocol (uses JSON-RPC) |
| Tool discovery | Manual — developer reads docs, writes integration code | Static — tools defined in each API request as JSON schemas | Dynamic — tools discovered at runtime via protocol |
| State | Stateless (each request is independent) | Stateless (tool defs sent per request) | Stateful (persistent session with context) |
| Provider lock-in | None (HTTP is universal) | Yes — tied to OpenAI, Anthropic, etc. | None — open standard, any client/server |
| Scaling tools | N APIs = N custom integrations | All tools in every request (context tax) | Tools loaded on-demand, server-side |
| Auth | Per-API (API keys, OAuth, etc.) | Your code handles auth | Credential isolation at server level |

How Each Works

REST APIs

The traditional approach. You call HTTP endpoints directly, parse responses, and handle errors. Every API has its own auth, schema, and conventions.

```python
# Direct API call — you handle auth, errors, and parsing yourself
import requests

response = requests.get(
    "https://api.github.com/repos/facebook/react/pulls",
    headers={"Authorization": "Bearer ghp_xxx"},
    timeout=10,
)
response.raise_for_status()  # surface HTTP errors instead of failing silently
pulls = response.json()
```

For AI agents: You write wrapper functions, define them as tools in your prompt, and the agent calls them. Each new API means new integration code.
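
A sketch of what that wrapper-plus-dispatch pattern might look like (the tool and registry names here are illustrative, not a standard API):

```python
import requests

def get_open_pulls(owner: str, repo: str) -> list:
    """One wrapper per endpoint; the agent never touches raw HTTP."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Registry your agent loop dispatches against once the model names a tool.
TOOLS = {"get_open_pulls": get_open_pulls}

def run_tool(name: str, arguments: dict):
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)
```

Every new API means another wrapper and another registry entry — exactly the per-integration cost the later approaches try to reduce.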

Function Calling

A feature offered by LLM providers (OpenAI, Anthropic, Google). You define tools as JSON schemas in your API request. The model decides when to use them and outputs structured JSON with the function name and arguments. Your code then executes the function and sends results back.

```python
# OpenAI function calling example
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"}
                },
                "required": ["city"]
            }
        }
    }]
)
# Model returns a tool call: {"name": "get_weather", "arguments": "{\"city\": \"London\"}"}
# You execute get_weather("London") and send the result back
```
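
The last two comment lines gloss over the execution step. Here is a minimal sketch of that round trip, with the model's tool call hard-coded for illustration — note that the API returns `arguments` as a JSON *string*, so it must be parsed before dispatch:

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real tool would call a weather service.
    return {"city": city, "temp": 18, "condition": "cloudy"}

# Hard-coded stand-in for the tool call the model returned above.
tool_call = {"name": "get_weather", "arguments": '{"city": "London"}'}

# 1. Parse the JSON-string arguments and execute the function yourself.
args = json.loads(tool_call["arguments"])
result = get_weather(**args)

# 2. Send the result back as a "tool" role message so the model can
#    produce its final natural-language answer.
tool_message = {"role": "tool", "content": json.dumps(result)}
```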

Key limitation: Every tool definition is sent in every request, consuming tokens. With 50+ tools, this "context tax" becomes expensive and may degrade model performance.

MCP (Model Context Protocol)

An open protocol (now under the Linux Foundation) where tools live in separate MCP servers. AI clients connect to servers, discover tools at runtime, and invoke them through a standard interface. Tools are defined once on the server side — not repeated in every prompt.

```python
# MCP server (server.py) — define tools once
from fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool
def get_weather(city: str) -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 18, "condition": "cloudy"}

if __name__ == "__main__":
    mcp.run()

# Any MCP client connects and uses the tools:
# Claude Desktop, Cursor, ChatGPT, OpenAI Agents SDK,
# custom clients — all use the same server
```

Key advantage: Build the server once, and every MCP-compatible client can use it. No per-provider integration. Tools are discovered dynamically, not hardcoded into prompts.
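
On the wire, both discovery and invocation are JSON-RPC 2.0 messages defined by the protocol. A sketch of the two requests a client sends (responses, capability negotiation, and session setup omitted):

```python
import json

# Discover available tools at runtime — no schemas in the prompt.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoke a discovered tool by name with structured arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "London"}},
}

wire = json.dumps(call_tool)  # what actually travels over the transport
```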

The N×M Problem

This is the core issue MCP was designed to solve:

Without MCP

N AI clients × M tools = N×M custom integrations. Each client needs its own code to talk to each tool.

With MCP

N clients + M servers = N+M implementations. Each side implements the protocol once.
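
With hypothetical numbers — say 4 clients and 6 tools — the difference is easy to see:

```python
clients, tools = 4, 6

without_mcp = clients * tools  # every client integrates every tool directly
with_mcp = clients + tools     # each side implements the protocol once

print(f"{without_mcp} integrations vs {with_mcp} implementations")
# → 24 integrations vs 10 implementations
```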

When to Use Each

| Scenario | Recommendation | Why |
|---|---|---|
| Simple script calling 1–2 APIs | REST API (direct calls) | Minimal overhead, full control |
| Chatbot with 3–5 tools, single LLM provider | Function calling | Simple, low latency, built into the API |
| Agent with 10+ tools, multiple providers | MCP | Dynamic discovery, no context tax, portable |
| Enterprise agent with auth requirements | MCP | Credential isolation, RBAC, audit logging |
| IDE integration (Cursor, VS Code, Claude Code) | MCP | These tools use MCP natively |
| Quick prototype / hackathon | Function calling | Fastest to implement, minutes to set up |
| Tools you want to share across teams/clients | MCP | Build once, use from any MCP client |

Can You Combine Them?

Yes — and most production systems do. They operate at different layers:

  • MCP servers often call REST APIs internally. An MCP server for GitHub uses the GitHub REST API under the hood. MCP wraps it with discovery, auth isolation, and a standard interface.
  • Function calling can invoke MCP tools. The OpenAI Agents SDK supports HostedMCPTool — the model uses function calling to trigger MCP server tools.
  • MCP doesn't replace APIs. It sits on top of them, providing a standard layer for AI clients.
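
To make the first point concrete, here is a minimal sketch of an MCP tool that calls the GitHub REST API internally. Function and helper names are illustrative; on a real server the tool would be registered with `@mcp.tool` exactly like the weather example above.

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def build_pulls_request(owner: str, repo: str, token: str) -> urllib.request.Request:
    """Build the authenticated REST request this tool wraps."""
    return urllib.request.Request(
        f"{GITHUB_API}/repos/{owner}/{repo}/pulls",
        headers={"Authorization": f"Bearer {token}"},
    )

# @mcp.tool  # registration on the MCP server, as in the earlier example
def list_pull_requests(owner: str, repo: str, token: str) -> list:
    """MCP tool: clients see a discoverable tool; the REST call — and the
    credential — stay on the server side."""
    with urllib.request.urlopen(build_pulls_request(owner, repo, token)) as resp:
        return json.loads(resp.read())
```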

Adoption in 2026

MCP has been adopted by all major AI platforms:

  • Anthropic — created MCP; Claude Desktop, Claude Code use it natively
  • OpenAI — Agents SDK supports MCP; ChatGPT Apps use MCP endpoints
  • Google — official MCP servers for BigQuery, Maps, Cloud services
  • Microsoft — VS Code + GitHub Copilot support MCP; Azure MCP servers available
  • Cursor, Windsurf, Cline — native MCP support

MCP was donated to the Linux Foundation (Dec 2025) for vendor-neutral governance. The MCP SDK has 97M+ monthly downloads across Python and TypeScript.


Frequently Asked Questions

Does MCP replace function calling?
No. They work at different layers. Function calling is a model capability (the model outputs structured JSON). MCP is a protocol for tool hosting and discovery. In practice, function calling is often used inside MCP — for example, the OpenAI Agents SDK uses function calling to invoke tools exposed by MCP servers.

Is MCP just a wrapper around APIs?
MCP servers often call REST APIs internally, but MCP adds significant value: standardised tool discovery, credential isolation, stateful sessions, bidirectional communication, and a protocol that any AI client can use. It's an application-level protocol, not just an API wrapper.

What about the "context tax" with function calling?
With function calling, every tool definition must be included in every API request. This consumes tokens and context window space. With 50+ tools, this becomes expensive and can degrade model performance. MCP avoids this because tools live on the server side and are discovered on-demand.

Should I migrate all my function calling tools to MCP?
Not necessarily. If you have a simple chatbot with a few tools on a single LLM provider, function calling works fine. Consider MCP when you need portability across providers, have many tools, require enterprise auth, or want to share tools across multiple clients (IDE, chat, agents).

Written by Nikhil Tiwari

15+ years in product development. AI enthusiast building developer tools that make complex technologies accessible to everyone.