
TL;DR
MCP servers and function calling both let AI tools interact with external systems. They solve different problems. Here is when to reach for each.
MCP and function calling are not competing approaches. They operate at different layers. Function calling is a model capability - the model decides to call a function. MCP is a protocol - it standardizes how tools connect to AI systems. Understanding when to use each saves you from building the wrong abstraction.
Function calling is built into the model API. You define tools as JSON schemas, send them alongside your prompt, and the model returns structured tool calls when it decides one is needed.
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024, // required by the Messages API
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [{
    name: "get_weather",
    description: "Get current weather for a city",
    input_schema: {
      type: "object",
      properties: {
        city: { type: "string" },
        units: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
    },
  }],
});
```
The model sees the tool definitions, decides if one is relevant, and returns a structured tool call. Your code executes the tool and returns the result. This loop can repeat multiple times.
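The handling side of that loop can be sketched as a small function that pulls `tool_use` blocks out of a response and builds the `tool_result` message you send back. This is a sketch: `executeTool` and its hardcoded weather stub stand in for your own dispatch logic.

```typescript
// Shape of the content blocks the Messages API returns (simplified)
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: Record<string, unknown> };

// Your own dispatch: route the tool name to real code.
// The fixed temperature here is a stub for illustration.
async function executeTool(name: string, input: Record<string, unknown>): Promise<string> {
  if (name === "get_weather") {
    return JSON.stringify({ city: input.city, temp: 21, units: input.units ?? "celsius" });
  }
  throw new Error(`Unknown tool: ${name}`);
}

// Turn every tool_use block into a tool_result block for the next request
async function buildToolResults(blocks: ContentBlock[]) {
  const results = [];
  for (const block of blocks) {
    if (block.type === "tool_use") {
      results.push({
        type: "tool_result" as const,
        tool_use_id: block.id, // ties the result back to the model's call
        content: await executeTool(block.name, block.input),
      });
    }
  }
  // Tool results go back as a user-role message; the loop repeats until
  // the model answers with plain text instead of another tool call.
  return { role: "user" as const, content: results };
}
```

You would append this message to the conversation and call `messages.create` again, looping while the response's `stop_reason` is `"tool_use"`.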
When to use function calling:
- The logic is specific to one application and lives in your own codebase (custom pipelines, your own API endpoints, business rules)
- You are prototyping and want the fastest path - add a tool definition to your API call and handle the result
- The tool only needs to exist for a single request, with no separate server to run
MCP is a protocol layer that sits between AI tools and external services. Instead of defining tools inline with your API call, MCP servers expose tools, resources, and prompts through a standardized interface.
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// MCP server exposes tools via the protocol
const server = new McpServer({ name: "weather-server", version: "1.0.0" });

server.tool(
  "get_weather",
  { city: z.string(), units: z.enum(["celsius", "fahrenheit"]) },
  async ({ city, units }) => {
    const data = await fetchWeather(city, units); // your own weather-fetching helper
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
);

// Serve over stdio so clients like Claude Code can launch and talk to it
await server.connect(new StdioServerTransport());
```
Claude Code, Cursor, and other AI tools discover MCP servers and their capabilities automatically. The user does not wire up tool schemas manually.
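Discovery works because the client launches the server from a config file. In Claude Code, for example, a project-level `.mcp.json` along these lines registers the server so its tools show up automatically (a sketch; the server name and build path are placeholders for your own):

```
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["./build/weather-server.js"]
    }
  }
}
```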
When to use MCP:
- The tool is reusable infrastructure you want across projects and AI clients - databases, browser automation, Slack, GitHub
- You want Claude Code, Cursor, and other MCP-aware clients to discover the tool automatically, without wiring up schemas per app
- The integration benefits from a persistent server that can maintain connections and expose resources and prompts alongside tools
| | Function Calling | MCP |
|---|---|---|
| Level | Model API feature | Protocol layer |
| Scope | Per-request | Persistent server |
| Discovery | Manual (defined in code) | Automatic (server advertises) |
| Portability | Tied to your app | Works across AI clients |
| State | Stateless per call | Can maintain connections |
| Resources | Tools only | Tools + resources + prompts |
| Transport | HTTP/API | Stdio, HTTP, SSE |
The best architectures use both. MCP servers provide reusable tool infrastructure. Function calling handles application-specific logic.
```
User prompt
  -> Claude Code / AI Client
       -> MCP Server (database access, file system, external APIs)
       -> Function calling (app-specific business logic)
  -> Response
```
Example: your AI coding assistant uses an MCP server for database queries (reusable across projects) and function calling for your specific code generation logic (unique to your app).
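In code, combining the two often comes down to merging the tool list an MCP client discovers with your inline definitions before each model call. A minimal sketch, with a simplified tool shape and the (assumed) policy that inline definitions win on name collisions:

```typescript
// Minimal tool shape shared by both sources (simplified)
interface ToolDef {
  name: string;
  description: string;
  input_schema: Record<string, unknown>;
}

// Merge MCP-discovered tools with app-specific inline tools.
// Inline tools take precedence when names collide.
function mergeTools(mcpTools: ToolDef[], inlineTools: ToolDef[]): ToolDef[] {
  const byName = new Map<string, ToolDef>();
  for (const tool of mcpTools) byName.set(tool.name, tool);
  for (const tool of inlineTools) byName.set(tool.name, tool); // overrides MCP entry
  return [...byName.values()];
}
```

At request time you would pass the merged array as `tools`, then route each resulting tool call either back to the MCP client or to your local handlers by name.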
Reach for function calling when:
- The logic is unique to your app and there is no reason to share it
- You want the quickest path from prompt to working tool

Reach for MCP when:
- The tool is infrastructure you will reuse across projects
- You want the same tool to work in multiple AI clients without rewiring schemas

Use both when:
- You are building a real application - MCP servers for shared infrastructure, function calling for app-specific logic
MCP is winning for infrastructure-level tools. Database access, browser automation, Slack integration, GitHub operations - these all make sense as MCP servers because they are reusable across projects and clients.
Function calling remains essential for application-specific logic. Your custom data pipeline, your specific API endpoints, your business rules - these belong in your application's function calling layer.
The line between them will blur as more AI clients support MCP natively, but the architectural distinction will remain: protocol for reusable infrastructure, API for application logic.
**Does MCP work with tools other than Claude?**
Yes. MCP is an open protocol. Cursor, Windsurf, Zed, and other tools support it. You can also use MCP servers directly via the TypeScript SDK in any Node.js application.

**Does MCP replace function calling?**
No. They solve different problems. Function calling is how models interact with tools at the API level. MCP is how tools expose themselves to AI clients. A single application often uses both.

**Which is easier to start with?**
Function calling is simpler for quick prototypes - add a tool definition to your API call and handle the result. MCP requires running a separate server but pays off when you want the tool to work across multiple AI clients.

**Do I need to learn both?**
If you are building AI applications, yes. Function calling is fundamental to how models use tools. MCP is becoming the standard for how tools connect to AI development environments.