
TL;DR
MCP is the USB-C of AI agents. What the Model Context Protocol is, why Anthropic built it, and how to install your first server in Claude Code or Cursor. Fact-checked against the official MCP spec.
If you have touched an AI coding tool in the last twelve months, you have seen the letters MCP. They show up in Claude Code docs, in the Cursor settings panel, in every new Anthropic launch video, and now in the OpenAI Codex docs too. In April 2026 the MCP Directory we maintain at mcp.developersdigest.tech lists 271 active servers. The official Anthropic MCP registry tracks thousands more.
Most beginner posts either wave their hands ("it connects AI to tools") or drown you in JSON-RPC frame diagrams. This guide sits in between. Everything you are about to read is pulled directly from the official spec at modelcontextprotocol.io and the Claude Code MCP documentation. If a claim is not sourced to a primary doc, it is not in this post.
The Model Context Protocol is described on the official introduction page as "an open-source standard for connecting AI applications to external systems." The same page uses the now-famous analogy: "Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems."
That is the whole idea. Before MCP, every AI tool that wanted to talk to your filesystem, your database, or your Slack had to ship a bespoke integration. Anthropic open-sourced MCP in November 2024 to replace that N-by-M mess with a single protocol. One server, every client.
The Wikipedia entry on MCP and coverage in The New Stack both note that the approach worked. OpenAI formally adopted the protocol in March 2025, and the OpenAI Codex docs at developers.openai.com/codex/mcp now list first-party MCP support. Microsoft Copilot Studio, Google's Gemini CLI, Cursor, VS Code, and Claude Code all ship MCP clients today. In December 2025 Anthropic donated the protocol to the Agentic AI Foundation under the Linux Foundation, with Block and OpenAI as co-founders.
That is what "won" looks like for a protocol: your competitor ships it.
MCP defines three roles.
A host is the AI application you sit in front of. Claude Code, Cursor, ChatGPT Desktop, Gemini CLI. The host runs the LLM and owns the conversation.
A client is the connection the host opens for each server. One client per server, one-to-one.
A server is a small program that exposes capabilities over the protocol. It runs locally as a subprocess or remotely as an HTTP service. It does not care which host is on the other end.
Messages between client and server are JSON-RPC 2.0, encoded as UTF-8. That is fixed by the spec, whichever transport carries them.
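Concretely, a request frame on the wire looks like this (the `tools/list` method name is from the spec; the `id` value is arbitrary):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
```

The server replies with a frame carrying the same `id` and a `result` (or `error`) member, exactly as in plain JSON-RPC 2.0.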
Every MCP server exposes some combination of three primitives. This is the part that trips up beginners the most, because they look similar but have different ownership semantics.
From the spec: "Tools in MCP are designed to be model-controlled, meaning that the language model can discover and invoke tools automatically based on its contextual understanding and the user's prompts."
Tools are functions the model can call. The server declares them by listing a name, description, and inputSchema (JSON Schema). When the client asks, the server returns the list. When the model wants to invoke one, it sends tools/call with arguments, and the server returns a result.
A tool definition from the spec looks like this:
```json
{
  "name": "get_weather",
  "title": "Weather Information Provider",
  "description": "Get current weather information for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string", "description": "City name or zip code" }
    },
    "required": ["location"]
  }
}
```
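When the model decides to use a tool like the weather example above, the client sends a `tools/call` request. A sketch, with the argument value invented for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "San Francisco" }
  }
}
```

The server's result carries a `content` array of text (or image, or resource) blocks, plus an optional `isError` flag.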
The spec is explicit that "there SHOULD always be a human in the loop with the ability to deny tool invocations." That is why Claude Code prompts you before running a new tool the first time.
Resources are "application-driven" context: files, database rows, API payloads, anything the host might want to feed into the model. Each resource is identified by a URI. The spec defines standard schemes like file://, https://, and git://, and allows custom ones.
Clients call resources/list to discover what is available, then resources/read with a URI to fetch contents. Resources can also be subscribable. The server notifies the client when a file changes.
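A `resources/read` call is just a request with a URI; a sketch (the file path is invented):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": { "uri": "file:///project/README.md" }
}
```

The response's `contents` array pairs each `uri` with a `mimeType` and either `text` or base64 `blob` data.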
The key distinction: the host decides when to pull in a resource, not the model. In Cursor, the @ picker that lets you attach a file to a prompt is a resource picker. Tools are for the model to invoke; resources are for the app (or the user) to attach.
Prompts are "user-controlled" templates. The spec describes them as "structured messages and instructions for interacting with language models," and the docs show the canonical UI: slash commands.
A prompt template is a named, argument-accepting message builder. The server returns a list of message objects when asked. Most MCP clients surface these as slash commands like /review-code or /summarize-thread.
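Under the hood, a slash command resolves to a `prompts/get` request; a sketch with a hypothetical prompt name and argument:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "prompts/get",
  "params": {
    "name": "review-code",
    "arguments": { "language": "typescript" }
  }
}
```

The server returns a `messages` array of role/content objects, ready to drop into the conversation.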
The mental model: tools are the model's hands, resources are its eyes, prompts are its phrasebook.
There are two transports in the current spec, and exactly one of them is new enough that most beginner posts get it wrong.
The spec is direct: "The protocol currently defines two standard transport mechanisms for client-server communication: stdio and Streamable HTTP."
With stdio, the client launches the server as a subprocess. JSON-RPC messages flow over stdin and stdout, delimited by newlines. Logs go to stderr. This is the local, on-your-machine transport.
Introduced in protocol version 2025-03-26, Streamable HTTP is now the standard remote transport. It uses a single HTTP endpoint that accepts both POST (client-to-server messages) and GET (open an SSE stream for server-to-client messages). It supports resumable sessions via an Mcp-Session-Id header.
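A sketch of a single client-to-server message over Streamable HTTP (the `/mcp` path and session id are illustrative, not mandated by the spec):

```http
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream
Mcp-Session-Id: 1868a90c9f77

{"jsonrpc": "2.0", "id": 5, "method": "tools/list", "params": {}}
```

The server can answer with a plain JSON response or upgrade the reply to an SSE stream; the same endpoint accepts a GET to open a standing server-to-client stream.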
The older HTTP+SSE transport from protocol version 2024-11-05 is deprecated. The transports spec page includes this note verbatim: "This replaces the HTTP+SSE transport from protocol version 2024-11-05." If an article tells you to "set up an SSE server," it is at least a year out of date. The Claude Code docs also flag this directly: "The SSE (Server-Sent Events) transport is deprecated. Use HTTP servers instead, where available."
Custom transports are allowed. The spec only requires that they preserve JSON-RPC semantics and lifecycle.
Rather than hand-wave ecosystem claims, here is what the primary sources show as of April 2026.
- Claude Code: `claude mcp add` as a first-class CLI for managing servers. Docs at code.claude.com/docs/en/mcp.
- Cursor: `~/.cursor/mcp.json` (global) or `.cursor/mcp.json` (project). Docs at cursor.com/docs/context/mcp.
- VS Code: code.visualstudio.com/docs/copilot/chat/mcp-servers.
- OpenAI Codex: developers.openai.com/codex/mcp.
- OpenAI API: developers.openai.com/api/docs/mcp/.
- Full client list: modelcontextprotocol.io/clients.

The protocol is client-agnostic by design. Write once, connect anywhere.
The Claude Code docs define three scopes for server configuration, and the precise storage location for each.
| Scope | Loads in | Shared | Stored in |
|---|---|---|---|
| `local` (default) | Current project only | No | `~/.claude.json` |
| `project` | Current project only | Yes, via version control | `.mcp.json` in project root |
| `user` | All your projects | No | `~/.claude.json` |
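For the project scope, the checked-in `.mcp.json` uses the same `mcpServers` shape as a Cursor config; a minimal sketch using the official filesystem server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"]
    }
  }
}
```

Because this file lives in the project root, committing it shares the server with everyone who clones the repo.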
Pick one and go. Here are three real installs using real servers from our directory.
```bash
claude mcp add --transport stdio filesystem -- npx -y @modelcontextprotocol/server-filesystem
```
This gives the model scoped file operations. The server is listed as an official reference implementation on the modelcontextprotocol GitHub org.
```bash
claude mcp add --transport http notion https://mcp.notion.com/mcp
```
Notion publishes an official MCP endpoint. No local install needed. Claude Code handles the OAuth flow on first use.
```bash
claude mcp add --transport http secure-api https://api.example.com/mcp \
  --header "Authorization: Bearer your-token"
```
Per the Claude Code docs, all flags must come before the server name, and the -- separator precedes the subprocess command for stdio servers.
After any install, type /mcp inside Claude Code to see the status of every configured server, their tool counts, and reconnect any that dropped.
Cursor's docs show the same config shape for every server, just in JSON instead of a CLI.
Create ~/.cursor/mcp.json for global install or .cursor/mcp.json in your project root:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"],
      "env": {}
    },
    "notion": {
      "url": "https://mcp.notion.com/mcp"
    }
  }
}
```
Cursor supports ${env:VARIABLE_NAME} interpolation inside the config, which is the cleanest way to inject API keys without committing them.
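For example, a hypothetical GitHub server entry that reads its token from the environment rather than hardcoding it (the variable names are illustrative):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${env:GITHUB_TOKEN}"
      }
    }
  }
}
```

Cursor expands `${env:GITHUB_TOKEN}` at load time, so the JSON you commit never contains the secret.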
The MCP Directory at mcp.developersdigest.tech indexes 271 active servers. You do not need 271. You need five. A short list of reference servers from the Anthropic-maintained modelcontextprotocol/servers repo, all real and listed in our directory:
- `@modelcontextprotocol/server-filesystem` - sandboxed file operations.
- `@modelcontextprotocol/server-git` - read, search, and diff Git repos.
- `@modelcontextprotocol/server-fetch` - fetch URLs and convert HTML to clean markdown.
- `@modelcontextprotocol/server-postgres` - read-only Postgres queries.
- `@modelcontextprotocol/server-sequential-thinking` - step-by-step reasoning scaffold.

Our full opinionated shortlist lives in "271 MCP Servers Exist. These 5 Actually Make Claude Code Better." The filters we apply: actively maintained, one-command install, fills a gap Claude Code does not already cover, returns clean structured output.
The spec takes security seriously, and Streamable HTTP has three explicit warnings you should internalize before exposing a server.
From the transports spec: "Servers MUST validate the Origin header on all incoming connections to prevent DNS rebinding attacks. When running locally, servers SHOULD bind only to localhost (127.0.0.1) rather than all network interfaces (0.0.0.0). Servers SHOULD implement proper authentication for all connections."
Translation: do not run a public MCP server without auth, do not bind 0.0.0.0 on a dev machine, and validate Origin.
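As one concrete sketch of the Origin check, here is a small helper an HTTP MCP server could call before accepting a POST or opening an SSE stream. The function name and allowlist are my own, not from the spec:

```typescript
// Hypothetical helper: reject any Origin that does not resolve to localhost,
// which is the spec's suggested mitigation for DNS rebinding attacks.
function isAllowedOrigin(origin: string | undefined): boolean {
  if (!origin) return false; // no Origin header: treat as untrusted
  try {
    const { hostname } = new URL(origin);
    return hostname === "localhost" || hostname === "127.0.0.1";
  } catch {
    return false; // malformed Origin header
  }
}
```

The same server would also listen on 127.0.0.1 rather than 0.0.0.0, so nothing outside the machine can reach it in the first place.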
On the tools side, the spec requires client-side defences too: "Prompt for user confirmation on sensitive operations. Show tool inputs to the user before calling the server, to avoid malicious or accidental data exfiltration. Validate tool results before passing to LLM."
Practical rules for a beginner:

- Keep secrets in `env` blocks or environment variables, never in committed config.
- Scope access tightly: servers like `server-filesystem` let you whitelist directories.

The fastest path to writing your own server is the official TypeScript SDK.
```bash
npm init -y
npm install @modelcontextprotocol/sdk zod
```
A minimal server with one tool, built from the shape in the official quickstart:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "hello-mcp", version: "1.0.0" });

server.tool(
  "greet",
  "Return a friendly greeting",
  { name: z.string() },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}.` }],
  })
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
Run it once with `node server.js` to sanity check. Then register it with Claude Code:

```bash
claude mcp add --transport stdio hello -- node /absolute/path/to/server.js
```
Reload Claude Code, type /mcp, and hello should show up with one tool. Python, Kotlin, Java, C#, Rust, and Swift SDKs are all listed on modelcontextprotocol.io if TypeScript is not your stack.
For deeper dives on building servers, see the official SDK quickstart at modelcontextprotocol.io/quickstart/server and the concepts docs linked in the references below.
A quick reality check, because misconceptions spread fast.
If you are new to MCP, the starter loop looks like this:

1. Install a first server: run `claude mcp add --transport stdio filesystem -- npx -y @modelcontextprotocol/server-filesystem`, or paste the Cursor config above.
2. Check it is live: type `/mcp` (Claude) or open the MCP panel (Cursor) and confirm the server is connected.

That is the entire loop. Everything else - custom servers, remote deployments, prompts, resources - is the same pattern scaled up.
If you want a curated shortlist, read our 271 MCP Servers post. If you want to browse the full ecosystem, the MCP Directory at mcp.developersdigest.tech is kept current.
USB-C took a decade to win. MCP took about eighteen months. That tells you exactly how starved the AI tooling space was for a standard.