Every AI agent needs to interact with the outside world. Read a file. Query a database. Call an API. Without a standard way to do this, every integration is custom glue code. You write a different adapter for every tool, every model, every framework.
Model Context Protocol (MCP) fixes this. It is an open protocol, created by Anthropic, that standardizes how AI models connect to external data sources and tools. Think of it as USB-C for AI integrations. One interface. Any tool. Any model.
Before MCP, connecting Claude to your Postgres database meant writing custom code. Connecting it to GitHub meant more custom code. Every new integration was a fresh engineering effort. MCP replaces all of that with a single protocol that any client and any server can speak.
MCP uses a client-server architecture built around three core concepts: tools (actions the model can invoke), resources (data the model can read), and prompts (reusable templates the user can select).
The flow is straightforward. Your AI application (the MCP client) connects to one or more MCP servers. Each server exposes tools and resources. The AI model decides which tools to call based on the user's request, and the client executes those calls against the server.
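On the wire, MCP is JSON-RPC 2.0. When the model decides to invoke a tool, the client sends a `tools/call` request and the server replies with a result whose content is a list of blocks. A sketch of the message shapes (the `id` and argument values here are illustrative):

```typescript
// A tools/call request as an MCP client sends it (JSON-RPC 2.0).
// The id and arguments are illustrative values.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "get-weather",
    arguments: { city: "Toronto" },
  },
};

// The matching response: a result carrying a list of content blocks.
const response = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    content: [{ type: "text", text: "Toronto: 4°C, Cloudy" }],
  },
};
```

Every interaction in the diagram below (discovery, tool calls, resource reads) is a request/response pair of this shape.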
```
User prompt
    ↓
AI Model (Claude, GPT, etc.)
    ↓
MCP Client
    ↓
┌─────────────┬─────────────┬─────────────┐
│ MCP Server  │ MCP Server  │ MCP Server  │
│ (Filesystem)│ (GitHub)    │ (Postgres)  │
└─────────────┴─────────────┴─────────────┘
```
The servers run locally or remotely. They communicate over stdio (local processes) or HTTP with Server-Sent Events (remote servers). The client handles discovery, capability negotiation, and message routing.
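For the stdio transport, each JSON-RPC message is serialized to a single line of JSON and delimited by newlines. The SDK's transport classes do this for you, but the framing is simple enough to sketch (`frameMessage` and `parseMessages` are illustrative helpers, not SDK APIs):

```typescript
// Newline-delimited JSON framing, as used by MCP's stdio transport.
// frameMessage/parseMessages are illustrative helpers, not SDK APIs.
function frameMessage(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

function parseMessages(buffer: string): object[] {
  return buffer
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

const wire = frameMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });
const [msg] = parseMessages(wire) as { method: string }[];
```

This is why any language with a JSON library and stdin/stdout can implement an MCP server; the TypeScript SDK simply packages the framing, routing, and typing for you.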
Anthropic maintains an official TypeScript SDK: @modelcontextprotocol/sdk. It gives you everything needed to build both MCP clients and servers.
Install it:
```shell
npm install @modelcontextprotocol/sdk
```

Here is a minimal MCP server that exposes a single tool. It takes a city name and returns the current weather:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

server.tool(
  "get-weather",
  "Get current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const response = await fetch(
      `https://api.weatherapi.com/v1/current.json?key=${process.env.API_KEY}&q=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [
        {
          type: "text",
          text: `${data.location.name}: ${data.current.temp_c}°C, ${data.current.condition.text}`,
        },
      ],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
That is a complete, working MCP server. The server.tool() call registers the tool with a name, description, Zod schema for input validation, and a handler function. The transport layer handles communication. Run it, and any MCP client can discover and call get-weather.
Connecting to an MCP server from your own application:
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["./weather-server.js"],
});

const client = new Client({
  name: "my-app",
  version: "1.0.0",
});

await client.connect(transport);

// List available tools
const { tools } = await client.listTools();
console.log("Available tools:", tools.map((t) => t.name));

// Call a tool
const result = await client.callTool({
  name: "get-weather",
  arguments: { city: "Toronto" },
});
console.log(result.content);
```
The client spawns the server as a child process, connects over stdio, discovers available tools, and calls them with typed arguments. Clean and predictable.
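One detail worth knowing: tool calls report failures in-band. Per the protocol, a tool result can carry an `isError` flag, with the error message delivered as ordinary content blocks rather than a thrown exception. A small hedged sketch of unwrapping results (`unwrapText` is my own helper name, not an SDK API):

```typescript
// Minimal shape of an MCP tool result, per the protocol: an optional
// isError flag plus a list of content blocks.
interface ToolResult {
  isError?: boolean;
  content: { type: string; text?: string }[];
}

// unwrapText is an illustrative helper, not an SDK API: it throws on
// isError results and returns the concatenated text blocks otherwise.
function unwrapText(result: ToolResult): string {
  const text = result.content
    .filter((b) => b.type === "text" && typeof b.text === "string")
    .map((b) => b.text)
    .join("\n");
  if (result.isError) {
    throw new Error(text || "Tool call failed");
  }
  return text;
}
```

Treating tool failures as data lets the model see the error message and retry with different arguments, instead of the whole agent loop crashing.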
The ecosystem already has production-ready servers for common integrations. Here are a few that matter:
Filesystem - Read, write, search, and manage files. Your AI agent gets access to project directories with configurable permissions.
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```
GitHub - Create issues, open PRs, search repos, manage branches. Uses your GitHub token for authentication.
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    }
  }
}
```
Postgres - Query your database directly. The AI can inspect schemas, run SELECT queries, and analyze data.
```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```
These servers drop into any MCP-compatible client. Claude Desktop, Claude Code, Cursor, Windsurf, and others all support the same configuration format.
The real power is building servers tailored to your stack. Here is a more complete example: an MCP server that wraps your application's API.
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "app-api-server",
  version: "1.0.0",
});

// Expose a tool for searching users
server.tool(
  "search-users",
  "Search users by name or email",
  {
    query: z.string().describe("Search term"),
    limit: z.number().optional().default(10).describe("Max results"),
  },
  async ({ query, limit }) => {
    const res = await fetch(
      `${process.env.API_URL}/users?q=${encodeURIComponent(query)}&limit=${limit}`,
      { headers: { Authorization: `Bearer ${process.env.API_TOKEN}` } }
    );
    const users = await res.json();
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(users, null, 2),
        },
      ],
    };
  }
);

// Expose a resource for reading app config
server.resource(
  "app-config",
  "config://app",
  async (uri) => {
    const config = await fetch(`${process.env.API_URL}/config`);
    const data = await config.json();
    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "application/json",
          text: JSON.stringify(data, null, 2),
        },
      ],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
This server exposes both a tool (search users) and a resource (app config). Your AI agent can now search your user base and read your app configuration, all through MCP.
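To use a custom server like this from Claude Desktop or any other MCP-compatible client, register it with the same config format as the official servers. A sketch, assuming the file above is compiled to `app-api-server.js` and the API URL and token are placeholders for your own values:

```json
{
  "mcpServers": {
    "app-api": {
      "command": "node",
      "args": ["./app-api-server.js"],
      "env": {
        "API_URL": "https://api.example.com",
        "API_TOKEN": "..."
      }
    }
  }
}
```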
MCP sits between your AI model and your infrastructure. It does not replace your API layer. It wraps it. Your existing REST endpoints, database connections, and file systems stay exactly where they are. MCP just gives your AI a standardized way to reach them.
For TypeScript developers, the pattern is consistent: define tools with Zod schemas, expose them over a transport, and let the SDK do the rest. The protocol handles discovery, authentication, error handling, and message formatting. You focus on what the tools do, not how they communicate.
If you are working with AI agents in TypeScript, MCP is worth adopting now. The ecosystem is growing fast. Anthropic, OpenAI, Google, and Microsoft all support it. The TypeScript SDK is well-maintained and the API is stable.
Start with the official servers. Add filesystem and GitHub access to your Claude setup. Then build a custom server for your most common workflow. Once you see an AI agent calling your own tools through a clean protocol, the value becomes obvious.
For a hands-on, interactive breakdown of MCP and how to build with it, check out the full course at subagent.developersdigest.tech/mcp.