Model Context Protocol (MCP) is the standard protocol for connecting AI coding tools to external data sources and services. You configure a server, and your agent gets access to databases, APIs, browsers, and anything else that exposes an MCP interface.
The ecosystem has exploded. There are hundreds of MCP servers available. Most are noise. This list covers the 15 that actually matter - the ones that make Claude Code, Cursor, and other AI coding tools significantly more useful for real development work.
Every server below includes a working configuration you can paste directly into your settings file. For an interactive way to build your config, use the MCP Config Generator.
Read, write, search, and manage files across specified directories. This is the most fundamental MCP server. Without it, your agent is blind to anything outside the current project.
{
  "filesystem": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/you/projects",
      "/Users/you/docs"
    ]
  }
}
The server restricts access to the directories you pass as arguments. Your agent cannot read or write anywhere else. This is a security boundary - pass only the paths the agent actually needs.
Why it matters: Agents that can access your notes, documentation, and other projects alongside your code produce dramatically better output. Context is everything.
Full GitHub integration. Search repos, read and create issues, open and review PRs, manage branches, comment on code reviews. This is the second server most developers install after filesystem.
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
    }
  }
}
Scope your token to what the agent needs. For read-only work on public repos, the public_repo scope is enough; reading private repos and creating issues or PRs requires the full repo scope. Better still, use a fine-grained token limited to the specific repositories the agent touches. Do not hand over a token with admin permissions.
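If you would rather not hardcode the token in a settings file, some clients expand environment variable references in MCP config - Claude Code, for example, supports `${VAR}` expansion in `.mcp.json`. Check your client's docs before relying on it. A sketch:

```json
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
    }
  }
}
```

The token then lives in your shell environment instead of a file you might accidentally commit.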
Why it matters: "Review all open PRs and summarize the status of each" becomes a single prompt instead of 20 minutes of context-switching. The agent reads diffs, comments, and CI results in one pass.
Direct database access for querying tables, inspecting schemas, and running analytical queries. The server enforces read-only access by default - the agent runs SELECT and EXPLAIN ANALYZE but cannot modify data.
{
  "postgres": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-postgres",
      "postgresql://user:pass@localhost:5432/mydb"
    ]
  }
}
Point this at a read replica if you are connecting to production. The agent can run expensive analytical queries, and you do not want those hitting your primary.
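Concretely, that means pointing the connection string at the replica host with a read-only role. A sketch - the hostname, role, and password here are placeholders:

```json
{
  "postgres": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-postgres",
      "postgresql://readonly_agent:pass@replica.internal:5432/mydb"
    ]
  }
}
```

A dedicated read-only role also gives you an audit trail: every query the agent runs shows up under that user in your database logs.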
Why it matters: "How many users signed up this week compared to last week?" gets answered in seconds. The agent writes the SQL, executes it, and interprets the results. No context-switching to a database client.
Browser automation with full page interaction. Navigate to URLs, click elements, fill forms, take screenshots, and read page content. This is the upgrade from Puppeteer - Playwright handles modern web apps with better reliability.
{
  "playwright": {
    "command": "npx",
    "args": ["-y", "@playwright/mcp@latest", "--headless"]
  }
}
The agent gets a headless Chromium instance. It can navigate your deployed app, test user flows, capture visual regressions, and scrape documentation pages. Pair it with screenshot-based debugging for fast QA cycles.
Why it matters: Your agent can visually verify its own changes. "Deploy this, open the staging URL, and confirm the new dashboard renders correctly" - all handled autonomously.
Connect your agent to Slack for reading messages, searching channels, and posting updates. Requires a Slack app with bot token scopes.
{
  "slack": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-slack"],
    "env": {
      "SLACK_BOT_TOKEN": "xoxb-your-bot-token",
      "SLACK_TEAM_ID": "T01234567"
    }
  }
}
Minimum scopes needed: channels:read, channels:history, chat:write. Set these in the Slack App dashboard under OAuth and Permissions.
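If you create the Slack app from a manifest instead of clicking through the dashboard, the scopes live under oauth_config. A fragment, assuming the JSON manifest format - a complete manifest also needs display_information and settings blocks:

```json
{
  "oauth_config": {
    "scopes": {
      "bot": ["channels:read", "channels:history", "chat:write"]
    }
  }
}
```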
Why it matters: "Summarize the engineering discussion from today and post the action items to #standup." The agent reads thread context, extracts decisions, and writes a clean summary - work that usually falls through the cracks.
A persistent knowledge graph that stores entities and relationships across sessions. The agent can create facts, define connections between concepts, and recall them in future conversations.
{
  "memory": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-memory"]
  }
}
The graph persists to a local file. The agent stores observations like "Project X uses React and Convex" and "The auth service depends on Clerk." Next session, it remembers.
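If you want control over where that file lives - per-project, or inside a synced folder - the reference memory server reads a MEMORY_FILE_PATH environment variable (check the server's README; the path below is a placeholder):

```json
{
  "memory": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-memory"],
    "env": {
      "MEMORY_FILE_PATH": "/Users/you/.mcp-memory/graph.json"
    }
  }
}
```

Pointing different projects at different files keeps one project's facts from leaking into another's context.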
Why it matters: AI coding sessions are ephemeral. Everything the agent learns disappears when the session ends. Memory fixes this. The agent builds up project knowledge over time instead of starting from zero every session.
Web search from inside your agent session. The agent can search the web, read results, and incorporate current information into its responses without leaving the terminal.
{
  "brave-search": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-brave-search"],
    "env": {
      "BRAVE_API_KEY": "your-brave-api-key"
    }
  }
}
Get a free API key from the Brave Search API dashboard. The free tier is generous enough for development use.
Why it matters: "Find the latest release notes for Next.js and check if any breaking changes affect our project." The agent searches, reads, and applies current information - not stale training data.
Make HTTP requests and read web pages. Simpler than a full browser server, but faster and lighter. The agent can call APIs, download content, and parse responses.
{
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch"]
  }
}
No API key required. The server makes standard HTTP requests and returns the response body. It handles HTML pages by extracting readable text content.
Why it matters: Quick API testing, reading documentation pages, and fetching remote configurations - all without opening a browser or writing curl commands. Lightweight and fast for tasks that do not need full browser rendering.
A structured reasoning server that helps the agent break down complex problems into explicit steps. It provides a thinking framework that prevents the model from jumping to conclusions.
{
  "sequential-thinking": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
  }
}
The agent calls this server when it encounters problems that need multi-step reasoning - architectural decisions, debugging complex issues, or planning large refactors. Each step is recorded and can be revised.
Why it matters: Complex problems trip up agents that try to solve everything in one shot. Sequential thinking forces the agent to reason step-by-step, catch errors early, and revise its approach. The quality of output on hard problems improves noticeably.
Connect your agent to Sentry for reading error reports, stack traces, and issue metadata. The agent can investigate production errors, identify patterns, and suggest fixes based on real crash data.
{
  "sentry": {
    "command": "npx",
    "args": ["-y", "@anthropic-ai/mcp-server-sentry"],
    "env": {
      "SENTRY_AUTH_TOKEN": "your-sentry-token",
      "SENTRY_ORG": "your-org"
    }
  }
}
Create an internal integration token in Sentry with read access to issues, events, and projects.
Why it matters: "What errors spiked after yesterday's deploy?" The agent pulls the stack traces, cross-references with your recent commits, and identifies the likely cause. Debug production issues from your coding session instead of switching to the Sentry dashboard.
Issue tracking integration for teams using Linear. The agent can read issues, create new ones, update status, add comments, and query project boards.
{
  "linear": {
    "command": "npx",
    "args": ["-y", "@anthropic-ai/mcp-server-linear"],
    "env": {
      "LINEAR_API_KEY": "lin_api_your_key_here"
    }
  }
}
Generate an API key from Linear Settings under API.
Why it matters: "Create a bug report for the auth token refresh issue I just fixed, link it to the current sprint, and mark it as done." The agent handles the project management busywork while you keep coding.
Web scraping and crawling that converts pages into clean markdown. Unlike raw fetch, Firecrawl handles JavaScript-rendered pages, removes boilerplate, and returns structured content.
{
  "firecrawl": {
    "command": "npx",
    "args": ["-y", "firecrawl-mcp"],
    "env": {
      "FIRECRAWL_API_KEY": "fc-your-key-here"
    }
  }
}
Firecrawl is a paid service with a free tier. The agent can scrape documentation sites, competitor pages, or any web content and get clean, parseable text.
Why it matters: When the agent needs to read a documentation page that relies on client-side rendering, basic fetch fails. Firecrawl renders the page and extracts the real content. Essential for research-heavy workflows.
Sandboxed code execution in the cloud. The agent can run arbitrary code - Python, JavaScript, Bash - in an isolated environment without touching your local machine.
{
  "e2b": {
    "command": "npx",
    "args": ["-y", "@e2b/mcp-server"],
    "env": {
      "E2B_API_KEY": "e2b_your_key_here"
    }
  }
}
E2B sandboxes spin up in under a second and run for up to 24 hours. Each sandbox is a full Linux VM with network access.
Why it matters: Testing code without risk. The agent can run experiments, install packages, and execute scripts in a throwaway environment. If it breaks something, your local machine is untouched. Critical for agents working on infrastructure or deployment scripts.
Read and write Notion pages and databases. The agent can search your workspace, read page content, create new pages, and update database entries.
{
  "notion": {
    "command": "npx",
    "args": ["-y", "@anthropic-ai/mcp-server-notion"],
    "env": {
      "NOTION_API_KEY": "ntn_your_integration_key"
    }
  }
}
Create an internal integration in Notion Settings under Connections. Share the specific pages and databases you want the agent to access.
Why it matters: Teams that use Notion for documentation, specs, and project planning can give their agent direct access to that context. "Read the PRD for the auth redesign and implement the first phase" - the agent reads the spec from Notion and starts coding.
Database, auth, storage, and edge functions - all accessible through a single MCP server. The agent can query your Supabase database, manage auth users, read and write to storage buckets, and inspect edge function logs.
{
  "supabase": {
    "command": "npx",
    "args": ["-y", "@anthropic-ai/mcp-server-supabase"],
    "env": {
      "SUPABASE_URL": "https://your-project.supabase.co",
      "SUPABASE_SERVICE_ROLE_KEY": "eyJ..."
    }
  }
}
Use the service role key for full access, or an anon key for restricted access. The service role key bypasses Row Level Security, so handle it carefully.
Why it matters: If your stack runs on Supabase, this server gives the agent complete visibility into your backend. It can debug auth issues, query data, and inspect storage - all from the same coding session.
MCP servers are configured in a JSON settings file. The format is the same across tools.
Add servers to a .mcp.json file in your project root (checked into version control for team-wide access), or register them globally with claude mcp add -s user:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token"
      }
    },
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
Restart Claude Code after changing the config. It discovers servers on startup and logs which tools are available.
Cursor reads MCP configuration from ~/.cursor/mcp.json. The format is identical to Claude Code - you can copy server entries between the two without changes. See our Cursor guide for more details.
The fastest way to build your MCP configuration is the MCP Config Generator. Select the servers you need, fill in your credentials, and it outputs the JSON ready to paste into your settings file.
Do not install all 15. Start with the two or three that match your daily workflow and add more as you find concrete use cases.
Every developer needs: Filesystem and GitHub. These cover the most common operations and require minimal setup.
Backend developers add: Postgres (or Supabase if that is your stack) and Sentry. Database access and error monitoring are the highest-leverage additions for API work.
Full-stack developers add: Playwright for visual testing, Fetch for API exploration, and Firecrawl for reading documentation.
Team leads add: Slack and Linear. Project management and communication from your coding session eliminates context-switching.
Power users add: Memory for persistent context across sessions, Sequential Thinking for complex problem decomposition, and E2B for sandboxed experimentation.
Pair your MCP configuration with a CLAUDE.md file that tells the agent how to use your specific servers. "Use the postgres MCP to answer questions about user data. Use the GitHub MCP to create issues, never manually." This gives the agent intent, not just access.
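A minimal CLAUDE.md along those lines - the server names must match the keys in your settings file, and these rules are just illustrative:

```markdown
# Project conventions

## MCP usage
- Use the `postgres` server for any question about user or billing data.
  It is read-only; never suggest writes through it.
- Use the `github` server to create issues and open PRs. Do not paste
  issue text for me to file manually.
- At the end of each session, store durable facts about this project's
  architecture in the `memory` server.
```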
An MCP server is a process that exposes tools, resources, and prompts to AI agents through the Model Context Protocol. It runs locally or remotely and communicates with AI clients like Claude Code and Cursor using a standard JSON-RPC interface. Each server provides specific capabilities - filesystem access, database queries, API integration - that the agent can call during its reasoning loop.
Start with Filesystem and GitHub. They cover the most common use cases - reading files across projects and managing GitHub repos - and require minimal setup. Add database access (Postgres or Supabase) and Brave Search as your next two. Build up from there based on what you actually use daily.
MCP is supported by Claude Code, Claude Desktop, Cursor, Windsurf, and a growing list of AI coding tools. The configuration format is standardized, so a server configured for Claude Code works with Cursor without changes. Not all tools support every feature of the protocol, but core tool calling works consistently across clients.
MCP servers run with the permissions you grant them. Security depends on your configuration. Use least-privilege tokens - give the GitHub server a token scoped to specific repos, not your entire account. Give database servers read-only connection strings. Restrict filesystem access to specific directories. AI clients show you which tools the agent calls before executing them, so you can review destructive operations.
There is no hard limit in the protocol. Practically, each server is a separate process that consumes memory and CPU. Most developers run 3 to 5 servers concurrently without issues. If you need more, consider which servers you actually use in every session versus which ones you could enable on demand.