
TL;DR
Most agent tool APIs are just REST endpoints with nicer names. Production agents need intent-shaped tools that compress workflows, reduce context, and return reviewable receipts.
The fastest way to make an agent worse is to give it too many tools.
That sounds backwards. Agents need tools. Tools are what make them agents instead of chatbots. But most tool surfaces are designed by copying an existing REST API:
getUser
listUsers
createTicket
updateTicket
attachFile
sendMessage
listMessages
searchMessages

That looks clean to the engineer who owns the API. It is often terrible for the agent.
An agent does not want your internal resource model. It wants a small set of actions that match user intent. Anthropic's writing on MCP production systems makes the same point from the platform side: tools should help agents complete real workflows, not mirror every endpoint one by one.
For the broader MCP map, read the complete MCP servers guide and the MCP server shortlist. This post is the product-design layer underneath both.
Every tool definition costs something.
It costs tokens in the prompt. It costs attention when the model decides what to call. It costs reliability when the agent has to chain five low-level calls correctly. It costs observability when the final result is scattered across intermediate tool outputs.
The failure mode is predictable: the agent burns context reading tool definitions, picks a plausible but wrong low-level call, or mis-chains a sequence that one workflow-shaped tool would have handled.
This is the tool menu tax. You pay it on every task, even when the task is simple.
The better tool is shaped like the job.
Bad:
searchSlack
getThread
summarizeThread
createLinearIssue
attachSlackLink
postReply
Better:
create_issue_from_slack_thread
The better tool can still call Slack, summarize the thread, create the issue, attach the source link, and post a reply. The difference is that the agent sees one workflow-shaped capability instead of six infrastructure-shaped endpoints.
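A minimal sketch of what that packaging could look like. The low-level helpers here are hypothetical stubs standing in for real Slack and Linear clients, and the shape is an illustration, not a reference implementation:

```python
# Hypothetical stubs for the underlying clients. In a real tool these
# would call the Slack and Linear APIs.
def fetch_thread(url: str) -> list[str]:
    return ["Customer cannot export invoices after plan downgrade."]

def summarize(messages: list[str]) -> str:
    return messages[0]

def create_linear_issue(title: str) -> dict:
    return {"key": "ENG-123", "url": "https://linear.app/acme/issue/ENG-123"}

def create_issue_from_slack_thread(thread_url: str) -> dict:
    """One intent-shaped capability instead of six endpoint-shaped calls."""
    messages = fetch_thread(thread_url)        # was: searchSlack + getThread
    summary = summarize(messages)              # was: summarizeThread
    issue = create_linear_issue(title=summary) # was: createLinearIssue
    # attachSlackLink and postReply would run here in the same way.
    return {
        "status": "created",
        "issueUrl": issue["url"],
        "sourceThread": thread_url,
        "summary": summary,
        "actions": [
            f"read {len(messages)} Slack messages",
            f"created Linear issue {issue['key']}",
        ],
    }
```

The agent still gets all six capabilities; it just sees one function whose name matches the job.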
The same pattern applies everywhere:
bad: listDeployments, getLogs, searchErrors, rollbackDeployment
good: diagnose_failed_deploy
bad: queryDatabase, getSchema, explainQuery, exportRows
good: investigate_empty_dashboard
bad: createBranch, editFile, runTests, openPullRequest
good: implement_issue_with_pr
You do not remove power. You package it at the right level.
A production agent tool should not only return text. It should return a receipt.
For example:
{
  "status": "created",
  "issueUrl": "https://linear.app/acme/issue/ENG-123",
  "sourceThread": "https://slack.com/archives/C123/p456",
  "summary": "Customer cannot export invoices after plan downgrade.",
  "actions": [
    "read 14 Slack messages",
    "created Linear issue ENG-123",
    "attached source thread",
    "posted confirmation reply"
  ]
}
That receipt gives the agent enough context to continue without dumping every Slack message into the model. It also gives the human something reviewable.
This is the same principle behind agent swarms needing receipts. Orchestration without reviewable outputs becomes theater quickly.
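One lightweight way to keep receipts consistent across tools is a shared type. The field names below mirror the JSON example above; the `TypedDict` and the validation rule are a sketch, not a standard:

```python
from typing import TypedDict

class Receipt(TypedDict):
    """Minimal receipt shape shared by every workflow tool."""
    status: str         # e.g. "created", "skipped", "failed"
    summary: str        # one-line human-readable outcome
    actions: list[str]  # audit trail of what the tool actually did

def is_reviewable(r: Receipt) -> bool:
    """Cheap guard: a receipt with no status or no actions is theater."""
    return bool(r["status"]) and len(r["actions"]) > 0
```

Enforcing a shape like this at the tool boundary means every result is both machine-continuable and human-auditable.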
This is not an argument against low-level tools entirely.
Thin tools are useful when the workflow is new, exploratory, or still changing shape.
But once a workflow repeats, promote it. The first time the agent creates an issue from a Slack thread, a low-level chain is fine. The tenth time, that chain should become a tool, a CLI command, or a skill.
That is how agent systems mature.
Start with user jobs, not API resources.
Ask what job the user is actually trying to finish, what a done result looks like, and what evidence the agent should hand back.
Then design the tool around that.
The right tool set is usually smaller than the API. A calendar API might expose 80 operations. The agent might need five:
find_meeting_time
schedule_meeting_from_thread
summarize_day
move_meeting_with_notice
prepare_meeting_brief

That is enough to do real work.
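A curated surface like that can be as simple as an explicit registry. The handlers below are hypothetical stubs; each would compose several raw calendar-API endpoints behind a single intent:

```python
# Two of the five intent-shaped handlers, as stubs. Each one would wrap
# multiple raw calendar-API calls in a real implementation.
def find_meeting_time(attendees: list[str], duration_min: int) -> str:
    return f"free slot for {len(attendees)} people, {duration_min} min"

def summarize_day(date: str) -> str:
    return f"summary of {date}"

# The agent-facing surface: a handful of intents, not ~80 endpoints.
AGENT_TOOLS = {
    "find_meeting_time": find_meeting_time,
    "summarize_day": summarize_day,
    # schedule_meeting_from_thread, move_meeting_with_notice,
    # prepare_meeting_brief would register here the same way.
}

def call_tool(name: str, **kwargs):
    """Dispatch by intent name; unknown names fail loudly."""
    if name not in AGENT_TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return AGENT_TOOLS[name](**kwargs)
```

The registry is the product decision: everything the agent can do is listed in one place, and nothing leaks in just because the API happens to expose it.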
Agents do not need every endpoint. They need the right affordances.
If your MCP server exposes your whole REST API, you probably built an integration, not an agent tool. The next step is product design: compress repeated workflows into intent-shaped tools, return receipts, and keep the raw endpoint surface available only when it actually helps.
One good tool beats ten endpoints because the agent is not paid to navigate your API. It is there to finish the job.