
OpenClaw is the most starred project on GitHub. 247K stars and counting. The creator built a CLI-first architecture for AI agent orchestration. No MCPs. Not a single one.
Think about that. The most popular developer tool of 2026 looked at MCP servers and said "no thanks." It ships a CLI instead. So does Claude Code. So does Codex. So does the GitHub CLI.
This isn't a coincidence. It's a pattern.
CLIs are the better primitive for AI agents. Not MCPs. Not custom protocols. The command line interfaces developers have used for 40 years.
Here's the reasoning: the best proxy for what a computer should use is what both humans and computers already know how to use. No human uses an MCP. Every developer uses a CLI. When you need to find something, you grep. When you need to transform data, you pipe through sed or awk. When you need to interact with a service, you reach for its CLI.
AI agents should do the same thing.
This is where the token math gets brutal.
MCPs load everything into context. Want to search a codebase? The MCP reads files into the model's context window. Want to scrape a webpage? The entire page gets serialized and stuffed into tokens. For anything large, you need a sub-agent sitting between the orchestrator and the MCP just to manage the data flow.
CLIs interact with the file system directly. grep -r "pattern" ./src runs on your machine and returns only the matching lines. The model sees 10 lines instead of 10,000. curl fetches a URL and pipes it to jq to extract exactly what you need. The heavy lifting happens outside the context window.
```shell
# MCP approach: load the entire file into context, search in-model
# Cost: ~4,000 tokens for a typical source file

# CLI approach: search on disk, return only the matches
grep -rn "handleAuth" ./src --include="*.ts"
# Cost: ~50 tokens for the results
```
That's an 80x difference in token usage for a single search operation. Multiply that across an agent session with hundreds of tool calls and the gap is massive. CLIs keep the expensive context window lean. MCPs bloat it.
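To make the session-level math concrete, here's a back-of-the-envelope sketch. The per-call token counts come from the grep example above; the 300-call session length is an illustrative assumption, not a benchmark:

```python
# Rough per-call token costs from the grep example above (assumptions, not measurements)
MCP_TOKENS_PER_SEARCH = 4_000   # whole file serialized into context
CLI_TOKENS_PER_SEARCH = 50      # only the matching lines come back

CALLS_PER_SESSION = 300         # illustrative agent session length

mcp_total = MCP_TOKENS_PER_SEARCH * CALLS_PER_SESSION
cli_total = CLI_TOKENS_PER_SEARCH * CALLS_PER_SESSION

print(mcp_total)               # 1,200,000 tokens: blows past most context windows
print(cli_total)               # 15,000 tokens: still lean
print(mcp_total // cli_total)  # the 80x gap holds at session scale
```

The ratio is the same as the single-call case; what changes at session scale is that the MCP path stops fitting in the context window at all.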
Run --help on any CLI. That's your entire API, loaded in one command.
```shell
$ obsidian --help
Usage: obsidian <command> [options]

Commands:
  search    Search notes by content or title
  read      Read a note by path
  create    Create a new note
  list      List notes in a folder
  tags      List all tags
```
An AI agent reads that output and immediately knows every capability, every flag, every argument. No schema files. No protocol negotiation. No server discovery. One command, full understanding.
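That discovery step is trivial to automate. A minimal sketch, using Python's own `-h` flag as a portable stand-in for any CLI (the `obsidian` binary above is just an example and may not be installed):

```python
import subprocess
import sys

def discover_cli(argv: list[str]) -> str:
    """Run a CLI's help command and return its usage text.

    The captured text is what you'd hand to the model as the tool's
    'schema' -- no protocol negotiation, no server discovery.
    """
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    # Some tools print help to stderr; take whichever stream has it.
    return result.stdout or result.stderr

# The Python interpreter stands in for any CLI on PATH.
help_text = discover_cli([sys.executable, "-h"])
print(help_text.splitlines()[0])  # first line of the usage banner
```

One subprocess call, and the agent has the full interface as plain text.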
This is the part that matters most: CLIs are a universal interface. Humans use them. Scripts use them. AI agents use them. The same tool serves all three audiences with zero adaptation. When Obsidian released their CLI, it didn't just help developers. It made every AI coding harness on the planet capable of managing Obsidian vaults. When Google shipped a Workspace CLI, every agent gained the ability to create docs, manage sheets, and send emails.
MCPs require agent-specific integration. You build an MCP server, and it works with Claude. Maybe Cursor. Maybe a handful of others. A CLI works with everything.
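This is why a single generic "run a command" tool is enough for any harness. A hedged sketch of such a wrapper; the function name and the truncation limit are my own choices, not taken from any particular harness:

```python
import subprocess

def run_cli(argv: list[str], max_output_chars: int = 4_000) -> dict:
    """Generic tool an agent harness can expose: run any CLI, return the result.

    Truncating output keeps a chatty command from flooding the context
    window -- the agent can always re-run with a narrower query.
    """
    result = subprocess.run(argv, capture_output=True, text=True, timeout=60)
    return {
        "exit_code": result.returncode,
        "stdout": result.stdout[:max_output_chars],
        "stderr": result.stderr[:max_output_chars],
    }

# The same wrapper drives grep, git, curl, or any other tool on PATH.
print(run_cli(["echo", "hello from a CLI"])["stdout"])
```

Write this once and every CLI ever shipped becomes an agent tool, with no per-service integration.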
A CLI alone is just a tool. The magic happens when you combine three things: a skill file that tells the agent when to reach for the tool, the CLI itself, and a harness that wires them together. Here's the skill file:
```markdown
# .claude/skills/vault-management.md

When working with Obsidian notes:

- Use `obsidian search` to find relevant notes before creating new ones
- Use `obsidian read` to check existing content
- Use `obsidian create` with proper frontmatter
- Always use wikilinks for cross-references
```
The skill file is plain markdown. The CLI is a standard binary. The harness reads the skill, discovers the CLI via --help, and chains operations together. No protocol overhead. No server management. No authentication handshakes.
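The "no protocol overhead" claim is easy to see in code: everything the harness does at startup is string assembly. A sketch, with a hypothetical skill file and help text inlined to keep it self-contained:

```python
def build_system_prompt(skill_markdown: str, cli_help: str) -> str:
    """Combine a skill file and a CLI's --help output into agent context.

    That's the entire 'integration': two strings concatenated into the
    prompt. No server, no schema, no handshake.
    """
    return (
        "You can run shell commands.\n\n"
        "## Tool reference (from --help)\n" + cli_help + "\n\n"
        "## Usage guidance (from the skill file)\n" + skill_markdown
    )

# Hypothetical inputs standing in for a real skill file and help text.
skill = "- Use `notes search` before creating new notes\n"
help_text = "Usage: notes <command>\n  search  Find notes\n  create  Make a note\n"

prompt = build_system_prompt(skill, help_text)
print("--help" in prompt and "skill file" in prompt)  # True
```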
This combination lets you do things MCPs cannot. Write the search results to a file. Pipe one CLI's output into another. Use xargs to parallelize operations. Compose tools with standard Unix patterns that have been refined for decades.
```shell
# Find files containing TODO comments, collect their directories, run tests per directory
grep -rl "TODO" ./src --include="*.ts" | xargs -I {} dirname {} | sort -u | xargs -I {} npm test -- --testPathPattern={}
```
Try expressing that in MCP calls. You can't, not cleanly. CLIs compose. MCPs don't.
MCPs aren't useless. They solve real problems in specific areas:
Authentication flows. OAuth, API keys, token refresh. CLIs can handle auth, but MCP's standardized protocol makes multi-service auth cleaner when you need it.
Tool discovery. "What tools does this server offer?" MCP's schema-based discovery is elegant. CLIs require the agent to know the tool exists and run --help.
Structured context loading. When you need to tell an agent about available capabilities in a standardized format, MCP's tool descriptions work well.
But these are complementary features, not primary interfaces. Use MCPs for auth and discovery. Use CLIs for the actual work.
The trend is accelerating. Every major tool release in 2025 and 2026 points in the same direction:
OpenClaw (247K stars): CLI-first, zero MCPs. The most popular open-source project on GitHub chose the command line as its agent interface.
Claude Code: Anthropic's own coding agent is a CLI. Not a web app. Not an MCP server. A CLI you install with npm and run in your terminal.
Codex CLI: OpenAI built their coding agent as a CLI too. Two competing companies, same architectural choice.
Obsidian CLI: Millions of impressions on social when it launched. Developers immediately started wiring it into their agent workflows.
Google Workspace CLI: Same story. Millions of views. Instant adoption by agent harnesses everywhere.
The pattern is clear. The companies building the most successful AI tools aren't inventing new protocols. They're shipping CLIs.
If you're building a tool and wondering whether to create an MCP server or a CLI: build the CLI.
Your tool will work with every agent harness that exists today and every one that will exist tomorrow. It will work for humans who prefer the terminal. It will compose with other tools via pipes and subshells. It will be testable, scriptable, and debuggable with standard Unix tools.
MCPs are a layer you can add later if you need structured discovery or auth flows. But the CLI is the foundation.
The best AI agent tools aren't the ones we're inventing. They're the ones that have been sitting in our PATH for years. grep, git, curl, jq. Every CLI you've ever installed. The agent revolution doesn't need a new protocol. It needs access to what already works.
Run --help. That's the whole API.