TL;DR
The coding-agent workflow is maturing past giant hand-written prompts. The winning pattern in 2026 is a control stack: project rules, reusable skills, bounded sub-agents, and deterministic tools around the model.
The most useful coding-agent shift in 2026 is not a new model release. It is the industry's slow realization that giant prompts do not scale.
Hacker News has been circling this point for months. The threads around "Skills Officially Comes to Codex", OpenAI quietly adopting skills, and the broader control-layer discussion in "Why AI coding agents feel powerful at first, then become harder to control" all point in the same direction:
Prompting is not disappearing, but it is being demoted.
The better pattern is a stack: project rules for standing constraints, reusable skills for repeated methodology, bounded sub-agents for scoped work, and deterministic tools around the model.
That stack is much closer to how real software work already behaves.
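Concretely, that stack can live entirely in the repository. The layout below is illustrative, loosely following Claude Code's conventions; other agents use similar file-based structures:

```text
repo/
├── CLAUDE.md               # project rules: standing constraints and conventions
├── .mcp.json               # connectors: MCP servers the agent may call
└── .claude/
    ├── skills/
    │   └── deploy-debug/
    │       └── SKILL.md    # reusable method, loaded only when relevant
    └── agents/
        └── researcher.md   # bounded sub-agent with its own scope and tools
```

Nothing here requires a platform. It is files in a repo, reviewed like any other code.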
At small scale, prompting feels magical.
You write a careful request. The agent does something impressive. You refine it a little. Everything feels fast.
Then the codebase grows.
Now the same session has to remember far more than it can reliably hold.
That is when prompt-only workflows start to degrade.
The prompt gets longer. The same instructions get repeated every day. Constraints leak into task wording. One session behaves well, the next ignores the exact same preference. The system still looks capable, but it becomes inconsistent.
That inconsistency is what developers are actually complaining about when they say coding agents "feel harder to control" over time.
One reason this conversation gets muddled is that people keep comparing skills and MCP as if they are substitutes.
They are not.
MCP is mostly about connection: it gives the agent access to external tools and systems.
Skills are mostly about operating knowledge: they tell the agent how to use the capabilities it already has.
This is why the strongest HN comments on skills keep making the same point: a markdown-based skill can tell the agent how to properly use existing tools, including MCP tools, without forcing that behavior into the main prompt every time.
That is a big deal.
A good skill is not just a stored prompt. It is reusable method.
Skills are becoming a standard because they match the shape of repeated developer work.
Most useful coding work is not unique. It rhymes.
You keep doing variations of the same things: debugging deployments, checking documentation, running repo-specific tests, shipping releases.
Each of those tasks benefits from a preferred approach. Not just a desired output. An approach.
That is where skills outperform prompts.
Instead of restating the same instructions in every session, you keep the repeatable parts in a reusable unit.
That makes prompts shorter and easier to reason about.
One task might need a deployment-debugging skill plus a documentation-checking skill plus a repo-specific testing skill.
That is a more scalable model than trying to encode every possible combination into one giant system prompt.
The model does not need the full body of every workflow instruction at all times. It only needs to load the relevant one when the task calls for it.
That is one reason the "skills as lazy-loaded markdown" model keeps resonating with power users.
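The lazy-loading idea is simple enough to sketch. A minimal Python sketch (the keyword-matching heuristic and data shapes are assumptions for illustration, not any agent's actual implementation): only each skill's one-line description stays in the standing context, and a skill's full body is loaded only when the task calls for it.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str  # always visible to the model (cheap)
    body: str         # full markdown instructions (loaded on demand)

def relevant_skills(task: str, skills: list[Skill]) -> list[Skill]:
    """Naive keyword match standing in for the agent's real selection step."""
    words = task.lower().split()
    return [s for s in skills if any(w in s.description.lower() for w in words)]

skills = [
    Skill("deploy-debug", "Debug failed deployments", "1. Fetch logs first...\n"),
    Skill("doc-check", "Verify documentation links", "1. Crawl the docs...\n"),
]

# Only the matching skill's full body enters the context window.
loaded = relevant_skills("why did the latest deployments fail?", skills)
print([s.name for s in loaded])  # -> ['deploy-debug']
```

The point is the asymmetry: descriptions are always cheap to carry, bodies are paid for only when used.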
You do not need a separate prompt-engineering platform or a complex orchestration product to start. A markdown file in the repo is often enough.
That matters. The winning pattern in developer tooling is usually the one that ordinary teams can author and maintain without ceremony.
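For example, a repo-local skill might look like this. The frontmatter fields follow Claude Code's published SKILL.md convention; the content itself is invented for illustration:

```markdown
---
name: deploy-debug
description: Diagnose failed deployments for this repo. Use when a deploy or release job fails.
---

# Debugging failed deployments

1. Fetch the logs for the failed job before proposing any fix.
2. Check whether the failure is in build, migration, or health checks.
3. Never retry a failed migration automatically; surface it for review.
4. Summarize the root cause and proposed fix before editing config.
```

It is plain markdown, versioned and reviewed like any other file in the repo.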
The most useful framing I have seen recently is that agent features are not random bells and whistles. They are control layers.
Very roughly: prompts carry the immediate task, project rules hold standing constraints, skills carry reusable method, sub-agents bound scope, and deterministic tools handle the steps that should never be improvised.
When teams mix these layers up, things get messy.
Examples: standing conventions restated in every prompt instead of living in project rules, operational method crammed into tool configuration, or one giant session doing research and infrastructure changes at once.
That is why some teams feel like coding agents are chaotic while others are getting strong results from the same underlying models.
The better teams are not just "prompting better." They are building a better control stack.
This is not an anti-MCP argument.
MCP remains the right abstraction when the problem is connectivity: giving the agent standardized access to external systems.
If your agent needs to talk to GitHub, Linear, a database, a browser harness, or a deployment system, MCP is often the right connective tissue.
But MCP does not automatically tell the agent how to behave well with those tools.
That is why the sharpest recent HN critiques of "MCP everywhere" are also useful. Developers are noticing that connecting tools is not the same as teaching good operational judgment.
The connector layer is necessary. It is not sufficient.
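The gap is easy to see in code. A toy sketch (all names invented): a connector can expose a `deploy` capability, but nothing in the connection layer says "run the tests first." That judgment has to live somewhere, and a skill-like policy layer is one place to put it.

```python
# Capability layer: a connector exposes tools, with no opinion on ordering.
TOOLS = {
    "run_tests": lambda: "tests passed",
    "deploy": lambda: "deployed",
}

# Approach layer: a skill-like policy that encodes operational judgment.
DEPLOY_POLICY = ["run_tests", "deploy"]  # tests must precede deploy

def execute(requested: list[str]) -> list[str]:
    """Refuse tool sequences that violate the policy's required order."""
    if requested != DEPLOY_POLICY:
        raise ValueError(f"policy requires {DEPLOY_POLICY}, got {requested}")
    return [TOOLS[name]() for name in requested]

print(execute(["run_tests", "deploy"]))  # -> ['tests passed', 'deployed']
# execute(["deploy"]) raises: the connector allows it, the policy does not.
```

Both layers are needed; neither substitutes for the other.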
If you are using coding agents seriously, the practical next step is not "write a better master prompt."
It is to build the control stack, one layer at a time.
Project rules, coding conventions, and repeated operational workflows should live in repo-local files, not in copy-pasted prompts.
Start with the boring, high-frequency work: deployment debugging, documentation checks, repo-specific test runs.
Those are the areas where methodology matters most.
Use MCP or CLI tooling for capability. Use skills for approach. Do not try to jam both concerns into one layer.
The point of sub-agents is not novelty. It is blast-radius control.
If one agent is researching docs and another is patching infra config, that is often safer than one giant session doing both with one giant context pile.
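Blast-radius control is, at bottom, a permission boundary. A toy sketch in Python (the class and tool names are invented for illustration): each sub-agent carries an allowlist, so a runaway research session cannot touch infrastructure tools even if the model asks for them.

```python
class SubAgent:
    """Toy sub-agent wrapper: a tool allowlist bounds the blast radius."""

    def __init__(self, name: str, allowed_tools: set[str]):
        self.name = name
        self.allowed_tools = allowed_tools

    def call_tool(self, tool: str) -> str:
        # Enforce the boundary deterministically, outside the model.
        if tool not in self.allowed_tools:
            raise PermissionError(f"{self.name} may not use {tool!r}")
        return f"{self.name} ran {tool}"

researcher = SubAgent("docs-researcher", {"web_search", "read_file"})
patcher = SubAgent("infra-patcher", {"read_file", "edit_file", "run_terraform"})

print(researcher.call_tool("web_search"))  # fine: within scope
# researcher.call_tool("run_terraform") raises PermissionError:
# the research agent cannot modify infrastructure, by construction.
```

The check lives in deterministic code, not in a prompt the model might ignore.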
The goal is not to eliminate oversight. It is to move humans up the stack so they review intent, risk, and correctness instead of micromanaging every keystroke.
The prompt era is not over, but prompt maximalism is.
The emerging best practice is to treat coding agents less like chatbots and more like systems. Systems need structure. They need reusable knowledge. They need separation of concerns. They need bounded scopes and deterministic checks.
That is why skills are becoming more important.
Not because they are fashionable. Because they solve a real scaling problem in day-to-day agent use.
In 2026, the teams getting the most leverage from coding agents are not the teams writing the cleverest prompts.
They are the teams building the clearest control stack around the model.