The Bitter Lesson: How We Build and What We Build Is About to Change


The Core Principle

General methods that leverage computation are ultimately the most effective—and by a large margin.

This is the essence of Rich Sutton's "The Bitter Lesson," published in 2019 but increasingly relevant as we enter 2026. The lesson is bitter because it directly contradicts our instinct to encode human knowledge into systems. We want to impart our expertise, design elegant architectures, and create frameworks that reflect how we think. But history shows this approach loses in the end.

What History Teaches Us

In 1997, Deep Blue defeated Kasparov through brute-force search. In 2016, AlphaGo defeated Lee Sedol, one of the world's strongest Go players, through self-play and scale. The critical insight: once these systems reached human-level performance, they didn't stop. They kept improving, quickly surpassing any human capability in their domain.

The same pattern is emerging in software development. We've moved from GitHub Copilot's line-by-line completions in 2021, through multi-file editing tools like Cursor, to today's agent harnesses—Claude Code, Cody, Devin, and others. These systems can now run autonomously for hours, equipped with tools, memory, and iteration loops.

[Figure: Evolution of AI coding tools from autocomplete to autonomous agents]

The trajectory is clear. What feels like cutting-edge today will look like autocomplete in 2026.

Why Encoded Knowledge Fails

Encoding knowledge feels smart. You design a system that takes actions as you would take them. You impart your expertise through careful prompting, detailed instructions, and rigid frameworks. The system runs autonomously, and it feels like you've successfully automated your own thinking.

But this approach optimizes for what you already know. It constrains the system to your current understanding rather than letting it discover better solutions.

The alternative? Give agents general capabilities. Provide access to a computer, tools, and the ability to learn from data. Let them research, experiment, and build their own tooling. Just as AI agents can discover and integrate open-source libraries faster than any human, they can discover and create solutions we haven't considered.

Think of it like a self-driving car. You input the destination—get to the airport—and let the system figure out the route. Don't encode turn-by-turn directions. The agent with general methods and sufficient compute will find better paths than you could program.
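To make the contrast concrete, here is a deliberately minimal sketch of the "general capabilities" approach: a harness that exposes a tool registry and an iteration loop, and lets a planner decide each step. Everything here is illustrative; `scripted_planner` is a canned stand-in for what would be an LLM call in a real harness, and the tools are placeholders for file access, search, and code execution.

```python
from typing import Callable

# Tool registry: the general capabilities the harness exposes to the agent.
# These stubs stand in for real search / code-execution tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"results for '{query}'",
    "run_code": lambda src: f"executed: {src}",
}

def scripted_planner(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for the model: in a real harness this is an LLM call that
    sees the goal plus the transcript and chooses the next action."""
    if not history:
        return ("search", goal)
    if len(history) == 1:
        return ("run_code", "print('hello')")
    return ("done", "")

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    """Iterate plan -> act -> observe until the planner declares done.
    The harness encodes no domain steps; only the goal is fixed."""
    history: list[str] = []
    for _ in range(max_steps):
        action, arg = scripted_planner(goal, history)
        if action == "done":
            break
        observation = TOOLS[action](arg)
        history.append(f"{action}({arg!r}) -> {observation}")
    return history

transcript = agent_loop("get to the airport")
```

The point of the shape, not the stub: the harness fixes the destination (the goal) and the available capabilities, while the route (the sequence of tool calls) is chosen at runtime rather than encoded in advance.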

The Two Paths of 2026

Software development is splitting into two simultaneous transformations: how we build and what we build.

How We Build

The fastest-growing companies in tech are now in code generation. Cursor, Claude Code, Devin, Lovable, Bolt—these agentic systems are becoming the primary interface for development work. The pattern is consistent across platforms: heavy file operations, web search, code execution, and autonomous iteration.

[Figure: Agent harness architecture with tool access and memory systems]

The shift is from human-driven, top-down development to agent-centric workflows. Instead of designing architectures and steering agents through execution, developers are increasingly setting goals and letting agents determine implementation.

What We Build

The bigger change is in the nature of software itself. We're moving from no-code builders to agents writing bespoke software at the moment it's needed.

Consider an accounting system. Rather than building a monolithic application with predetermined workflows, you define the goals and outcomes. The agent determines the steps, validates its work, and constructs tools on demand. If it needs a specific calculation module or data transformation, it writes it. If it needs an API, it builds it.
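One way to picture "constructs tools on demand": when the agent lacks a helper, it emits source code for one and the harness registers it for later steps. This sketch is hypothetical; `generate_tool_source` returns canned output where a real system would ask the model, and the `exec` call assumes a trusted sandbox.

```python
from typing import Callable

# Registry of tools the agent has built for itself so far.
TOOLS: dict[str, Callable] = {}

def generate_tool_source(spec: str) -> str:
    """Stand-in for the model writing code to meet a spec. Canned output
    for the accounting example: a net-to-gross (VAT) calculator."""
    return (
        "def net_to_gross(net: float, vat_rate: float = 0.2) -> float:\n"
        "    return round(net * (1 + vat_rate), 2)\n"
    )

def build_tool(name: str, spec: str) -> None:
    """Compile the generated source and register the new tool.
    Assumes a trusted sandbox; real harnesses isolate execution."""
    namespace: dict = {}
    exec(generate_tool_source(spec), namespace)
    TOOLS[name] = namespace[name]

# The agent hits a missing capability mid-task and fills the gap itself.
build_tool("net_to_gross", "convert net amounts to gross with VAT")
gross = TOOLS["net_to_gross"](100.0)  # -> 120.0
```

The design point is that the tool set is open-ended: the registry grows as the task demands, instead of being fixed when the application was designed.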

This isn't speculative. The model generations released in the 12-18 months since Claude 3.5 Sonnet are already capable of reliable code generation and extended autonomous operation. The next era will feature agents writing tools for themselves and other agents.

[Figure: Agent-generated infrastructure and tool creation workflow]

The Inevitable Conclusion

This isn't preference or laziness. It's the arithmetic of scaling: in any domain where data and compute can be brought to bear, general methods at scale have consistently beaten encoded knowledge.

The 2026 shift flips the script on software architecture. Currently, humans design and agents build: we choose frameworks, design architectures, and course-correct the agent's approach along the way. The emerging model is agent-driven: agents decide they need a web application, build APIs as infrastructure, and provision resources dynamically.

Architecture will emerge from need rather than predetermined structure. Agents will become the infrastructure. The boundary between application and infrastructure will blur because the agent can generate both on demand.

Adaptation and Leverage

Change this rapid creates anxiety. But the developers who internalize these lessons—who shift from encoding knowledge to leveraging computation, from rigid frameworks to flexible agent capabilities—will have disproportionate leverage in what gets built over the coming years.

The bitter lesson isn't just about AI research. It's about how we work. Computation at scale wins. Agents that generate their own tools beat systems constrained by human foresight. And we're only at the beginning of what's possible.