1M Token Context - Claude Code
Extended context window for Opus and Sonnet on supported plans.
The 1M token context window lets Claude Code hold roughly a full mid-size codebase plus history in a single session - no compaction, no chunking.
What it does
Supported Opus and Sonnet models on qualifying plans offer up to a one-million-token context window. That's enough for a large repository's source, a long conversation history, and tool output without losing earlier turns. Compaction still exists as a safety net but rarely fires during normal work.
When to use it
- Large codebases you'd otherwise have to chunk.
- Long debugging sessions with lots of file reads.
- Tasks that require tying together evidence from many files at once.
- Agent runs where compaction loss would hurt correctness.
Gotchas
- More context costs more per turn, even with caching. Watch your spend on marathon sessions.
- Availability depends on your plan and the selected model. Check /status if you're unsure.
- Just because the window is big doesn't mean you should fill it. Focused reads still outperform broad ones.
Official docs: https://code.claude.com/docs/en/model-config.md#extended-context