AI Tools Deep Dive
TL;DR
Codeburn is a terminal dashboard for tracking token spend across Claude Code and Cursor. It hit 3,400+ stars in its first week on GitHub. Here is what it shows, why people are reaching for it, and how it ties into the over-editing problem.
If you are on a Claude Max subscription, you have a cap and a usage bar. What you do not have is a breakdown. You cannot see which project ate the most tokens this week, which agent loop ran hot at 2am, or how many dollars of inference you would have paid for if you were on pay-as-you-go. The bar just creeps toward full and then resets.
Codeburn from Agent Seal is the first tool that tries to answer that question directly. It is a terminal UI that reads the local session logs from Claude Code and Cursor and renders them as a live dashboard of token spend, cost estimates, and per-project breakdowns. The repo hit 3,400+ stars in its first week on GitHub. That kind of number is usually a sign that a tool landed on a real, widely felt pain point.
This post is a look at what codeburn actually does, where the data comes from, and why it is suddenly the one tool a lot of Claude Code users wish they had installed three months ago.
Codeburn is a TUI, meaning it runs inside your terminal and renders panels of data that update as you work. It is not a hosted dashboard. There is no account, no login, no telemetry leaving your machine. It parses the session and usage files that Claude Code and Cursor already write to your local disk and surfaces them as one coherent view.
The main panels cover token spend over time, estimated cost in dollars, and breakdowns by project and by model.
None of this data is new. It has always been sitting on your disk. Codeburn is the first tool to make reading it trivial.
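To make that concrete, here is a minimal sketch of what "reading the logs" can look like. The field names and JSONL shape below are assumptions for illustration, not codeburn's actual parser or the exact schema Claude Code writes:

```python
import json
from collections import defaultdict

def sum_usage(jsonl_lines):
    """Sum input/output tokens per model from JSONL log lines.

    Assumes (hypothetically) that each line is a JSON object with a
    message.usage block, similar to API usage reporting.
    """
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for line in jsonl_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than crash
        message = entry.get("message", {})
        usage = message.get("usage")
        if not usage:
            continue
        model = message.get("model", "unknown")
        totals[model]["input_tokens"] += usage.get("input_tokens", 0)
        totals[model]["output_tokens"] += usage.get("output_tokens", 0)
    return dict(totals)

# In-memory sample standing in for lines read off disk
sample = [
    '{"message": {"model": "claude-sonnet", "usage": {"input_tokens": 1200, "output_tokens": 300}}}',
    '{"message": {"model": "claude-sonnet", "usage": {"input_tokens": 800, "output_tokens": 150}}}',
    'not json',
]
print(sum_usage(sample))
```

The whole job is aggregation over files that already exist, which is why a viewer like this can be local-only with no telemetry.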
A week-one star count in the thousands is rare. It usually means the maintainer either has a large audience already, or the tool solves a problem that a lot of people were actively Googling for. Codeburn is closer to the second case. The Claude Code subreddit and developer Twitter have been full of "where is my Max subscription actually going" posts for months. Anthropic's usage page shows you a bar. It does not show you the breakdown.
There is a second reason codeburn resonates, and it connects directly to the over-editing essay that is making the rounds right now. When models make fifty-line diffs for one-character bugs, every one of those extra lines is tokens. When an agent loops on a test that is failing for environmental reasons and re-reads the same files twenty times, that is tokens. When Claude Code opens a file you did not ask it to open because it wants to "understand the context," that is tokens.
The over-editing post mentions "$50 of tokens burned" on what should have been a one-line fix. That number is not theoretical. It is exactly the kind of number codeburn is built to surface. Before codeburn, you could feel that a session went long. You could not point at a line item that said "this project, this day, this model, this many dollars." Now you can.
It is worth being honest about the limits.
Codeburn is a viewer, not a controller. It does not stop a runaway session, throttle your agent, or alert you when you cross a threshold. If Claude Code is in a loop at 3am, codeburn will show you the damage after the fact. It will not intervene. That is a feature a competing tool or a future version could add, but it is not here today.
Codeburn also relies on the structure of local log files that Anthropic and Cursor can change without notice. If either vendor reorganizes their session format, the tool will need an update to keep parsing correctly. This is the usual tradeoff for any tool built on top of logs it does not own. The project is active enough that this will probably get fixed quickly when it happens, but it is a real dependency.
Cost estimates are also exactly that: estimates. Anthropic's per-token rates can change, and the tool needs to keep up. The dollar numbers codeburn shows are useful as signal, not as an audit.
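The estimate itself is simple arithmetic over a rate table, which is exactly why it drifts: the table has to be maintained by hand. A sketch, with placeholder rates that are not current Anthropic pricing:

```python
# Placeholder rates for illustration only -- not real pricing.
# A real tool must keep this table in sync with the vendor's price list.
RATES_PER_MTOK = {
    "example-model": {"input": 3.00, "output": 15.00},  # USD per 1M tokens
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate dollar cost of a session from token counts."""
    rates = RATES_PER_MTOK[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# 2M input + 200k output tokens at the placeholder rates above
print(round(estimate_cost("example-model", 2_000_000, 200_000), 2))
```

If the vendor changes a rate and the table lags, every dollar figure downstream is off by the same factor, which is the sense in which the numbers are signal rather than audit.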
If you have a Claude Max subscription and you have ever wondered where the hours went, install it. The feedback loop of seeing your spend in a TUI next to your editor is small in effort and large in payoff. You start noticing which kinds of prompts are cheap and which are expensive. You start noticing when an agent goes off the rails and eats ten thousand tokens re-reading the same three files. Awareness is the first step toward changing the behavior.
If you are on a team, the case is stronger. Shared projects with multiple engineers using Claude Code benefit from the per-project view. You can see which repos are heavy users, which are efficient, and have a concrete starting point for a conversation about agent discipline.
If you are pay-as-you-go, codeburn is closer to necessary. The cost panel is not a what-if anymore. It is the actual invoice forming in real time.
What codeburn represents is worth naming. We are moving from the era of "AI coding tools exist" into the era of "AI coding tools have an observability problem." Models are fast. Models loop. Models over-edit. Models read files they do not need to read. All of this shows up on your token bill, and for a long time the bill has been a black box.
Tools like codeburn are the first wave of making that box transparent. The next wave will probably be alerting, throttling, and policy. Team admins will want to set budgets per project. Solo developers will want a hard stop when a session crosses a threshold. The building blocks are the same log files codeburn is already reading.
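That next wave does not require much on top of the aggregation a log reader already does. A hypothetical hard-stop check might look like this; the budget figure and thresholds are invented for illustration:

```python
# Hypothetical budget policy layered on top of per-day spend totals
# computed from the same local log files a viewer already parses.
DAILY_BUDGET_USD = 25.00

def check_budget(spend_today_usd, budget=DAILY_BUDGET_USD):
    """Return an action; a real tool might pause the agent or fire an alert."""
    if spend_today_usd >= budget:
        return "stop"
    if spend_today_usd >= 0.8 * budget:
        return "warn"
    return "ok"

print(check_budget(12.50))   # well under budget
print(check_budget(21.00))   # past 80% of budget
print(check_budget(26.75))   # over budget
```

The hard part is not this check; it is the intervention side, actually pausing a running agent, which is why viewers shipped first.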
For now, install it. Watch the numbers for a week. You will learn more about your own workflow than any productivity article can teach you.
Codeburn is on GitHub at github.com/getagentseal/codeburn.