
I ship production TypeScript every day using four core tools. Everything else feeds into or around them.
Claude Code is the primary coding agent. It runs in the terminal, reads my entire codebase, writes and edits files, runs tests, and commits. I use the Max plan, which gives me access to the best models Anthropic ships. Most of my coding happens here.
Cursor handles visual work. When I need to see a diff side by side, review a complex UI change, or make quick edits across scattered files, Cursor's interface is faster than reading terminal output. I use it as a review layer, not a primary authoring tool.
Obsidian is the knowledge base. Every project has notes. Every video has research files, scripts, and production assets. Daily journals track what I worked on. The vault is the single source of truth for everything that is not code.
Vercel deploys everything. Push to main, it builds. No CI/CD configuration, no Docker files, no server management. The deploy step is invisible, which is exactly what I want.
There are secondary tools in the mix: Firecrawl for web scraping, Screen Studio for screen recordings, Descript for video editing, Wispr Flow for voice dictation. But the core four handle 90% of the daily workflow. You can see the full list on my uses page or browse the developer toolkit.

The day starts before I sit down. An automated briefing system runs at 6 AM, pulls data from multiple sources, and sends me an HTML email with everything I need to know.
The briefing checks overnight CI runs, unread email, GitHub activity, and the day's calendar.
By the time I open my laptop, I already know what needs attention. Failed CI runs get fixed first. Sponsor emails get flagged for response. Everything else goes into the day's plan.
I open Obsidian, review the kanban board, and pick 2-3 priorities. This takes five minutes. The briefing system removed the 30-minute morning ritual of checking email, Slack, GitHub, and calendars manually.
The entire system is a TypeScript project that runs as a cron job. It gathers data in parallel from each source, formats it into a clean email template, and sends it via Gmail API. Building it took an afternoon. It saves me 30 minutes every morning.
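The gather-in-parallel step can be sketched like this. The section names and fetcher bodies are stand-ins for illustration; the real system pulls from live APIs and sends the result via the Gmail API.

```typescript
// Sketch of the morning-briefing aggregation step.
// Section titles and fetcher contents are hypothetical placeholders.
type Section = { title: string; html: string };

async function fetchSection(
  title: string,
  fetcher: () => Promise<string>
): Promise<Section> {
  try {
    return { title, html: await fetcher() };
  } catch (err) {
    // A failing source degrades to an error note instead of killing the email.
    return { title, html: `<p>Failed to load: ${String(err)}</p>` };
  }
}

async function buildBriefing(): Promise<string> {
  // Gather every source concurrently; one slow source never blocks the rest.
  const sections = await Promise.all([
    fetchSection("CI status", async () => "<p>All green</p>"),
    fetchSection("Inbox", async () => "<p>2 sponsor emails flagged</p>"),
    fetchSection("Calendar", async () => "<p>No meetings before noon</p>"),
  ]);
  // Format into a simple HTML body, one heading per source.
  return sections.map((s) => `<h2>${s.title}</h2>${s.html}`).join("\n");
}
```

The important design choice is the per-source try/catch: a briefing with one broken section is still worth sending.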
Every coding session follows the same five-step pattern. It sounds rigid, but the structure is what makes it fast.
Before writing any code, I use Claude Code's plan mode. I describe what I want to build in plain language, and the agent outlines the approach: which files to create, which to modify, what the data flow looks like, and what edge cases to handle.
This step catches architectural mistakes before they become expensive. If the plan includes something wrong, like reaching for a library I do not use or proposing a database schema that conflicts with the existing one, I correct it here. Correcting a plan costs nothing. Correcting implemented code costs time.
The plan also primes the context window. Claude Code now has the full picture of what we are building, why, and how. That context carries through the entire session.
With the plan approved, I let Claude Code work. It creates files, writes functions, installs dependencies, and wires components together. For straightforward features, this runs autonomously for minutes at a time.
The key insight here is trust. Early on, I made the mistake of hovering over every line the agent wrote. Now I let it finish, then review. Interrupting mid-task breaks the agent's chain of reasoning and produces worse results than letting it complete and iterating.
Subagents make this more powerful. For larger tasks, Claude Code spawns specialized workers: one for the frontend components, one for the API routes, one for the database schema. Each works in its own context, focused on its own domain. The results merge cleanly because the plan defined clear boundaries.
This is where Cursor earns its place. I open the project, review the diffs visually, and check for issues the agent might have missed. Naming conventions, import ordering, component structure, accessibility attributes.
I also run the app locally and click through the new feature manually. AI agents are excellent at generating code that compiles. They are less reliable at generating code that feels right in the browser. Spacing, transitions, loading states, error boundaries: these need human eyes.
If something looks off, I either fix it directly in Cursor or go back to Claude Code with a targeted correction. "The button padding is wrong" or "this query runs on every render, memoize it."
Run the test suite. Fix failures. This is straightforward but critical.
Claude Code handles test fixes well. I paste the error output, and it traces the failure back to the root cause. Most test failures after an agent-built feature come from one of two sources: the agent used a mock that does not match the real implementation, or the agent changed a function signature without updating all callers.
For projects without existing tests, I ask Claude Code to write them as part of the build step. The plan should include "write tests for X" as a discrete task.
Commit with a meaningful message. Push to main. Vercel handles the rest.
I commit after every meaningful change, not at the end of a session. Small, frequent commits make rollbacks trivial and make the git log useful as documentation.
```shell
git add -A && git commit -m "add user preferences panel with theme selector"
git push
```
Vercel picks up the push, builds the project, runs the checks, and deploys to production. The feedback loop from "code written" to "live in production" is under two minutes.

The single biggest multiplier in my workflow is running agents in parallel. When a task has independent parts, I do not do them sequentially.
Here is a concrete example. I need to add three new blog posts to this site. Each post is independent. They do not share data, templates, or logic. In a sequential workflow, I would write one, then the next, then the next. With parallel agents, I spawn three workers and all three posts get written simultaneously.
The pattern scales. When I run a site audit, I spawn four agents in parallel: one checks design consistency, one checks content gaps, one checks for broken links, and one audits SEO metadata. Each returns a report. I merge them into a single action list.
For a new feature that touches the database, the API, and the frontend, I define clear interfaces first, then spawn agents for each layer. The database agent creates the schema and migrations. The API agent builds the endpoints against the schema types. The frontend agent builds the UI against the API types. Because the interfaces are defined upfront, the pieces snap together.
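The interfaces-first pattern above can be sketched in a few lines. The `runAgent` helper is a hypothetical stand-in for dispatching a task to a coding agent; the point is that the shared types are fixed before any worker starts.

```typescript
// Shared contracts, defined up front so parallel work snaps together.
interface UserRow {
  id: string;
  email: string;
}
interface UserApi {
  getUser(id: string): Promise<UserRow>;
}

type AgentTask = { name: string; prompt: string };

// Hypothetical dispatcher: a real version would hand the prompt to a
// Claude Code subagent and await its result.
async function runAgent(task: AgentTask): Promise<string> {
  return `${task.name}: done`;
}

async function buildFeature(): Promise<string[]> {
  // All three layers run concurrently because each builds against the
  // interfaces above, not against another agent's in-progress output.
  return Promise.all([
    runAgent({ name: "db", prompt: "create users schema matching UserRow" }),
    runAgent({ name: "api", prompt: "implement UserApi against the schema" }),
    runAgent({ name: "ui", prompt: "build the profile page against UserApi" }),
  ]);
}
```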
The constraint is independence. If task B depends on the output of task A, they cannot run in parallel. But most development work decomposes into more independent pieces than people realize. A landing page and a dashboard page. Three API endpoints for different resources. Documentation, tests, and implementation.
I routinely spawn 5-10 agents for larger tasks. The wall clock time drops dramatically. What used to take a full afternoon finishes in an hour.
AI coding tools are only as good as the context you give them. I have a system for this.
Every project has a CLAUDE.md file at the root. This is the first thing Claude Code reads when it starts a session. It contains the conventions, commands, and constraints the agent needs before touching code.
Writing this file before writing code is the single highest-leverage activity in an AI-assisted workflow. Ten minutes of CLAUDE.md saves hours of corrections. Try the CLAUDE.md generator if you want a starting point.
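As an illustration, a minimal CLAUDE.md might look like this. The stack, commands, and paths here are invented for the example; yours should state your project's actual conventions.

```markdown
# CLAUDE.md

## Stack
- Next.js 14 (App Router), TypeScript strict mode, Tailwind

## Conventions
- Named exports only; no default exports
- Server components by default; add "use client" only when needed

## Commands
- `npm run dev` - local dev server
- `npm test` - run the test suite before committing

## Constraints
- Do not add new dependencies without asking
- All database access goes through src/lib/db.ts
```

Note the register: short, opinionated rules the agent can obey, not prose documentation.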
Claude Code supports persistent memory across sessions. Corrections I make, preferences I state, patterns I approve: these get captured and replayed at the start of future sessions.
This means I correct the agent once on a naming convention, and it remembers forever. I do not re-explain my preferences. The system learns continuously from how I work.
Repeated workflows become skills: markdown files that encode a multi-step process. I have skills for writing blog posts, running QA audits, deploying to production, processing emails, and dozens of other tasks.
A skill is just a system prompt with instructions. But because it is stored in a file and version-controlled, it compounds. Every improvement to a skill applies to every future invocation. Over months, skills get sharp. They encode exactly how I want things done, with exactly the right constraints.
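As a sketch, a skill file for publishing a blog post might look like this. The steps and paths are invented for illustration:

```markdown
# Skill: publish-blog-post

1. Read the draft in content/drafts/ and check it against the style guide.
2. Generate a meta description under 160 characters.
3. Move the file to content/posts/ with a dated filename.
4. Run `npm run build` and confirm the post renders.
5. Commit with the message "post: <title>".
```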
The Model Context Protocol connects Claude Code to external services. I use MCP servers for browser automation, web search, Linear project management, and more. Each server gives the agent structured access to a specific tool or API.
The MCP config generator helps you set these up. The key is selective access. Do not give every agent access to every server. A research agent needs web search. A coding agent needs file system access. A deployment agent needs cloud provider APIs. Scope them correctly.
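For a rough idea of shape, a project-scoped MCP config is a small JSON file mapping server names to the commands that launch them. The server name and package below are placeholders, not real entries; check your tool's documentation for the exact schema it expects.

```json
{
  "mcpServers": {
    "web-search": {
      "command": "npx",
      "args": ["-y", "example-web-search-mcp"]
    }
  }
}
```

Scoping access is then a matter of which servers appear in which project's config.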

Code is only half of what I ship. The other half is content: videos, blog posts, social threads, open-source repos. The AI workflow applies here too.
Research. I use Firecrawl and web search agents to gather information on a topic. They scrape documentation, pull recent news, and summarize findings into structured notes in Obsidian. A research task that used to take two hours finishes in 20 minutes.
Script writing. Video scripts live in Obsidian as markdown. I use Wispr Flow for voice dictation when I want to think out loud, then let Claude clean up the transcript into a structured script. The faceless format means every script is written for voiceover. No face cam, no talking head. Just clear explanations over screen recordings and animations.
Recording. Screen Studio captures everything. It handles zoom, cursor effects, and export settings in one tool. I record the screen while narrating the script.
Editing. Descript turns the recording into a polished video. It transcribes automatically, so I edit by editing text. Remove a sentence from the transcript, the video cuts match. It is the fastest editing workflow I have found.
Distribution. Every published video turns into multiple pieces: a blog post on this site, social posts for X, a newsletter mention, and sometimes a GitHub repo. One piece of work, many distribution channels. The content pipeline is partially automated: agent teams handle the distribution while I move on to the next project.
After a year of building this way, these are the principles that stuck.
Do not micromanage. State the goal, provide context, and let the agent work. Intervene only when it is stuck or heading in a clearly wrong direction. The agent's first attempt is usually 80% correct, and fixing the remaining 20% is faster than writing 100% yourself.
Context is everything. A well-written CLAUDE.md file prevents entire categories of mistakes. It is not documentation. It is instructions for your coding partner. Make it specific, opinionated, and complete.
Small commits. Frequently. Each one should represent a coherent unit of work. This makes rollbacks trivial, makes the git log useful, and gives you clean save points to return to if the agent goes off track.
Decompose tasks into independent pieces. Run them simultaneously. Review the results. Merge. This is the single biggest time multiplier in the workflow. Sequential work is the enemy of throughput.
If you do something more than twice, encode it. Write a skill file. Version control it. Let it improve over time. The compound effect of dozens of well-tuned skills is enormous. Each one saves minutes. Together they save hours every week.
Perfection is the enemy of shipping. Get the feature to "good enough," deploy it, and iterate based on real usage. AI tools make iteration so cheap that waiting for perfection is wasteful. Ship, observe, improve.
The honest assessment: I ship 3-5x more code than I did before adopting this workflow. That is not a precise measurement. It is a gut sense based on the volume of features, blog posts, and projects that leave my machine compared to two years ago.
The bottleneck shifted. It used to be writing code. Now it is reviewing and directing. The limiting factor is not how fast I can type or how well I know an API. It is how clearly I can describe what I want and how quickly I can evaluate what I get.
This is a fundamental change in the developer role. You spend less time inside the code and more time above it. Architecture, product decisions, quality standards, user experience. The agent handles implementation. You handle intent.
The tools are still improving. Models get smarter every quarter. Agent harnesses get more capable. MCP servers connect to more services. The workflow I described here will look primitive in a year. But the principles will hold: let the agent work, manage context deliberately, run tasks in parallel, ship frequently.
If you are just starting with AI coding tools, pick one. Claude Code if you live in the terminal. Cursor if you prefer a visual IDE. Write a CLAUDE.md file. Let the agent build something small. Review the output. Iterate. The muscle memory builds fast.
The tools are ready. The question is whether your workflow is.