Every tool, framework, and piece of hardware in my development stack. Updated for 2026.
The tools I open every single day.
Terminal coding agent from Anthropic. Reads my codebase, edits files, spawns sub-agents, and commits autonomously. Max plan ($200/mo).
AI-native code editor forked from VS Code. Composer mode rewrites multiple files at once. Pro plan ($20/mo).
Local-first markdown knowledge base. My entire content pipeline lives here - research, scripts, video planning, daily journal.
AI voice dictation for macOS. Works in any app. Context-aware formatting. Faster than typing for scripts and prose.
What this site and every DevDigest project are built with.
React framework with App Router, server components, and Turbopack. This site runs on Next.js 16.
Deployment platform. Git push to deploy, edge functions, image optimization. This site runs here.
Reactive database and backend. Real-time sync, server functions, cron jobs, file storage. All TypeScript.
Drop-in authentication for React and Next.js. Pre-built sign-in UI, session management, OAuth.
Utility-first CSS framework. Every page on this site is styled with Tailwind and a custom Gumroad-inspired design system.
Everything is TypeScript. Frontend, backend, scripts, CLIs. No exceptions.
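The payoff of one language across the stack is that a single type definition travels from server to client unchanged. A minimal sketch (the `Video` type and helper names are illustrative, not from the actual codebase):

```typescript
// One shared type definition: the compiler catches drift on both
// the producing ("server") and consuming ("client") side.
type Video = {
  id: string;
  title: string;
  durationSec: number;
};

// "Server side": serialize a typed payload.
function toPayload(v: Video): string {
  return JSON.stringify(v);
}

// "Client side": parse it back against the same type.
function fromPayload(json: string): Video {
  return JSON.parse(json) as Video;
}

const v: Video = { id: 'a1', title: 'Intro', durationSec: 90 };
console.log(fromPayload(toPayload(v)).title); // → Intro
```

Rename a field in `Video` and every call site on both sides fails to compile, instead of failing at runtime across a language boundary.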
Models and SDKs I use to build AI-powered applications.
The TypeScript toolkit for building AI apps. Unified API across OpenAI, Anthropic, Google. Streaming, tool calling, structured output.
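The unified-API idea in one sketch, assuming the `ai` and `@ai-sdk/anthropic` packages are installed and a provider API key is set in the environment; the model id is illustrative:

```typescript
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// Swapping providers means swapping one import and one model id;
// the generateText call itself stays the same across OpenAI,
// Anthropic, and Google providers.
const { text } = await generateText({
  model: anthropic('claude-sonnet-4-5'), // illustrative model id
  prompt: 'Write a one-line summary of React server components.',
});
console.log(text);
```

`streamText` and `generateObject` follow the same shape for streaming and structured output.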
Primary AI model. Opus 4.6 for hard problems, Sonnet 4.6 for speed, Haiku 4.5 for cost. 200K context window.
Unified API for 200+ models. One API key, one billing dashboard. Automatic fallbacks and load balancing.
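The one-key, many-models pattern can be sketched with a plain `fetch` against an OpenAI-compatible endpoint. The URL, env var name, and model ids below are placeholders, not the actual service's values:

```typescript
// Sketch of calling a model router that fronts many providers behind
// one OpenAI-compatible endpoint. URL, ROUTER_API_KEY, and model ids
// are all placeholders.
async function chat(model: string, prompt: string): Promise<string> {
  const res = await fetch('https://router.example.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.ROUTER_API_KEY ?? ''}`,
    },
    body: JSON.stringify({
      // Different vendors' models, same key, same request shape.
      model,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return data.choices[0].message.content;
}
```

Because the request shape is identical for every model, falling back to a second model is just a retry with a different `model` string.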
How every DevDigest video gets made.
Screen recording for macOS. Automatic zoom, cursor effects, and beautiful export. Every tutorial uses this.
Video editing powered by transcripts. Edit video by editing text. Removes filler words automatically.
AI image generation API. Fast inference, multiple models. Used for thumbnails and visual assets.
The machines running everything.
Primary development machine. Fast enough to run local models while building and recording simultaneously.
Local AI inference server on NVIDIA hardware. Runs Ollama with qwen3.5 (122B, 35B, 27B, 9B) and lfm2. Unlimited local inference.
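Ollama exposes an HTTP API on the box, so local models are callable from the same TypeScript code as hosted ones. A minimal sketch (11434 is Ollama's default port; the model tag is an example, use whatever `ollama list` shows):

```typescript
// Sketch: non-streaming text generation against a local Ollama server.
// Assumes Ollama is running on its default port, 11434.
async function askLocal(model: string, prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Example call (requires the model to be pulled locally first):
// await askLocal('qwen3.5', 'Explain tail-call optimization in one line.');
```

With `stream: true` the same endpoint returns newline-delimited JSON chunks instead of one response object.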
Last updated March 2026. See the full AI Tools Directory for detailed reviews of every tool.
New tutorials, open-source projects, and deep dives on coding agents - delivered weekly.