
TL;DR
One dev, one CLI, 24 subdomains, and a lot of parallel agents. The playbook for shipping an AI app portfolio.
There are currently 24 apps on the Developers Digest network. Fitness tracker, cron scheduler, video clipper, CLI directory, MCP directory, skills marketplace, AI model comparison, overnight agents, agent hub, content calendar, voice tools, and a dozen more. Every one lives on its own subdomain under developersdigest.tech.
One developer. No team. Most are running in production. Some are fully shipped. A lot of them are half working. I want to be honest about that because the interesting thing is not that they all work. The interesting thing is that 24 of them exist at all, and that a single dev can keep pushing them forward in parallel without the whole thing collapsing.
This post is the meta-story. The stack, the pattern, the agent loop, what broke, and the tactical lessons I would give to anyone trying to run a similar portfolio.
Every app uses the same spine so I do not have to think about infrastructure per project:
- Next.js 16 for the app itself
- Convex for the database
- Clerk for auth
- Coolify on a single Hetzner box for deploys
- Cloudflare for DNS
That is it. No Vercel. No AWS. No Kubernetes. No per-app decisions about hosting, auth, or database. The stack is the same every time, so bootstrapping a new app is closer to copy-paste than to architecture.
The reason this matters: when the stack is identical, the agents do not need to relearn anything. Whatever works for one app works for the next.
The hub is the /apps page on developersdigest.tech. It is driven by a single file, app/apps/apps-data.ts, which is the source of truth for every product in the network. Each entry looks like this:
{
  slug: "fit",
  name: "Fit",
  host: "fit.developersdigest.tech",
  url: "https://fit.developersdigest.tech",
  description: "Log workouts in plain English...",
  category: "SaaS Products",
  badge: "Popular",
  searchKeywords: ["fitness", "habits", "tracking"],
}
One row per app. The registry feeds the /apps directory, the JSON-LD metadata, the search index, and the hero terminal. Adding a new product is one commit to apps-data.ts plus a Cloudflare DNS record. That is the whole onboarding.
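A sketch of how one registry row can drive every surface. The `AppEntry` type and `searchIndex` derivation here are illustrative, not the actual apps-data.ts code; only the row shape comes from the post:

```typescript
// Hypothetical shape of one registry row (field names mirror the post).
type AppEntry = {
  slug: string;
  name: string;
  host: string;
  url: string;
  description: string;
  category: string;
  badge?: string;
  searchKeywords: string[];
};

const apps: AppEntry[] = [
  {
    slug: "fit",
    name: "Fit",
    host: "fit.developersdigest.tech",
    url: "https://fit.developersdigest.tech",
    description: "Log workouts in plain English...",
    category: "SaaS Products",
    badge: "Popular",
    searchKeywords: ["fitness", "habits", "tracking"],
  },
];

// Every surface derives from the same array: the /apps grid, JSON-LD,
// and search each map over `apps`, so onboarding stays one commit.
const searchIndex = new Map<string, string>(
  apps.flatMap((app) =>
    [app.name.toLowerCase(), ...app.searchKeywords].map(
      (kw): [string, string] => [kw, app.slug]
    )
  )
);
```

The point of the single array is that there is nothing to keep in sync: rename an app and every derived surface picks it up on the next build.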
Each sub-brand gets its own subdomain because the alternative (everything under one domain) would turn every design and auth decision into a coordination problem. Subdomains give each app its own identity, its own design freedom, its own deploy cadence, and crucially its own blast radius when something breaks.
The loop that keeps 24 apps moving forward without me turning into a tech lead in my own life:
Audit: an agent sweeps each repo and writes a status file (APPS-TIGHTEN-STATUS.md) that I can skim in 60 seconds. Fan out: one agent per task. Kill the fake key in dd-cron. Consolidate magic numbers in dd-fitness. Delete the dead /app directory in dd-canvas. Each agent has one target, one set of files, zero coordination overhead.

A real slice from this week's status file:
iter 1 (cron 73814f48, first run)
- dd-clipper 8fe1af6: mockClips/Transcript/runMockPipeline deleted, empty state honest
- dd-fitness 8452cd7: DEFAULT_TARGETS consolidated across 5 components, tests green
Deploy queue (staggered):
1. contentcal (credit check fix)
2. dd-academy (real streak)
3. dd-cron (fake key gone)
4. dd-content-engine (fake key gone)
5. dd-fitness (rebuild)
6. dd-canvas (rebuild)
That is the whole workflow. Audit, fan out, commit, deploy one at a time. The cron runs the loop on its own. I check in, read the status file, approve or redirect.
The honest tier list runs from apps that are fully shipped, through the half-working middle, down to shells like dd-canvas that are still queued for a rebuild.

Why put this in a public post? Because pretending the portfolio is 24-for-24 production apps would be a lie, and the interesting thing is the mechanism, not the polish. Half-built is the natural state of a portfolio that is growing faster than any single dev can finish individual products. The loop is designed to close those gaps over time, not to avoid ever having them.
The credibility move is saying "here is what is real, here is what is not, here is the queue." That beats a glossy launch page that falls over on first click.
Eight tactical takeaways from running this loop for the past few months:
Shipping an app with mockClips, mockTranscript, and runMockPipeline feels fast. It is not. It is a landmine. Every mock is a lie you will have to explain to a user who clicked something and got nothing back. Killing mocks early forces you to either wire the real thing or ship an honest empty state. Both are better than fake data.
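The honest-empty-state alternative is small. A minimal sketch, assuming a hypothetical `clipsForRender` helper (not the actual dd-clipper code): derive render state with no mock fallback, so a failed or empty fetch shows as empty rather than as fake data.

```typescript
// Hypothetical: what replaces a mockClips fallback after the mocks die.
type Clip = { id: string; title: string };

function clipsForRender(
  fetched: Clip[] | null
): { clips: Clip[]; empty: boolean } {
  const clips = fetched ?? []; // no mock fallback: null means nothing to show
  return { clips, empty: clips.length === 0 };
}
```

The UI then branches on `empty` and says "no clips yet" instead of rendering data that never existed.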
A single Hetzner box running Coolify cannot build 10 Next.js 16 apps at the same time without tipping over. I learned this by watching my build queue return 500s while docker builder prune -f crawled. The fix is operational, not architectural. One deploy per iteration. Verify. Move on.
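The operational fix is just serialization. A sketch of the staggered queue, with `drainDeployQueue` and its callback being hypothetical names, not the real deploy code:

```typescript
// Hypothetical staggered queue: builds are awaited one at a time, so a
// single Coolify box never runs parallel Next.js builds.
async function drainDeployQueue(
  queue: string[],
  deploy: (app: string) => Promise<boolean> // resolves true once verified
): Promise<string[]> {
  const shipped: string[] = [];
  for (const app of queue) {
    const ok = await deploy(app); // serial await: one deploy per iteration
    if (!ok) break; // a failed build halts the queue for triage
    shipped.push(app);
  }
  return shipped;
}
```

The `break` is the "verify, move on" part: a sick build stops the queue instead of stacking more load on a box that is already struggling.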
I tried a single "audit the portfolio" agent. It produced beautiful generic slop. Switching to one agent per app, each reading only that app's repo, produced actionable status reports that fit on a page. Narrow scope, narrow context, narrow output.
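The fan-out itself is mechanical. A sketch under the assumption that each agent just gets one scoped prompt (the `auditPrompts` helper is illustrative, not the real orchestration code):

```typescript
// Hypothetical fan-out: one narrowly scoped audit prompt per repo,
// instead of one portfolio-wide prompt that yields generic output.
function auditPrompts(repos: string[]): { repo: string; prompt: string }[] {
  return repos.map((repo) => ({
    repo,
    prompt:
      `Read only the ${repo} repo. List mock data, dead code, and magic ` +
      `numbers. Write a one-page status report for that app alone.`,
  }));
}
```

Each prompt names exactly one repo, so no agent can drift into summarizing the portfolio.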
Every app in the network is one row in apps-data.ts. That single file drives the /apps page, search, metadata, and terminal navigation. When a new app ships, it is one commit. When an app gets renamed, it is one commit. There are no scattered references to update.
Next.js 16 plus Convex plus Clerk plus Coolify is the same every time. The agents do not waste context figuring out which auth system this app uses or how this one deploys. The marginal cost of a new app is the feature work, not the infrastructure tax.
Kimi handles the high-volume grunt work. Killing mocks, renaming files, fixing lint, writing boilerplate. Claude Code gets the tasks that require judgment. Refactors that cross files. Decisions about architecture. Anywhere the wrong call costs a day of rework.
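The split can be expressed as a one-line router. This is a hypothetical sketch of the policy described above, not code from the actual loop:

```typescript
// Hypothetical router: grunt work goes to the high-volume model,
// judgment calls go to Claude Code.
type TaskKind =
  | "kill-mocks"
  | "rename"
  | "lint"
  | "boilerplate"
  | "refactor"
  | "architecture";

function routeModel(kind: TaskKind): "kimi" | "claude-code" {
  // Wrong calls on refactors and architecture cost a day of rework,
  // so those get the stronger model.
  return kind === "refactor" || kind === "architecture"
    ? "claude-code"
    : "kimi";
}
```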
The temptation is to hide the half-working apps until they are done. The problem is they are never done. There is always another feature, always another edge case. Publishing the registry and the status file publicly forces the work to move forward because the gap is visible. "DD Build has no repo" is a lot harder to ignore when the row is live on the /apps page.
The entire loop is driven by the dd CLI. One command to scaffold a new app (dd new), one to audit (dd audit), one to deploy (dd deploy). Each command is a thin wrapper over the same agent and infrastructure stack, but it turns the workflow into muscle memory.
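A minimal sketch of what a dispatcher like that can look like. The post only names the three commands; everything inside them here is a placeholder, not the real dd internals:

```typescript
// Hypothetical dd dispatcher: each subcommand is a thin wrapper over the
// same stack, so the whole loop is three verbs.
const commands: Record<string, (args: string[]) => string> = {
  new: ([slug]) => `scaffold ${slug} from the shared stack template`,
  audit: () => "fan out one audit agent per repo, write the status file",
  deploy: ([app]) => `queue one build for ${app}, verify, move on`,
};

function dd(argv: string[]): string {
  const [cmd = "", ...args] = argv;
  const handler = commands[cmd];
  return handler ? handler(args) : `unknown command: ${cmd}`;
}
```

The value is not the code, it is the constraint: if a workflow step cannot be expressed as one of these verbs, it does not belong in the loop.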
If you want to see the apps, the registry is live at /apps. If you want to see the CLI that glues the network together, it is at cli.developersdigest.tech. And if you want the longer writeup of how the main site was built, the case study has the receipts.
One developer running 24 apps works because the stack is identical, the registry is one file, the loop is automated, and the honesty is public. The agents do the grunt work. The CLI does the orchestration. The status file keeps me in the loop without making me the bottleneck.
It is not that any of these apps are individually revolutionary. They are not. The interesting thing is that 24 of them exist on the same spine, maintained by one person, with a system that lets them keep improving in parallel without any single app starving the others.
That is the playbook. The portfolio is the product.