
TL;DR
The TypeScript patterns that show up in every AI project. Streaming responses, type-safe tool definitions, structured output, retry logic, and more.
These are the patterns I reach for on every project. Nothing theoretical: each one shows up in real TypeScript codebases that ship AI features.
Pattern 1: Stream everything
Every AI response should stream. Users see output immediately instead of waiting for the full response.
async function* streamCompletion(prompt: string) {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true buffers partial multi-byte characters across chunks
    yield decoder.decode(value, { stream: true });
  }
}

// Usage
for await (const chunk of streamCompletion("Explain TypeScript generics")) {
  process.stdout.write(chunk);
}
The Vercel AI SDK wraps this into streamText(), which handles the protocol automatically.
Pattern 2: Type-safe tools with Zod
AI tools need runtime validation. Zod gives you TypeScript types and validation from a single schema.
import { z } from "zod";
import { tool } from "ai";

const weatherTool = tool({
  description: "Get current weather for a location",
  parameters: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
  }),
  execute: async ({ city, units }) => {
    const data = await fetchWeather(city, units);
    return { temperature: data.temp, condition: data.condition };
  },
});
The parameters schema validates input AND generates the JSON Schema that the model sees. One source of truth.
Pattern 3: Structured output
When you need the model to return a specific shape, not free text.
import { generateObject } from "ai";
import { z } from "zod";

const ProductReview = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  score: z.number().min(0).max(10),
  keyPoints: z.array(z.string()).max(5),
  recommendation: z.boolean(),
});
type ProductReview = z.infer<typeof ProductReview>;

const { object } = await generateObject({
  model: anthropic("claude-sonnet-4-6"),
  schema: ProductReview,
  prompt: `Analyze this review: "${reviewText}"`,
});

// object is fully typed as ProductReview
console.log(object.sentiment, object.score);
Pattern 4: Retry with exponential backoff
Every AI API call fails sometimes. Rate limits, timeouts, server errors. Wrap calls in retry logic.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      const isRetryable =
        error instanceof Error &&
        (error.message.includes("429") ||
          error.message.includes("503") ||
          error.message.includes("timeout"));
      if (!isRetryable) throw error;
      const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error("Unreachable");
}

// Usage
const result = await withRetry(() =>
  generateText({ model: anthropic("claude-sonnet-4-6"), prompt })
);
Pattern 5: Discriminated unions for agent actions
When agents can take multiple action types, discriminated unions make the type system enforce correctness.
type AgentAction =
  | { type: "search"; query: string }
  | { type: "write_file"; path: string; content: string }
  | { type: "run_command"; command: string; cwd?: string }
  | { type: "ask_user"; question: string }
  | { type: "done"; result: string };

function executeAction(action: AgentAction): Promise<string> {
  switch (action.type) {
    case "search":
      return searchWeb(action.query);
    case "write_file":
      return writeFile(action.path, action.content);
    case "run_command":
      return exec(action.command, { cwd: action.cwd });
    case "ask_user":
      return prompt(action.question);
    case "done":
      return Promise.resolve(action.result);
  }
}
TypeScript guarantees you handle every action type. Adding a new type without handling it is a compile error.
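You can make that guarantee explicit with an assertNever helper in the default branch. A minimal sketch, with the union trimmed to three variants and a hypothetical describeAction standing in for the real async executors:

```typescript
// Trimmed union for the sketch
type AgentAction =
  | { type: "search"; query: string }
  | { type: "write_file"; path: string; content: string }
  | { type: "done"; result: string };

// Only reachable if a case is missing above; then `action` is not `never`
// and the call fails to compile
function assertNever(x: never): never {
  throw new Error(`Unhandled action: ${JSON.stringify(x)}`);
}

function describeAction(action: AgentAction): string {
  switch (action.type) {
    case "search":
      return `searching for ${action.query}`;
    case "write_file":
      return `writing ${action.path}`;
    case "done":
      return `done: ${action.result}`;
    default:
      return assertNever(action);
  }
}

const msg = describeAction({ type: "search", query: "zod docs" });
```

Without the default branch you get the same compile error from the declared return type; assertNever just makes the intent obvious to the next reader.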
Pattern 6: Conversation builder
Type-safe conversation history that works across providers.
interface Message<Role extends string = string> {
  role: Role;
  content: string;
  metadata?: Record<string, unknown>;
}

type ChatMessage = Message<"user" | "assistant" | "system">;

class Conversation {
  private messages: ChatMessage[] = [];

  system(content: string): this {
    this.messages.push({ role: "system", content });
    return this;
  }

  user(content: string): this {
    this.messages.push({ role: "user", content });
    return this;
  }

  assistant(content: string): this {
    this.messages.push({ role: "assistant", content });
    return this;
  }

  toArray(): ChatMessage[] {
    return [...this.messages];
  }

  get lastAssistant(): string | undefined {
    return this.messages.findLast((m) => m.role === "assistant")?.content;
  }
}
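Usage looks like this. The class is restated in compact form so the sketch runs standalone, with a private push helper to deduplicate the role methods and findLast replaced by a reverse loop for older lib targets:

```typescript
interface Message<Role extends string = string> {
  role: Role;
  content: string;
}
type ChatMessage = Message<"user" | "assistant" | "system">;

class Conversation {
  private messages: ChatMessage[] = [];

  private push(role: ChatMessage["role"], content: string): this {
    this.messages.push({ role, content });
    return this;
  }

  system(content: string) { return this.push("system", content); }
  user(content: string) { return this.push("user", content); }
  assistant(content: string) { return this.push("assistant", content); }

  toArray(): ChatMessage[] { return [...this.messages]; }

  // equivalent to messages.findLast((m) => m.role === "assistant")
  get lastAssistant(): string | undefined {
    for (let i = this.messages.length - 1; i >= 0; i--) {
      if (this.messages[i].role === "assistant") return this.messages[i].content;
    }
    return undefined;
  }
}

// Fluent construction reads like the transcript it represents
const convo = new Conversation()
  .system("You are a concise assistant.")
  .user("What is a discriminated union?")
  .assistant("A union whose members share a literal tag field.");

const history = convo.toArray();
```

Because every builder method returns this, the chain stays typed, and toArray() hands back a copy so callers cannot mutate the internal history.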
Pattern 7: Provider abstraction
Switch between AI providers without changing application code.
interface AIProvider {
  generate(prompt: string, options?: GenerateOptions): Promise<string>;
  stream(prompt: string, options?: GenerateOptions): AsyncIterable<string>;
}

interface GenerateOptions {
  maxTokens?: number;
  temperature?: number;
  systemPrompt?: string;
}

function createProvider(name: "anthropic" | "openai"): AIProvider {
  const providers: Record<string, AIProvider> = {
    anthropic: {
      generate: async (prompt, opts) => {
        const { text } = await generateText({
          model: anthropic("claude-sonnet-4-6"),
          prompt,
          maxTokens: opts?.maxTokens,
          temperature: opts?.temperature,
          system: opts?.systemPrompt,
        });
        return text;
      },
      // streamProvider is a shared streaming helper, omitted for brevity
      stream: (prompt, opts) => streamProvider("anthropic", prompt, opts),
    },
    openai: {
      generate: async (prompt, opts) => {
        const { text } = await generateText({
          model: openai("gpt-5"),
          prompt,
          maxTokens: opts?.maxTokens,
          temperature: opts?.temperature,
          system: opts?.systemPrompt,
        });
        return text;
      },
      stream: (prompt, opts) => streamProvider("openai", prompt, opts),
    },
  };
  return providers[name];
}
Pattern 8: Token budgets
Track and limit token usage per request, per user, or per session.
interface TokenBudget {
  maxInput: number;
  maxOutput: number;
  used: { input: number; output: number };
}

function createBudget(maxInput = 100_000, maxOutput = 4_096): TokenBudget {
  return { maxInput, maxOutput, used: { input: 0, output: 0 } };
}

function checkBudget(budget: TokenBudget, inputTokens: number): boolean {
  return budget.used.input + inputTokens <= budget.maxInput;
}

function recordUsage(
  budget: TokenBudget,
  input: number,
  output: number
): TokenBudget {
  return {
    ...budget,
    used: {
      input: budget.used.input + input,
      output: budget.used.output + output,
    },
  };
}

// Usage in an agent loop
let budget = createBudget();
while (checkBudget(budget, estimatedTokens)) {
  const result = await generateText({ model, prompt });
  budget = recordUsage(
    budget,
    result.usage.promptTokens,
    result.usage.completionTokens
  );
}
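The budget functions are pure, so the arithmetic is easy to verify in isolation. A quick standalone check (functions restated from above, with hypothetical per-call costs):

```typescript
interface TokenBudget {
  maxInput: number;
  maxOutput: number;
  used: { input: number; output: number };
}

function createBudget(maxInput = 100_000, maxOutput = 4_096): TokenBudget {
  return { maxInput, maxOutput, used: { input: 0, output: 0 } };
}

function checkBudget(budget: TokenBudget, inputTokens: number): boolean {
  return budget.used.input + inputTokens <= budget.maxInput;
}

// Returns a new budget rather than mutating, so snapshots stay valid
function recordUsage(budget: TokenBudget, input: number, output: number): TokenBudget {
  return {
    ...budget,
    used: {
      input: budget.used.input + input,
      output: budget.used.output + output,
    },
  };
}

// Simulate three calls of ~40k input tokens against a 100k budget
let budget = createBudget(100_000);
const completed: number[] = [];
for (const cost of [40_000, 40_000, 40_000]) {
  if (!checkBudget(budget, cost)) break; // third call would hit 120k, over budget
  budget = recordUsage(budget, cost, 500);
  completed.push(cost);
}
```

Only the first two calls fit; the loop stops before the third because checkBudget rejects it, leaving 80k input tokens recorded.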
Pattern 9: Typed environment variables
Never use untyped process.env directly. Parse and validate at startup.
import { z } from "zod";

const envSchema = z.object({
  ANTHROPIC_API_KEY: z.string().min(1),
  OPENAI_API_KEY: z.string().min(1),
  DATABASE_URL: z.string().url(),
  NODE_ENV: z.enum(["development", "production", "test"]).default("development"),
  MAX_TOKENS: z.coerce.number().default(4096),
  // careful: z.coerce.boolean() runs Boolean(), so the string "false" coerces
  // to true; use a transform if you need strict "true"/"false" parsing
  ENABLE_STREAMING: z.coerce.boolean().default(true),
});

export const env = envSchema.parse(process.env);

// Now fully typed
console.log(env.ANTHROPIC_API_KEY); // string
console.log(env.MAX_TOKENS); // number
console.log(env.ENABLE_STREAMING); // boolean
Parse once at the top of your app. If any variable is missing or malformed, it crashes immediately with a clear error instead of failing silently at runtime.
Pattern 10: Result types
Replace try/catch with a Result type for composable error handling.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

function ok<T>(value: T): Result<T, never> {
  return { ok: true, value };
}

function err<E>(error: E): Result<never, E> {
  return { ok: false, error };
}

async function safeGenerate(prompt: string): Promise<Result<string>> {
  try {
    const { text } = await generateText({
      model: anthropic("claude-sonnet-4-6"),
      prompt,
    });
    return ok(text);
  } catch (e) {
    return err(e instanceof Error ? e : new Error(String(e)));
  }
}

// Usage - no try/catch needed
const result = await safeGenerate("Explain monads");
if (result.ok) {
  console.log(result.value);
} else {
  console.error("Failed:", result.error.message);
}
FAQ

Which patterns have the biggest impact? Streaming (pattern 1) and structured output (pattern 3). Streaming is table stakes for user experience. Structured output eliminates parsing errors and gives you type safety on model responses.

Why Zod? TypeScript types disappear at runtime, but AI tools need runtime validation. Zod schemas generate both the TypeScript type (via z.infer) and the JSON Schema that models consume. One schema, two outputs.

How do I handle rate limits? Use the retry with exponential backoff pattern (pattern 4). Check for 429 status codes, add jitter to prevent a thundering herd, and set a max retry count. The Vercel AI SDK also has built-in retry support.

How do I get typed model responses? Use generateObject() with a Zod schema (pattern 3). The response is fully typed at compile time and validated at runtime. For streaming, use streamObject(), which gives you partial typed results as they arrive.

How do I switch between providers? Use the provider abstraction pattern (pattern 7), or the Vercel AI SDK, which handles this natively. Define a common interface and swap the model string. The AI SDK supports Anthropic, OpenAI, Google, and 20+ other providers with the same API.