
TL;DR
Two popular frameworks for building AI apps in TypeScript. Here is when to use each and why most Next.js developers should start with the AI SDK.
You want to build an AI-powered app in TypeScript. You search for frameworks and land on two names: LangChain and the Vercel AI SDK.
Both are production-ready. Both support multiple LLM providers. Both have TypeScript-first APIs. But they solve different problems, and picking the wrong one costs you time.
Source check: keep the official Vercel AI SDK docs, Vercel AI SDK GitHub repo, LangChain.js docs, and LangChain GitHub repo open while evaluating. For the broader agent-framework decision, read the AI agent frameworks guide and the OpenAI Agents SDK TypeScript guide.
Here is an honest breakdown.
Vercel AI SDK is minimal by design. It gives you streaming, tool calling, and structured output with almost no abstraction layer. You write normal TypeScript. The SDK handles the transport and provider differences so you do not have to.
LangChain is an orchestration framework. It provides chains, agents, memory, retrievers, document loaders, and dozens of integrations out of the box. It is opinionated about how you compose AI workflows, and it gives you building blocks for complex pipelines.
The core tension: the AI SDK trusts you to build your own patterns. LangChain gives you pre-built patterns and asks you to learn its abstractions.
Here is the same basic task in both frameworks: stream a chat completion to the browser.
Vercel AI SDK:
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });

  return result.toDataStreamResponse();
}
Five lines of real logic. The useChat hook on the client handles the rest. No configuration objects, no chain definitions, no execution context.
LangChain:
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { HttpResponseOutputParser } from "langchain/output_parsers";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const model = new ChatOpenAI({
    modelName: "gpt-4o",
    streaming: true,
  });

  const parser = new HttpResponseOutputParser();

  const stream = await model
    .pipe(parser)
    .stream(messages.map((m: any) => new HumanMessage(m.content)));

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
More imports, more setup, and you are managing the stream format yourself. LangChain's strength is not simple chat. It is what comes after.
Tool calling is where both frameworks shine, but differently.
Vercel AI SDK:
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    getWeather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return { temp: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What is the weather in San Francisco?",
});
Tools are defined inline with Zod schemas. The SDK handles the function calling protocol, parses the response, executes your function, and feeds the result back to the model. Clean and predictable.
LangChain:
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const weatherTool = new DynamicStructuredTool({
  name: "getWeather",
  description: "Get the weather for a location",
  schema: z.object({
    city: z.string(),
  }),
  func: async ({ city }) => {
    return JSON.stringify({ temp: 72, condition: "sunny" });
  },
});

const model = new ChatOpenAI({ modelName: "gpt-4o" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

// createOpenAIFunctionsAgent returns a promise, so it must be awaited
const agent = await createOpenAIFunctionsAgent({ llm: model, tools: [weatherTool], prompt });
const executor = new AgentExecutor({ agent, tools: [weatherTool] });

const result = await executor.invoke({
  input: "What is the weather in San Francisco?",
});
More ceremony. But the AgentExecutor gives you something the AI SDK does not out of the box: a loop. The agent can call multiple tools, reason about intermediate results, and decide when it is done. The AI SDK can do this too with maxSteps, but LangChain's agent abstraction is more structured.
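Under the hood, both abstractions run the same loop: call the model, execute any requested tool, feed the result back, repeat until the model answers or a step budget runs out. A dependency-free sketch of that loop, with a stubbed model and a hypothetical tool registry so the control flow is visible (neither the stub nor the registry is part of either SDK):

```typescript
type ToolCall = { tool: string; args: Record<string, unknown> };
type ModelReply = { toolCall?: ToolCall; text?: string };

// Stub model: requests the weather tool once, then answers.
// A real implementation would call an LLM here.
function stubModel(history: string[]): ModelReply {
  if (!history.some((h) => h.startsWith("tool:"))) {
    return { toolCall: { tool: "getWeather", args: { city: "San Francisco" } } };
  }
  return { text: "It is 72 and sunny in San Francisco." };
}

// Hypothetical tool registry, keyed by tool name.
const tools: Record<string, (args: any) => string> = {
  getWeather: ({ city }) => JSON.stringify({ city, temp: 72, condition: "sunny" }),
};

// The agent loop. The step budget plays the role of the AI SDK's
// maxSteps option; AgentExecutor runs an equivalent loop internally.
function runAgent(prompt: string, maxSteps = 5): string {
  const history = [`user:${prompt}`];
  for (let step = 0; step < maxSteps; step++) {
    const reply = stubModel(history);
    if (reply.text) return reply.text;
    if (reply.toolCall) {
      const output = tools[reply.toolCall.tool](reply.toolCall.args);
      history.push(`tool:${output}`);
    }
  }
  return "step budget exhausted";
}

console.log(runAgent("What is the weather in San Francisco?"));
```

The frameworks differ in how much of this loop they expose, not in what the loop does.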
RAG pipelines. LangChain has document loaders for PDFs, CSVs, web pages, Notion, and dozens of other sources. It has text splitters, embedding integrations, and vector store connectors. Building a retrieval-augmented generation pipeline in LangChain takes a fraction of the custom code you would write with the AI SDK.
Complex agent workflows. LangGraph (LangChain's agent framework) lets you define stateful, multi-step agent graphs with branching, cycles, and human-in-the-loop checkpoints. If you are building an agent that needs to plan, execute, reflect, and retry, LangGraph has the primitives.
Ecosystem breadth. LangChain integrates with nearly every vector database, document store, and LLM provider. If you need Pinecone + Cohere + a custom retriever + a multi-step chain, LangChain has pre-built components for all of it.
Next.js integration. The AI SDK was designed for React Server Components and the App Router. useChat, useCompletion, and useObject are React hooks that handle streaming UI out of the box. No glue code needed.
Simplicity. The learning curve is almost flat. If you know TypeScript and React, you can ship an AI feature in an afternoon. There is no framework to learn, just functions you call.
Streaming-first architecture. Every function in the AI SDK is built around streaming. streamText, streamObject, streamUI. This is not bolted on. It is the default. For user-facing applications where perceived latency matters, this is a significant advantage.
Provider switching. Swap openai("gpt-4o") for anthropic("claude-sonnet-4-20250514") or google("gemini-2.0-flash"). Same API, same types, same streaming behavior. The provider abstraction is clean and does not leak.
Bundle size. The AI SDK is lightweight. LangChain pulls in a substantial dependency tree. For frontend-heavy applications, this matters.
Pick the Vercel AI SDK if you are building a Next.js or React app, want streaming UI out of the box, care about bundle size, and prefer plain TypeScript over framework abstractions.
Pick LangChain if you are building RAG pipelines, stateful multi-step agents, or anything that leans on its ecosystem of document loaders, retrievers, and vector store integrations.
Most TypeScript developers building web applications should start with the Vercel AI SDK. It does less, and that is the point. You add AI capabilities to your app without adopting a framework. When you hit the limits, you will know, and you can bring in LangChain for the specific pipeline that needs it.
LangChain is powerful, but it carries the weight of its Python heritage. The TypeScript version has improved dramatically, but the abstraction layer can feel heavy when all you need is a streaming chat endpoint. The indirection through chains, prompts, and executors adds cognitive overhead that does not always pay for itself.
The good news: they are not mutually exclusive. Use the AI SDK for your user-facing streaming features and LangChain for your backend RAG pipeline. That is a pattern that works well in production.
For a deeper comparison of AI frameworks and how they fit into agentic workflows, check out the AI agent frameworks guide, how to build AI agents in TypeScript, and the frameworks guide on SubAgent. If you are choosing tools by budget as well as architecture, pair this with the AI coding tools pricing comparison.