You want to build an AI-powered app in TypeScript. You search for frameworks and land on two names: LangChain and the Vercel AI SDK.
Both are production-ready. Both support multiple LLM providers. Both have TypeScript-first APIs. But they solve different problems, and picking the wrong one costs you time.
Here is an honest breakdown.
Vercel AI SDK is minimal by design. It gives you streaming, tool calling, and structured output with almost no abstraction layer. You write normal TypeScript. The SDK handles the transport and provider differences so you do not have to.
LangChain is an orchestration framework. It provides chains, agents, memory, retrievers, document loaders, and dozens of integrations out of the box. It is opinionated about how you compose AI workflows, and it gives you building blocks for complex pipelines.
The core tension: the AI SDK trusts you to build your own patterns. LangChain gives you pre-built patterns and asks you to learn its abstractions.
Here is the same basic task in both frameworks: stream a chat completion to the browser.
Vercel AI SDK:
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });
  return result.toDataStreamResponse();
}
Five lines of real logic. The useChat hook on the client handles the rest. No configuration objects, no chain definitions, no execution context.
LangChain:
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { HttpResponseOutputParser } from "langchain/output_parsers";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const model = new ChatOpenAI({
    modelName: "gpt-4o",
    streaming: true,
  });
  const parser = new HttpResponseOutputParser();
  const stream = await model
    .pipe(parser)
    .stream(messages.map((m: any) => new HumanMessage(m.content)));
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
More imports, more setup, and you are managing the stream format yourself. LangChain's strength is not simple chat. It is what comes after.
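For context, "managing the stream format yourself" means emitting server-sent-event frames by hand. A minimal sketch in plain TypeScript (no framework; relies only on the `Response`, `ReadableStream`, and `TextEncoder` globals available in modern browsers and Node 18+):

```typescript
// Build a text/event-stream Response by hand: each chunk becomes a
// "data: ...\n\n" frame, the framing an EventSource client expects.
function sseResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(`data: ${chunk}\n\n`));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```

This framing, plus backpressure and error handling, is what `toDataStreamResponse()` takes off your hands in the AI SDK version above.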
Tool calling is where both frameworks shine, but differently.
Vercel AI SDK:
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    getWeather: tool({
      description: "Get the weather for a location",
      parameters: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return { temp: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What is the weather in San Francisco?",
});
Tools are defined inline with Zod schemas. The SDK handles the function calling protocol, parses the model's tool call, executes your function, and, once you raise maxSteps above its default of one, feeds the result back to the model for a follow-up answer. Clean and predictable.
LangChain:
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const weatherTool = new DynamicStructuredTool({
  name: "getWeather",
  description: "Get the weather for a location",
  schema: z.object({
    city: z.string(),
  }),
  func: async ({ city }) => {
    return JSON.stringify({ temp: 72, condition: "sunny" });
  },
});

const model = new ChatOpenAI({ modelName: "gpt-4o" });
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

// createOpenAIFunctionsAgent returns a promise, so it must be awaited
const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools: [weatherTool],
  prompt,
});
const executor = new AgentExecutor({ agent, tools: [weatherTool] });
const result = await executor.invoke({
  input: "What is the weather in San Francisco?",
});
More ceremony. But the AgentExecutor gives you something the AI SDK does not out of the box: a loop. The agent can call multiple tools, reason about intermediate results, and decide when it is done. The AI SDK can do this too with maxSteps, but LangChain's agent abstraction is more structured.
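To make the loop concrete, here is roughly what an agent executor does for you, sketched in plain TypeScript with a stubbed model. Every name here is illustrative, not part of either library's API:

```typescript
// Illustrative agent loop: call the model, run any requested tool,
// feed the result back into the history, stop on a final answer.
type ModelTurn =
  | { type: "tool_call"; tool: string; args: Record<string, unknown> }
  | { type: "final"; text: string };

type Tool = (args: Record<string, unknown>) => Promise<string>;

async function runAgent(
  model: (history: string[]) => Promise<ModelTurn>,
  tools: Record<string, Tool>,
  input: string,
  maxSteps = 5,
): Promise<string> {
  const history = [input];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await model(history);
    if (turn.type === "final") return turn.text; // model decided it is done
    const result = await tools[turn.tool](turn.args); // run the requested tool
    history.push(`tool ${turn.tool} returned: ${result}`); // feed it back
  }
  throw new Error("agent exceeded maxSteps");
}
```

LangChain's AgentExecutor and the AI SDK's maxSteps both implement some version of this loop; the difference is how much structure sits around it.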
LangChain pulls ahead in three areas. RAG pipelines. LangChain has document loaders for PDFs, CSVs, web pages, Notion, and dozens of other sources. It has text splitters, embedding integrations, and vector store connectors. Building a retrieval-augmented generation pipeline in LangChain takes a fraction of the custom code you would write with the AI SDK.
Complex agent workflows. LangGraph (LangChain's agent framework) lets you define stateful, multi-step agent graphs with branching, cycles, and human-in-the-loop checkpoints. If you are building an agent that needs to plan, execute, reflect, and retry, LangGraph has the primitives.
Ecosystem breadth. LangChain integrates with nearly every vector database, document store, and LLM provider. If you need Pinecone + Cohere + a custom retriever + a multi-step chain, LangChain has pre-built components for all of it.
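To see what "custom code" means in the RAG case, here are two of the pieces you would hand-roll without LangChain: a text splitter and a cosine-similarity search over embedded chunks. A toy sketch in plain TypeScript; in practice the vectors would come from an embeddings API:

```typescript
// Naive fixed-size text splitter with overlap (assumes chunkSize > overlap).
function splitText(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize - overlap) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the k chunks most similar to the query embedding.
function topK(
  query: number[],
  docs: { text: string; embedding: number[] }[],
  k: number,
): string[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k)
    .map((d) => d.text);
}
```

Multiply this by loaders, chunking strategies, and vector store clients, and the value of LangChain's pre-built components becomes clear.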
Now the AI SDK's wins. Next.js integration. The AI SDK was designed for React Server Components and the App Router. useChat, useCompletion, and useObject are React hooks that handle streaming UI out of the box. No glue code needed.
Simplicity. The learning curve is almost flat. If you know TypeScript and React, you can ship an AI feature in an afternoon. There is no framework to learn, just functions you call.
Streaming-first architecture. Every function in the AI SDK is built around streaming. streamText, streamObject, streamUI. This is not bolted on. It is the default. For user-facing applications where perceived latency matters, this is a significant advantage.
Provider switching. Swap openai("gpt-4o") for anthropic("claude-sonnet-4-20250514") or google("gemini-2.0-flash"). Same API, same types, same streaming behavior. The provider abstraction is clean and does not leak.
Bundle size. The AI SDK is lightweight. LangChain pulls in a substantial dependency tree. For frontend-heavy applications, this matters.
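The provider-switching point can be sketched in plain TypeScript: every provider factory returns the same interface, so downstream code never branches on the vendor. These names are illustrative stand-ins, not the SDK's internals:

```typescript
// A unified model interface: provider factories return the same shape,
// so calling code is provider-agnostic. All names are illustrative.
interface LanguageModel {
  provider: string;
  modelId: string;
  generate(prompt: string): Promise<string>;
}

const openai = (modelId: string): LanguageModel => ({
  provider: "openai",
  modelId,
  generate: async (prompt) => `[${modelId}] ${prompt}`, // stubbed API call
});

const anthropic = (modelId: string): LanguageModel => ({
  provider: "anthropic",
  modelId,
  generate: async (prompt) => `[${modelId}] ${prompt}`, // stubbed API call
});

// Swapping providers is a one-line change at the call site;
// this helper never needs to know which vendor it is talking to.
async function ask(model: LanguageModel, prompt: string): Promise<string> {
  return model.generate(prompt);
}
```

The real SDK's interface carries streaming, tools, and typed outputs, but the principle is the same: the provider is a value you pass in, not a framework you build around.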
Pick the Vercel AI SDK if:
- You are building a Next.js or React app and want streaming UI with minimal setup
- Your needs are chat, tool calling, and structured output, not complex pipelines
- You want to swap model providers without changing application code
- Bundle size and a flat learning curve matter to you
Pick LangChain if:
- You are building a RAG pipeline and want pre-built loaders, splitters, and vector store connectors
- You need stateful, multi-step agents with branching, cycles, and human-in-the-loop checkpoints (LangGraph)
- You need its integration breadth: vector databases, document stores, custom retrievers
Most TypeScript developers building web applications should start with the Vercel AI SDK. It does less, and that is the point. You add AI capabilities to your app without adopting a framework. When you hit the limits, you will know, and you can bring in LangChain for the specific pipeline that needs it.
LangChain is powerful, but it carries the weight of its Python heritage. The TypeScript version has improved dramatically, but the abstraction layer can feel heavy when all you need is a streaming chat endpoint. The indirection through chains, prompts, and executors adds cognitive overhead that does not always pay for itself.
The good news: they are not mutually exclusive. Use the AI SDK for your user-facing streaming features and LangChain for your backend RAG pipeline. That is a pattern that works well in production.
For a deeper comparison of AI frameworks and how they fit into agentic workflows, check out the frameworks guide on SubAgent.