TL;DR
One command, zero config. DD Traces is a local-first OpenTelemetry viewer for developers who use AI coding tools and want to see what happened.
Every time you run an AI coding tool, a lot happens behind the scenes. Claude Code calls models, executes tools, reads files, runs bash commands, edits code, and makes decisions at each step. Codex does the same. So does Cursor.
But when something goes wrong - or when you just want to understand what your agent actually did - there is no good way to see it. You scroll through terminal output. You guess at timings. You have no idea how many tokens were used or what they cost.
The observability gap for AI development is real. Traditional distributed tracing tools like Jaeger and Zipkin exist, but they were built for microservices, not for AI agent workflows. Setting them up locally means Docker containers, config files, and a UI designed for SRE teams, not individual developers.
Cloud-hosted alternatives like LangSmith and Langfuse require accounts, API keys, and sending your data to someone else's servers. For local development, that is friction you do not need.
DD Traces solves this with a single command:
npx dd-traces
That starts a local OTLP collector on port 4318 and a web dashboard on port 6006. No Docker. No accounts. No config files. No data leaving your machine.
Point your app at http://localhost:4318, use your AI tools normally, and watch traces stream in live.
If you are building AI applications with the Vercel AI SDK, DD Traces fits in cleanly. The AI SDK has built-in OpenTelemetry support through its experimental_telemetry option. When enabled, every generateText and streamText call emits spans with model info, token counts, tool calls, and timing data.
Here is the full setup. Two files, under a minute.
Install the exporter packages:
npm install @vercel/otel @opentelemetry/exporter-trace-otlp-proto
Create instrumentation.ts in your project root:
import { registerOTel } from "@vercel/otel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

export function register() {
  registerOTel({
    serviceName: "my-ai-app",
    traceExporter: new OTLPTraceExporter({
      url: "http://localhost:4318/v1/traces",
    }),
  });
}
Add experimental_telemetry to your AI SDK calls:
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: anthropic("claude-sonnet-4-20250514"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      functionId: "chat",
    },
  });
  return result.toDataStreamResponse();
}
That is it. Every call now emits a full trace with parent-child spans, token usage, tool calls, and timing data. DD Traces picks them up automatically.
If you have many AI calls, a small helper keeps things clean:
// lib/telemetry.ts
import type { TelemetrySettings } from "ai";

export function aiTelemetry(
  functionId: string,
  meta?: Record<string, string>
): { experimental_telemetry: TelemetrySettings } {
  return {
    experimental_telemetry: {
      isEnabled: true,
      functionId,
      metadata: meta,
    },
  };
}

// Usage in any route or server action:
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Summarize this document",
  ...aiTelemetry("summarize", { userId: "u-123" }),
});
You can also skip the explicit exporter URL by setting an environment variable:
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
The @vercel/otel package reads this automatically.
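With the endpoint in the environment, the exporter configuration can drop out of the code entirely. A minimal sketch, assuming the default exporter honors the variable as described above:

```typescript
// instrumentation.ts -- variant relying on OTEL_EXPORTER_OTLP_ENDPOINT.
// No explicit exporter URL; @vercel/otel reads the environment variable.
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "my-ai-app" });
}
```

This keeps the endpoint out of source control, which is handy if teammates run the collector on different ports.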
Once traces flow in, DD Traces gives you several views designed for AI development workflows.
The main trace view is a waterfall timeline showing every span in a trace as a horizontal bar. Parent-child relationships are rendered as nested indentation, so you can see the full call hierarchy at a glance.
A typical AI trace looks like this:
POST /api/chat ============================== 4,217ms
auth.middleware == 23ms
ai.generateText (chat) ========================== 3,102ms
ai.generateText.doGenerate =================== 2,100ms
ai.toolCall: searchDocs ====== 340ms
ai.generateText.doGenerate ======= 620ms
db.insert (save response) === 45ms
Each bar is color-coded by type: pink for LLM calls, amber for tool calls, emerald for HTTP spans, blue for database queries. Duration bars scale proportionally so slow spans are immediately obvious.
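The proportional scaling is simple arithmetic: each bar's width is that span's share of the trace's total duration. A sketch (the function name is illustrative, not DD Traces' internals):

```typescript
// Width of a span's bar as a percentage of the trace's total duration.
export function barWidthPct(spanMs: number, traceMs: number): number {
  return (spanMs / traceMs) * 100;
}

// In the example trace above, the 2,100ms doGenerate span inside a
// 4,217ms trace fills roughly half the timeline width.
```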
Click any span and a detail panel shows everything the AI SDK reported: model info, token counts, tool calls, timing data, and full inputs and outputs (when recordInputs and recordOutputs are enabled).
For streaming calls, you also see msToFirstChunk and avgCompletionTokensPerSecond, so you can measure perceived latency separately from total duration.
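The two streaming metrics are related by simple arithmetic: throughput is measured over the window after the first chunk arrives. A hedged sketch (the span shape here is an assumption, not DD Traces' actual schema):

```typescript
// Hypothetical span shape; field names are illustrative.
interface StreamingStats {
  msToFirstChunk: number;   // wait before the first streamed chunk
  totalMs: number;          // total span duration
  completionTokens: number; // tokens in the completion
}

// Tokens per second over the streaming window, after the first chunk.
export function completionTokensPerSecond(s: StreamingStats): number {
  const streamingMs = s.totalMs - s.msToFirstChunk;
  return s.completionTokens / (streamingMs / 1000);
}
```

A call that waits 500ms for the first chunk and streams 100 tokens over the next 2 seconds is perceived as fast even though the total span is 2.5 seconds, which is exactly why the two numbers are reported separately.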
DD Traces calculates costs per span and per trace using a built-in model pricing table. You see exactly how many tokens each LLM call consumed and what it cost. Totals are aggregated at the trace level so you can answer "how much did this agent session cost?" in one glance.
The dashboard also tracks running totals across all traces in a session: total tokens per service, total cost per model, and the most expensive traces.
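The cost math itself is straightforward: token counts times per-model rates, summed per trace. A sketch under stated assumptions (the pricing numbers below are illustrative examples, not DD Traces' actual built-in table):

```typescript
// Example per-million-token prices in USD; real rates change over time.
const pricingPerMillionUSD: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "claude-sonnet-4-20250514": { input: 3, output: 15 },
};

export function spanCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const p = pricingPerMillionUSD[model];
  if (!p) return 0; // unknown models contribute no cost
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// Trace-level totals are the sum over that trace's LLM spans.
export function traceCostUSD(
  spans: { model: string; inputTokens: number; outputTokens: number }[]
): number {
  return spans.reduce(
    (sum, s) => sum + spanCostUSD(s.model, s.inputTokens, s.outputTokens),
    0
  );
}
```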
The service map renders a visual graph of how your services connect. For AI applications, this shows the flow from your HTTP endpoint through model calls, tool executions, and database writes. Nodes are color-coded by health status and annotated with request rates and error percentages.
Filter traces by status (success, error, slow), search by trace ID, service name, or operation. Real-time updates stream in via WebSocket so you do not need to refresh.
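The status filter reduces to a small classification rule. A sketch, assuming a slow-trace threshold (the threshold value and field names are assumptions, not DD Traces' actual defaults):

```typescript
type TraceStatus = "success" | "error" | "slow";

// A trace is "error" if any span failed, "slow" past a duration
// threshold, otherwise "success".
export function traceStatus(
  trace: { durationMs: number; hasError: boolean },
  slowThresholdMs = 3000
): TraceStatus {
  if (trace.hasError) return "error";
  if (trace.durationMs > slowThresholdMs) return "slow";
  return "success";
}
```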
The AI observability space is growing. Here is an honest comparison.
LangSmith is the most mature option. It has deep LangChain integration, team features, and a polished cloud dashboard. But it requires an account, sends data to LangChain's servers, and is primarily designed for LangChain workflows. If you are using the Vercel AI SDK or building without LangChain, the integration is less natural.
Langfuse is open source and can be self-hosted. It has a first-class AI SDK plugin and good cost tracking. The self-hosted path requires Docker and Postgres, which is more setup than most developers want for local work.
DD Traces is different in three ways:
Local-first. Your data never leaves your machine. There is no account to create, no API key to configure, no cloud service to trust with your prompts and responses.
Zero config. npx dd-traces and you are running. No Docker, no database, no environment variables beyond the OTLP endpoint.
Standard OTLP. DD Traces speaks native OpenTelemetry. It is not a proprietary SDK wrapper. Any tool that exports OTLP traces works out of the box - the AI SDK, Next.js auto-instrumentation, Express, Fastify, or your own custom spans.
The trade-off is clear. LangSmith and Langfuse are better for teams that need persistent storage, collaboration features, and managed infrastructure. DD Traces is better for individual developers who want fast local observability during development without any overhead.
DD Traces accepts standard OTLP, so it works with anything that exports traces.
Next.js auto-instrumentation gives you HTTP request spans, server-side rendering spans, and fetch spans for free when you add @vercel/otel. Combined with AI SDK telemetry, a single trace shows the full request lifecycle from HTTP request to model call to tool execution to response.
Express and Fastify work through the standard @opentelemetry/instrumentation-http and framework-specific instrumentation packages.
Database queries from Prisma, Drizzle, or raw pg show up as child spans when instrumented with their respective OTEL packages.
The AI SDK spans are the headline feature, but DD Traces is a general-purpose local OTLP viewer. If it emits OTLP, you can see it.
The full setup takes about 60 seconds.
Terminal 1 - Start DD Traces:
npx dd-traces
Terminal 2 - In your Next.js project:
npm install @vercel/otel @opentelemetry/exporter-trace-otlp-proto
Create instrumentation.ts:
import { registerOTel } from "@vercel/otel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

export function register() {
  registerOTel({
    serviceName: "my-app",
    traceExporter: new OTLPTraceExporter({
      url: "http://localhost:4318/v1/traces",
    }),
  });
}
Add experimental_telemetry: { isEnabled: true } to your AI SDK calls. Start your dev server. Open http://localhost:6006. Traces appear as requests come in.
DD Traces is actively being developed. The roadmap includes native integrations for Claude Code, Codex, and OpenCode trace formats, agent decision tree visualization, trace comparison (diff two traces side by side), and a cloud mode at traces.developersdigest.tech for team sharing and persistent storage.
The local-first experience is the foundation. Everything else builds on top of it.
If you build AI applications and want to actually see what is happening during development, give it a try. One command, and you have observability.
npx dd-traces