Deploy an AI app with Vercel
Vercel is the fastest way to ship a Next.js AI app: connect a repo, push to main, get a URL with edge caching, preview deploys, and built-in observability.
Prerequisites
- A Next.js project on GitHub
- A Vercel account
- API keys for any AI providers you use
Step-by-Step
1. Import the repo
On vercel.com, click Add New > Project, pick your repo, and accept the defaults. Vercel auto-detects Next.js.
2. Add environment variables
In Project Settings > Environment Variables, add your provider keys and scope each variable to Production, Preview, or Development:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
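A missing key is a common source of silent failures after deploy. A small guard like the following fails fast instead; this is a sketch, and `requireEnv` is a hypothetical helper, not a Vercel or Next.js API:

```typescript
// Hypothetical helper: throw immediately if a required key is absent,
// rather than failing later inside an AI provider call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage (keys from this guide):
// const openaiKey = requireEnv('OPENAI_API_KEY');
```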
3. Tune the runtime
Streaming AI routes benefit from the Edge Runtime's low latency. Add `export const runtime = 'edge'` to your route handler:

```typescript
// app/api/chat/route.ts
export const runtime = 'edge';

export async function POST(req: Request) {
  /* ... */
}
```
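Fleshing that handler out, a streamed response can be built with the standard `ReadableStream` API. This is a minimal sketch: the hardcoded token array stands in for output from an AI provider SDK, and the `sseFrame` server-sent-event framing is an assumption of this example, not a Vercel requirement:

```typescript
// app/api/chat/route.ts
export const runtime = 'edge';

// Format one chunk as a server-sent-event frame (assumed framing).
export function sseFrame(data: string): string {
  return `data: ${data}\n\n`;
}

export async function POST(req: Request): Promise<Response> {
  const tokens = ['Hello', ' ', 'world']; // placeholder for real model output
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      for (const token of tokens) {
        controller.enqueue(encoder.encode(sseFrame(token)));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```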
4. Deploy and preview
Push to a feature branch and Vercel creates a preview URL. Push to main for production.
5. Add a custom domain
In Project Settings > Domains, add your domain and follow the DNS instructions. The certificate provisions in seconds.
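For reference, the records you add usually look like this (example domain; the target values follow Vercel's commonly documented defaults, so confirm against the instructions shown in your dashboard):

```
A      example.com       76.76.21.21
CNAME  www.example.com   cname.vercel-dns.com
```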
6. Monitor
Enable Web Analytics and Speed Insights. For AI usage, integrate Helicone or Langfuse to track cost and latency.
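Proxy-based tools like Helicone work by swapping the provider base URL and adding an auth header. The sketch below only builds those headers; the URL and header name follow Helicone's documented proxy pattern, but verify them against the current docs before relying on them:

```typescript
// Assumed Helicone proxy endpoint for OpenAI-compatible calls.
const HELICONE_BASE_URL = 'https://oai.helicone.ai/v1';

// Build the extra headers the proxy expects (assumed header name).
export function heliconeHeaders(heliconeKey: string): Record<string, string> {
  return { 'Helicone-Auth': `Bearer ${heliconeKey}` };
}

// Usage with the official openai client (sketch):
// const client = new OpenAI({
//   baseURL: HELICONE_BASE_URL,
//   defaultHeaders: heliconeHeaders(process.env.HELICONE_API_KEY!),
// });
```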
Common Pitfalls
- The Edge runtime cannot use Node-only dependencies (fs, sharp). Check your imports.
- Function timeouts default to 10s on the Hobby plan; raise the limit on Pro for long AI calls.
- Client-side env vars need the NEXT_PUBLIC_ prefix, or they will be undefined in the browser.
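For the timeout pitfall, Next.js route segment config can raise the ceiling on plans that permit it. A sketch, assuming a Node-runtime route on a plan whose function limit allows 60 seconds (check your plan's actual limits):

```typescript
// app/api/chat/route.ts
// Allow up to 60 seconds for long-running AI calls (requires a plan
// whose function duration limit is at least this high).
export const maxDuration = 60;
```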
What's Next
- Add Vercel Postgres or KV for storage.
- Set up branch protection so production only deploys from main.
