The prompt is the product. Every AI coding tool you use, whether it is Claude Code, Cursor, or Copilot, generates code based on what you tell it. Vague input produces vague output. Structured input produces production code.
Most developers treat prompts as search queries. They type "make a login page" and wonder why the result is a half-baked form with no validation, no error handling, and inline styles from 2019. The fix is not a better model. The fix is a better prompt.
This guide covers seven concrete patterns for writing prompts that produce code you can actually ship. No theory. No abstract frameworks. Just the patterns that work.
Every effective coding prompt has four parts. You do not need all four every time, but the more you include, the better your output.
1. Context. What exists in the project right now. What you are building. What files are relevant. AI tools cannot see your mental model. You have to externalize it.
2. Constraints. Tech stack, design patterns, naming conventions, rules. "Use server actions, not API routes." "Follow the existing Tailwind design system." "No default exports." These boundaries keep the AI from wandering.
3. Examples. Show, do not tell. Point the AI at an existing file that demonstrates the pattern you want. "Follow the same structure as src/components/Button.tsx" beats a paragraph of description every time.
4. Output format. What do you expect back? A complete file? A diff? A plan before implementation? Specifying the format prevents the AI from guessing wrong.
Here is what this looks like in practice:
```
I need a new API route at app/api/projects/route.ts.

Context: We use Convex for the database. The schema has a projects
table with name (string), description (string), userId (string),
and createdAt (number). See convex/schema.ts.

Constraints: Use the server actions pattern from app/api/users/route.ts.
Validate input with Zod. Return proper HTTP status codes.
No try/catch blocks around Convex calls (Convex handles its own errors).

Output: The complete route.ts file, ready to use.
```
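For illustration, here is a minimal sketch of the kind of handler that prompt tends to produce. The validation is hand-rolled so the sketch runs standalone; the real file would use Zod and a Convex mutation as the constraints require, and `userId` would come from the auth session rather than a placeholder.

```typescript
// Hypothetical sketch of the route the prompt might produce.
// Validation is hand-rolled here so the example is dependency-free;
// the real file would use Zod and a Convex mutation instead.

type ProjectInput = { name: string; description: string };

export function parseProjectInput(body: unknown): ProjectInput | null {
  if (typeof body !== "object" || body === null) return null;
  const { name, description } = body as Record<string, unknown>;
  if (typeof name !== "string" || name.trim() === "") return null;
  if (typeof description !== "string") return null;
  return { name, description };
}

export async function POST(request: Request): Promise<Response> {
  const body = await request.json().catch(() => null);
  const input = parseProjectInput(body);
  if (input === null) {
    return Response.json({ error: "Invalid input" }, { status: 400 });
  }
  // The Convex insert would go here, with no try/catch around it,
  // per the prompt's constraints. userId would come from the session.
  const project = { ...input, userId: "user_123", createdAt: Date.now() };
  return Response.json(project, { status: 201 });
}
```

Notice how every line traces back to something the prompt specified: the validation rules, the status codes, the absence of try/catch. That is what a structured prompt buys you.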
Compare that to "make a projects API." The first prompt produces working code on the first try. The second produces something you spend 20 minutes fixing.
Repeating the same context in every prompt is a waste. Write it once in a CLAUDE.md file and let the tool read it automatically.
```markdown
# CLAUDE.md

## Stack
- Next.js 16 + React 19 + TypeScript
- Convex for backend
- Tailwind for styling
- Clerk for auth

## Rules
- Always use server actions, never API routes
- Run `pnpm typecheck` after every change
- Never use default exports
- No inline styles. Tailwind only.
```
Claude Code loads this file at the start of every session. Every prompt you write after that inherits this context without you typing it. Over weeks, your CLAUDE.md becomes a detailed specification of how your project works, what patterns you follow, and what mistakes to avoid.
Three levels exist: project root (CLAUDE.md for the team), user-level (~/.claude/CLAUDE.md for personal preferences), and project-user (.claude/CLAUDE.md for your personal overrides on a specific repo). Layer them. The team file defines standards. Your personal file defines style. The project-user file handles edge cases.
The CLAUDE.md Generator can scaffold one for your stack in seconds.
When you want the AI to follow an existing convention, point it at a concrete example.
```
Follow the pattern in src/components/Button.tsx to create
a new Card component. Same prop interface style, same Tailwind
class organization, same export pattern.
```
This works because AI models are excellent at pattern matching. Showing them a reference file gives them a concrete template to follow rather than forcing them to guess your conventions from a verbal description. The output will mirror the structure, naming, and style of the reference file almost exactly.
This pattern scales. When you have a well-organized codebase, every new file becomes easier because you can reference an existing one. "Build a new page like app/blog/page.tsx but for the guides section" produces correct code because the model can see your routing conventions, data fetching patterns, and component structure in the reference.
Constraints are the most underused part of prompt engineering. They eliminate entire categories of bad output.
```
Build a settings page for user preferences.

Constraints:
- Tailwind only. No inline styles. No CSS modules.
- No gradients. Solid colors from the design system.
- Use the existing Form component from components/ui/form.tsx
- Store preferences in Convex, not localStorage
- Pill-shaped buttons only. Use the btn-pill class.
- Must pass TypeScript strict mode
```
Without constraints, the AI picks defaults. It might use inline styles because that is simpler. It might use localStorage because the prompt did not specify a database. It might use square buttons because that is what it was trained on.
Constraints turn "probably correct" into "definitely correct." The more opinionated your codebase, the more constraints you should specify. Or better yet, put them in your CLAUDE.md so every prompt inherits them automatically.
Writing tests first is a good practice with or without AI. With AI, it becomes a superpower.
```
Write unit tests for a calculateDiscount function that:
- Takes a price (number) and a coupon code (string)
- Returns the discounted price
- Handles invalid codes by returning the original price
- Handles negative prices by throwing
- Supports percentage and fixed-amount coupons

Use Vitest. Write the tests first. Then implement the function
to make all tests pass.
```
When you give the AI tests first, you give it a specification it can verify against. The AI does not just generate code and hope it works. It generates code, mentally runs it against the tests, and adjusts. The result is more correct on the first pass.
This pattern also forces you to think about edge cases upfront. What happens with negative prices? Empty strings? Expired coupons? Writing the tests first surfaces these questions before implementation begins.
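To make the specification concrete, here is one implementation those tests could drive. The coupon table is a hypothetical stand-in for a database lookup, and in a real project the assertions would live in a Vitest file rather than inline.

```typescript
// One possible implementation of the spec above. The COUPONS table
// is a hypothetical stand-in for a database lookup.

type Coupon =
  | { kind: "percentage"; value: number } // value = 10 means 10% off
  | { kind: "fixed"; value: number };     // flat amount off the price

const COUPONS: Record<string, Coupon> = {
  SAVE10: { kind: "percentage", value: 10 },
  FIVEOFF: { kind: "fixed", value: 5 },
};

export function calculateDiscount(price: number, code: string): number {
  if (price < 0) throw new Error("price must be non-negative");
  const coupon = COUPONS[code];
  if (coupon === undefined) return price; // invalid code: original price
  if (coupon.kind === "percentage") {
    return price * (1 - coupon.value / 100);
  }
  return Math.max(0, price - coupon.value); // never discount below zero
}
```

Each branch maps directly to one test case in the prompt, which is exactly what makes the tests-first version easy to verify.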
Sometimes you do not want the AI to rewrite a file. You want to see what it plans to change.
```
Show me the changes needed to add rate limiting to
app/api/chat/route.ts. Output as a diff. Do not apply
the changes yet.
```
This is defensive prompting. On large files, having the AI rewrite the entire thing risks introducing regressions. The diff pattern lets you review the proposed changes before they touch your codebase. You catch problems before they become problems.
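A response to that prompt might look something like the patch below. The `rateLimit` helper and its options are invented for illustration; the point is that you review a focused patch instead of a rewritten file.

```diff
--- a/app/api/chat/route.ts
+++ b/app/api/chat/route.ts
@@ -1,4 +1,9 @@
 import { NextResponse } from "next/server";
+import { rateLimit } from "@/lib/rate-limit"; // hypothetical helper
 
 export async function POST(request: Request) {
+  const allowed = await rateLimit(request, { max: 20, windowMs: 60_000 });
+  if (!allowed) {
+    return NextResponse.json({ error: "Too many requests" }, { status: 429 });
+  }
   const body = await request.json();
```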
In Claude Code, you can also ask it to enter plan mode: "Outline your approach before writing any code." This produces a numbered plan that you review and approve before any files get modified. Use this for any change that touches more than three files.
Single-threaded AI assistance is slow. If your tool supports it, parallelize.
```
Spawn three agents in parallel:
1. API agent: Build the webhook handler at app/api/webhooks/stripe/route.ts
2. Frontend agent: Build the pricing page at app/pricing/page.tsx
3. Test agent: Write integration tests for the billing flow
```
Claude Code sub-agents let you decompose work across multiple focused instances. Each agent gets its own context, its own files, and its own task. The API agent does not need to know about the pricing page layout. The test agent does not need to know about webhook verification. Context isolation improves quality.
This mirrors how engineering teams actually work. You do not have one developer build the API, the frontend, and the tests sequentially. You split the work. AI development should work the same way.
For complex features, asking the AI to plan before coding produces dramatically better results.
```
I need to add organization support to this app. Users should be
able to create organizations, invite members, and share projects
within an organization.

Before writing any code:
1. List the schema changes needed
2. List the new API routes or server functions
3. List the new UI components
4. Identify which existing files need modification
5. Flag any potential issues or edge cases

Then wait for my approval before implementing.
```
The plan-first pattern prevents the AI from charging forward with a bad architecture. Reviewing a plan takes 30 seconds. Undoing a bad implementation takes 30 minutes. The trade-off is obvious.
This pattern works especially well for features that touch multiple layers of your stack. Authentication changes, billing integrations, multi-tenancy. Anything where one wrong assumption cascades into broken code across multiple files.
Being too vague. "Make it better" tells the AI nothing. Better how? Faster? Prettier? More accessible? More type-safe? Specificity is the difference between useful output and random changes.
Over-specifying implementation. "Use a useState hook called isOpen, default to false, and toggle it with a function called handleToggle that calls setIsOpen with the negation of the current value." You just wrote the code yourself. Tell the AI what you want, not how to build it. "Add a collapsible sidebar that remembers its state across page loads" gives the AI room to use the best approach.
Asking for everything at once. "Build a full e-commerce platform with auth, payments, inventory, shipping, reviews, and an admin panel." No AI tool produces good output for a prompt this broad. Break it into features. Build one at a time. Each feature becomes context for the next.
Ignoring file context. If you do not tell the AI which files to read, it guesses. If it guesses wrong, the output will not fit your project. "Read src/lib/auth.ts and src/middleware.ts before making changes to the auth flow" takes three seconds to type and saves minutes of debugging.
No error recovery instructions. AI tools make mistakes. A good prompt anticipates this: "If the TypeScript compiler throws errors, fix them before moving on." Without this, some tools generate code, declare success, and leave you with a broken build.
Claude Code rewards preparation. The more context it has before you start prompting, the better every response will be.
- CLAUDE.md files for persistent context. Project rules, stack details, and conventions load automatically at session start. The CLAUDE.md Generator helps you scaffold one.
- `.claude/commands/` for workflows you repeat. A `/review` command that checks for type safety, security, and performance issues saves you from typing the same review prompt every session.

Cursor excels at file-aware editing and fast iteration loops.

- @file references point the AI at specific files. `@src/components/Button.tsx` injects the file content into your prompt context automatically.
- `.cursorrules` or `.cursor/rules` files serve the same purpose as CLAUDE.md for Cursor. Write your stack details and conventions there.

Copilot works best as an autocomplete engine, not a conversational partner. A comment like `// Validate email format and check for duplicates against the database` produces better completions than writing the function name alone.

Prompt engineering is not a one-time skill. It compounds. Your CLAUDE.md gets better over time. Your custom commands handle more edge cases. Your constraint lists become more precise. Your reference files become cleaner patterns for future generation.
After a month of deliberate prompting, you will notice something: the AI tools produce code that feels like your code. Same style, same patterns, same conventions. Not because the model learned your preferences (it did not). Because you taught it through structured context, constraints, and examples.
That is the real skill. Not writing clever prompts. Writing the right context so the AI never needs a clever prompt in the first place.
Start with your CLAUDE.md. Add constraints from your last five "the AI got it wrong" moments. Point it at your best files as references. The rest follows.
For more on getting the most out of AI coding tools, see the vibe coding guide, the Claude Code tips and tricks deep dive, and the Prompt Tester tool on this site.