Windsurf SWE-1.5 Launches Same Day as Cursor 2.0

October 29th, 2025. Cursor drops Composer. Same day, Windsurf releases SWE-1.5. Both claim to be the fastest AI coding model.
Both say they're the best. Let's look at what the actual data shows.
What is SWE-1.5?
SWE-1.5 is Windsurf's latest frontier model: hundreds of billions of parameters delivering near-SOTA (state-of-the-art) coding performance. But here's the kicker: it runs at up to 950 tokens per second.
To put that in perspective:
- 13x faster than Claude Sonnet 4.5
- 6x faster than Claude Haiku 4.5
- Near-frontier intelligence at unprecedented speed
This is achieved through a partnership with Cerebras, an AI inference provider.
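The multipliers above imply rough throughputs for the comparison models. A quick back-of-the-envelope check (assuming the published multipliers are straight tokens-per-second ratios, which neither company spells out):

```python
# Implied throughput of the comparison models, assuming the
# published multipliers are simple tokens-per-second ratios.
swe_15_tps = 950

sonnet_tps = swe_15_tps / 13  # implied for Claude Sonnet 4.5
haiku_tps = swe_15_tps / 6    # implied for Claude Haiku 4.5

print(f"Implied Sonnet 4.5 throughput: {sonnet_tps:.0f} tok/s")
print(f"Implied Haiku 4.5 throughput:  {haiku_tps:.0f} tok/s")
```

That works out to roughly 73 tok/s for Sonnet 4.5 and 158 tok/s for Haiku 4.5, which is in the ballpark of what developers typically see from those models.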

Why Speed Actually Matters
When you're coding, waiting 20 seconds for AI to respond breaks your flow. That's the problem both Cursor and Windsurf are solving.
- Cursor's Composer: completes most tasks in under 30 seconds
- Windsurf's SWE-1.5: runs at 950 tokens/second
Both models achieve something similar - fast enough to keep you in flow state. The difference is in how they got there and what they optimize for.
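To see why throughput translates into flow, it helps to put numbers on a single response. The 2,000-token response length below is an illustrative assumption, not a figure from either company, and the slower throughput is the one implied by Windsurf's 13x claim:

```python
# How long a single streamed response takes at different throughputs.
# The 2,000-token response length is an illustrative assumption.
response_tokens = 2000

for name, tps in [("SWE-1.5", 950), ("Sonnet 4.5 (implied)", 73)]:
    seconds = response_tokens / tps
    print(f"{name}: {seconds:.1f}s")
```

At 950 tok/s that response streams in about 2 seconds; at the implied Sonnet speed it takes closer to half a minute. That gap is the difference between staying in flow and tabbing away.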
Training Philosophy
SWE-1.5 Training:
- End-to-end reinforcement learning in realistic coding environments
- Trained on diverse, real-world scenarios
- Focused on writing clean, maintainable code (not just code that passes tests)
- Worked with senior engineers and open-source maintainers for high-quality training data
- Custom Cascade agent harness
- Infrastructure powered by thousands of GB200 NVL72 chips
Result: Less verbose output, fewer unnecessary try-catch blocks, solutions that follow best practices.
Performance Benchmarks
On SWE-Bench Pro (a benchmark of real-world coding tasks), SWE-1.5 achieves near-frontier performance while completing tasks faster than any other model.

Windsurf's published benchmark chart plots speed against intelligence; most models trade one for the other, while SWE-1.5 is an outlier that achieves both.
Real-World Use Cases
Windsurf's engineers use SWE-1.5 daily for:
- Exploring large codebases - Quickly understand unfamiliar code (powers Windsurf's new Codemaps feature)
- Full-stack development - Build complete features from frontend to backend
- Infrastructure work - Edit Kubernetes manifests, Terraform configs, complex YAML files without memorizing field names
Tasks that used to take 20+ seconds now complete in under 5 seconds.
Technical Integration
When a model runs 10x faster, everything else becomes a bottleneck. Windsurf rewrote critical components to keep up:
- Lint checking optimizations
- Command execution improvements
- Custom request priority system for smooth agent sessions under load
These improvements reduce overhead by up to 2 seconds per step and benefit all models in Windsurf, not just SWE-1.5.
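Windsurf hasn't published the design of its request priority system, but the core idea (interactive agent steps jump ahead of background work under load) can be sketched with a standard priority queue. All names here are hypothetical:

```python
import heapq
import itertools

# Hypothetical sketch of a request priority system: interactive agent
# steps are dequeued before background work when the server is loaded.
INTERACTIVE, BACKGROUND = 0, 1  # lower number = higher priority


class RequestQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tiebreak within a priority

    def submit(self, priority, request):
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def next_request(self):
        _, _, request = heapq.heappop(self._heap)
        return request


q = RequestQueue()
q.submit(BACKGROUND, "reindex-codebase")
q.submit(INTERACTIVE, "agent-step")
print(q.next_request())  # prints "agent-step": interactive work is served first
```

The monotonically increasing counter keeps ordering stable within a priority level, so background requests still run in submission order once the interactive queue drains.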
Cursor Composer vs Windsurf SWE-1.5
Cursor Composer:
- 4x faster than GPT-4/Claude Opus
- 30-second completions for most tasks
- Agent-first interface (not file-first)
- Multiple agents run in parallel
- Git worktrees for isolated workspaces
- Built-in browser tool
Windsurf SWE-1.5:
- 13x faster than Sonnet 4.5
- 950 tokens/second throughput
- Near-SOTA coding performance
- Trained specifically for software engineering (not just coding)
- Integrated with Cascade agent harness
- Optimized for Windsurf's tool ecosystem
The Key Difference:
Cursor optimized for multi-agent workflows and speed. Windsurf optimized for integrated agent experience and throughput.
Both achieve sub-30-second completion times. Both use reinforcement learning. Both trained on real developer workflows.
Which One Should You Use?
Choose Cursor Composer if:
- You want multi-agent parallelization
- Agent-first interface appeals to you
- Git worktrees matter for your workflow
- You're already in the Cursor ecosystem
Choose Windsurf SWE-1.5 if:
- Raw speed is your priority (950 tok/s)
- You want near-SOTA performance
- Integrated agent experience matters
- You're exploring the Windsurf ecosystem
Real talk: Both are excellent. The competition between them is pushing the entire space forward.
What This Means for AI Coding
October 29th, 2025 marked a shift:
- First in-house models from major AI coding tools - Both companies stopped relying solely on OpenAI/Anthropic
- Speed is now table stakes - Sub-30-second completions are the baseline
- Specialized models beat general models - Training on real coding workflows matters
- The editor enables the model - Both companies use their tool data to improve training
The Bigger Picture
We're past the era of "just use GPT-4 for coding." Custom models trained on real developer workflows, optimized for speed, integrated with purpose-built editors - that's the new standard.
Both Cursor and Windsurf proved it's possible on the same day. And developers are the winners.
Try Them Yourself
Windsurf: https://windsurf.com/download
Cursor: https://cursor.com/download
Both models are available now. Test them with your actual workflow and see which one fits better.