GPT-OSS: OpenAI's First Open Source Model


First Open-Weight Models Since GPT-2

OpenAI has released its first open-weight models in over five years. GPT-OSS 120B and GPT-OSS 20B are now available under the Apache 2.0 license, marking a significant strategic shift for the company. Both are reasoning models built on a Mixture of Experts (MoE) architecture, designed to run efficiently on accessible hardware while delivering competitive performance against frontier closed models.

Architecture overview of GPT-OSS MoE design

Model Specifications

Two variants are available:

GPT-OSS 20B - The efficient option. Activates 3.6 billion parameters per token and runs on a laptop with 16GB of RAM. Suitable for offline, private deployments where data cannot leave the local environment.

GPT-OSS 120B - The larger variant. Despite its 120-billion-parameter size, the MoE design activates only 5.1 billion parameters per token, making it deployable on a single 80GB GPU such as an NVIDIA A100. This model targets production applications requiring higher capability.
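
The reason only a fraction of the parameters fire per token is top-k expert routing: a small router picks a handful of experts, and only their weights enter the forward pass. A toy sketch in pure Python (the 32-expert count and k=4 here are illustrative assumptions, not GPT-OSS's published configuration):

```python
import math

def route(logits, k=4):
    """Toy MoE router: softmax over per-expert logits, keep the top k.

    Only the k selected experts' parameters run for this token, which
    is how a 120B-parameter model can activate only ~5.1B per token.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    topk = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    z = sum(probs[i] for i in topk)            # renormalize kept weights
    return {i: probs[i] / z for i in topk}

weights = route([0.1 * i for i in range(32)])  # 32 hypothetical experts
```

Everything outside the selected experts sits idle for that token, so memory must hold the full model but compute scales with the active subset.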

Both models support a 128,000-token context window and were trained primarily on English text with emphasis on STEM, coding, and general knowledge. As part of this announcement, OpenAI has also open-sourced the o200k_harmony tokenizer, a superset of the tokenizer used by GPT-4o and o4-mini.
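
A quick way to sanity-check a prompt against that 128,000-token window is a rough character-based estimate. The ~4 characters/token figure below is a common heuristic for English text, not an exact count; the released tokenizer gives precise numbers:

```python
CONTEXT_WINDOW = 128_000  # tokens, per the GPT-OSS spec above

def fits_in_context(text, reserved_output=4_096, chars_per_token=4):
    """Rough check: does this prompt leave room for the reply?

    chars_per_token=4 is a heuristic for English; tokenize with the
    released o200k tokenizer when you need an exact count.
    """
    est_tokens = len(text) / chars_per_token
    return est_tokens + reserved_output <= CONTEXT_WINDOW
```

Reserving a few thousand tokens for the response matters most for reasoning models, since chain-of-thought output can be long.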

Chain-of-Thought with Tool Integration

The standout feature is the integration of tool use within the reasoning process. During the post-training phase, OpenAI trained these models to invoke tools like web search and code execution before finalizing responses. This happens inside the chain-of-thought trace.

This architecture eliminates the need for external agent orchestration. The model can search, evaluate results, and decide to search again if the first query fails, all within its internal reasoning loop. For developers building agentic applications, this reduces complexity significantly. No separate agent framework is required to handle tool selection, reflection, and iterative refinement.
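
Because the model handles search, evaluation, and retry inside its own reasoning, the client side collapses to a simple loop that shuttles tool outputs back. A minimal sketch, assuming hypothetical `call_model` and `run_tool` helpers that wrap the chat API and your tool implementations:

```python
def agent_loop(messages, call_model, run_tool, max_steps=8):
    """Feed tool results back until the model returns a final answer.

    call_model and run_tool are hypothetical helpers: call_model wraps
    a chat-completions request and returns either
    {"tool": name, "args": {...}} or {"content": text}.
    """
    for _ in range(max_steps):
        reply = call_model(messages)
        if "tool" not in reply:
            return reply["content"]           # final answer, no more tools
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("tool loop did not terminate")
```

All the tool selection, reflection, and iterative refinement lives in the model's chain of thought; the loop above is the entire client-side orchestration.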

Workflow diagram showing tool use during reasoning

Performance Benchmarks

The 120B model outperforms o3-mini across standard benchmarks, even without tool access. Against the full o3 model, it remains competitive.

Benchmark            | GPT-OSS 120B    | GPT-OSS 20B
MMLU                 | 90.0%           | 85.3%
GPQA Diamond         | 80.1%           | 71.5%
Humanity's Last Exam | Strong          | Strong for size
Competition math     | Near o3/o4-mini | Competitive

On Artificial Analysis's aggregate rankings, these models sit respectably alongside Gemini 2.5, Grok 2, and other frontier systems. The critical caveat: these are not code-generation specialists. They will not build full web applications from a single prompt the way Claude Opus or similar top-tier coding models can. They excel at reasoning, analysis, and tool-augmented tasks rather than end-to-end application generation.

Benchmark comparison chart

Deployment Costs and Options

Because these are Apache 2.0 licensed, hosting competition is already aggressive:

GPT-OSS 120B:

  • Fireworks: $0.10 per million input tokens / $0.50 output
  • Groq: $0.15 per million input tokens / $0.75 output

GPT-OSS 20B:

  • Fireworks: $0.05 per million input tokens / $0.20 output
  • Groq: $0.10 per million input tokens / $0.50 output

Groq delivers over 1,000 tokens per second on the 20B model and approximately 500 tokens per second on the 120B variant. OpenRouter provides unified billing across providers with transparent latency and throughput metrics if you prefer a single integration point.
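
At these rates, per-request costs are easy to estimate. A small helper using the prices quoted above (USD per million tokens; provider pricing changes frequently, so treat this table as a snapshot, not a reference):

```python
# (input, output) price in USD per million tokens, from the list above.
PRICES = {
    ("fireworks", "120b"): (0.10, 0.50),
    ("groq", "120b"): (0.15, 0.75),
    ("fireworks", "20b"): (0.05, 0.20),
    ("groq", "20b"): (0.10, 0.50),
}

def request_cost(provider, model, input_tokens, output_tokens):
    """Cost in USD for one request at the quoted per-million rates."""
    pin, pout = PRICES[(provider, model)]
    return (input_tokens * pin + output_tokens * pout) / 1_000_000
```

For scale, a 50,000-token input with a 2,000-token reply on Groq's 120B endpoint works out to roughly $0.009.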

Pricing comparison across hosting providers

Running Locally and Getting Started

For local execution, Hugging Face hosts the model weights. Ollama provides the simplest setup path:

ollama run gpt-oss  # Defaults to 20B model

For the 120B model, you need heavier hardware: an 80GB GPU such as an A100, or an Apple Silicon machine like an M3 Max with enough unified memory to hold the weights.

Cloud deployment options include Groq for low-latency inference, Fireworks for cost optimization, and OpenRouter for multi-provider access. Each platform exposes the standard OpenAI-compatible API, making migration straightforward.
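
Because every platform exposes the OpenAI-compatible API, switching providers is mostly a base-URL swap. A sketch of building the request (the base URL and model ID below are assumptions for illustration; check each provider's docs for the exact values):

```python
import json

def build_chat_request(model, prompt,
                       base_url="https://api.groq.com/openai/v1"):
    """Return (url, JSON body) for the OpenAI-compatible chat route.

    Swap base_url for Fireworks, OpenRouter, or a local Ollama server
    (http://localhost:11434/v1) without touching the payload shape.
    """
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,  # provider-specific model ID; assumed here
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)
```

The same payload works everywhere; only the endpoint, API key, and model identifier change between providers.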

The Bottom Line

GPT-OSS fills a specific niche: capable reasoning with tool integration at low cost and manageable hardware requirements. These models are not replacements for top-tier closed models on creative or complex coding tasks. They are practical choices for applications requiring reasoning, moderate coding assistance, and agentic tool use without the infrastructure overhead of massive parameter counts or closed API dependencies.


Watch the Video

<iframe width="100%" height="415" src="https://www.youtube.com/embed/nRQEQaPehjc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>