OpenAI's next major model is close — closer than most people expected. The company finished pre-training the model internally codenamed Spud on March 24, 2026, at the Stargate data center in Abilene, Texas. CEO Sam Altman described the launch as coming in "a few weeks." That was three weeks ago.
Yet as of April 15, GPT-6 still isn't here. The April 14 rumor came and went unconfirmed. Polymarket traders now assign 78% odds of a launch before April 30. Here's everything confirmed, everything rumored, and how it stacks up against Claude Opus 4.6 and Gemini Ultra.
What's Actually Confirmed
Strip away the speculation and three facts are solid:
- Pre-training is done. OpenAI completed training Spud on March 24 at the Abilene Stargate facility — a data center that represents the first major output of the OpenAI-SoftBank-Oracle infrastructure deal.
- Sam Altman said "a few weeks." Speaking at BlackRock's US Infrastructure Summit on March 12, Altman called the model in training "what we think will be the best model in the world, hopefully by a lot." Pre-training completed March 24; he reiterated the "a few weeks" timeline that same day.
- The April 14 date was never official. It circulated widely — but OpenAI made no announcement. The model is still in post-training evaluation (safety testing, RLHF fine-tuning, red-teaming).
The 'Spud' Codename — What It Tells Us
OpenAI's internal naming convention offers clues. GPT-4 was "Arrakis." GPT-5 launched without a catchy codename leak. "Spud" is humble — a potato — which some analysts read as intentional understatement, consistent with Altman's pattern of under-promising and over-delivering on capability.
The name also suggests this is a distinct generation rather than a GPT-5.5 update. Multiple independent AI release trackers report that "Spud" maps to the GPT-6 lineage, not a point release.
Expected Features
No official spec sheet exists. Based on Altman's statements, leaked evaluations, and credible analyst notes, here's what Spud is expected to bring:
- Persistent memory — ChatGPT that remembers your preferences, routines, and context across sessions without manual setup
- Agentic reasoning leap — multi-step autonomous task completion, stronger than GPT-5.4's current agent mode
- 40% coding/reasoning improvement — rumored benchmark jump in code generation and logical inference tasks
- Personal chatbot creation — users build bots that "mirror personal tastes" using their own interaction history
- Enhanced multimodal reasoning — deeper image, document, and data analysis in a single prompt
The memory angle is the most significant. Altman has called memory "the key to making ChatGPT truly personal" — the model needs to "remember preferences, routines, and quirks and adapt accordingly." Current ChatGPT memory is opt-in and shallow. GPT-6 is expected to make it structural.
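Nothing about GPT-6's memory system is public, so the mechanics are speculation. But the gap the article describes — opt-in notes versus memory that persists by default — can be illustrated with a toy sketch. Every name here (the file path, the functions) is hypothetical, not an OpenAI API:

```python
import json
from pathlib import Path

# Toy illustration only -- no GPT-6 memory API has been published.
# "Structural" memory means preferences are persisted across sessions
# automatically, rather than via explicit opt-in notes.
MEMORY_FILE = Path("user_memory.json")  # hypothetical local store

def load_memory() -> dict:
    """Return everything remembered from prior sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def remember(key: str, value: str) -> None:
    """Silently record a preference mid-conversation, no user action needed."""
    memory = load_memory()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory))

remember("preferred_language", "Python")
print(load_memory()["preferred_language"])  # Python
```

The point of the sketch: once the store outlives the session, a new conversation starts with context already loaded — which is the "structural" shift the article is describing.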
Release Date: The Honest Breakdown
The standard post-training evaluation cycle at OpenAI runs 4–6 weeks. Stargate pre-training completed March 24 — a 4-week window closes April 21, a 6-week window closes May 5. Those dates align with the Polymarket consensus.
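The window math above is simple enough to verify directly — March 24 plus four and six weeks, assuming the cycle lengths the article cites:

```python
from datetime import date, timedelta

# Pre-training completion date reported in the article.
pretraining_done = date(2026, 3, 24)

# Assumed post-training evaluation cycle: 4-6 weeks.
window_close_4wk = pretraining_done + timedelta(weeks=4)
window_close_6wk = pretraining_done + timedelta(weeks=6)

print(window_close_4wk.isoformat())  # 2026-04-21
print(window_close_6wk.isoformat())  # 2026-05-05
```

Both dates land inside the Polymarket traders' late-April/early-May consensus.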
Why hasn't it shipped yet? Three likely reasons: safety red-teaming takes time, deployment infrastructure at Stargate scale requires careful staging, and OpenAI may be waiting for a strategic moment (a major conference, a competitor announcement) to maximize impact.
GPT-6 vs. The Competition Right Now
GPT-6 (Spud)
- Pre-training complete, safety eval ongoing
- Expected: memory, agentic leaps, 40% coding boost
- No pricing confirmed
- Polymarket: 78% by April 30

Claude Opus 4.6
- Current top-ranked reasoning model
- Strong coding and analysis benchmark scores
- $15/MTok input, $75/MTok output (API)
- Extended thinking mode available
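To make the per-token rates listed above ($15/MTok input, $75/MTok output) concrete, here is a quick back-of-envelope cost function; the request sizes are illustrative, not measured:

```python
INPUT_PER_MTOK = 15.00   # USD per million input tokens (rate listed above)
OUTPUT_PER_MTOK = 75.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single API call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK + \
           (output_tokens / 1_000_000) * OUTPUT_PER_MTOK

# A hypothetical long-context request: 50k tokens in, 2k tokens out.
print(round(request_cost(50_000, 2_000), 2))  # 0.9
```

At these rates, input volume dominates long-context workloads even though the output rate is five times higher per token — worth keeping in mind when comparing launch pricing once GPT-6 ships.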
The honest answer: until GPT-6 ships and benchmarks drop, Claude Opus 4.6 and Gemini Ultra are the best available options for demanding AI tasks. OpenAI's own GPT-5.4 holds its own in coding. GPT-6 will matter — when it arrives.
What About GPT-5.5?
Some trackers distinguish between a "GPT-5.5 Spud" (an incremental release) and a full GPT-6. OpenAI has used confusing version nomenclature before — GPT-4o, GPT-4o mini, and GPT-4.5 all shipped under the GPT-4 brand.
The current best read: Spud is substantial enough that OpenAI will market it as GPT-6, not a point release. Altman's language — "best model in the world by a lot" — doesn't describe a 0.5 bump.
Should You Wait?
Reasons to wait:
- Likely the most capable model available when it ships
- Memory features could replace multiple workflow tools
- Potential price competition may drop Claude/Gemini costs
- Agentic improvements could automate more complex tasks

Reasons not to wait:
- Still 2–6 weeks away at minimum
- Launch pricing could be higher than current GPT-5 tiers
- No guarantee benchmarks match the hype
- Competitors will respond quickly with their own updates
For most users: don't wait. GPT-5.4, Claude Opus 4.6, and Gemini 2.5 Pro are all excellent right now. GPT-6 will be a meaningful upgrade — but the productivity cost of pausing AI-assisted work for a few weeks is real.
For developers building new long-term projects: it's reasonable to keep architecture flexible. If GPT-6 ships with a genuinely different API (memory endpoints, agent orchestration primitives), early adopters will have an edge.
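One common way to keep that flexibility is a thin provider-agnostic interface. This is a generic pattern sketch, not any vendor's SDK — every class and method name here is made up for illustration:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface the application depends on."""
    def complete(self, prompt: str) -> str: ...

class Gpt54Adapter:
    def complete(self, prompt: str) -> str:
        return f"[gpt-5.4] {prompt}"  # stand-in for a real API call

class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"   # stand-in for a real API call

def summarize(model: ChatModel, text: str) -> str:
    # Application logic sees only the interface, never a vendor SDK,
    # so adopting a future model means writing one new adapter class.
    return model.complete(f"Summarize: {text}")

print(summarize(Gpt54Adapter(), "quarterly report"))
```

If GPT-6 does ship with new primitives (memory endpoints, agent orchestration), the adapter for it grows richer — but the rest of the codebase stays untouched.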
The Bottom Line
Watch for: an OpenAI blog post, a Sam Altman tweet, or a surprise livestream. The company doesn't do quiet launches for flagship models. When GPT-6 drops, you won't miss it.