Sam Altman Claims Critics of GPT-5 Are Mistaken

OpenAI’s August launch of the GPT-5 large language model was less than ideal. The livestream was marred by technical glitches, including charts with glaringly inaccurate figures. During a Reddit AMA with OpenAI staff, users complained that the model felt less friendly than its predecessor and urged the company to bring the earlier version back. Most vocally, critics lamented that GPT-5 did not live up to the expectations OpenAI had spent years building. Marketed as a revolutionary milestone, GPT-5 may have improved how the game is played, but it was still playing the same game.
Skeptics seized the opportunity to declare the end of the AI boom, and some even forecast the onset of another AI Winter. “GPT-5 was the most hyped AI system ever,” said Gary Marcus, a prominent critic, in the midst of his victory lap. “It was promised to achieve two things: AGI and PhD-level cognition, and it failed to deliver on both.” Furthermore, he argues, the underwhelming new model shows that OpenAI’s route to AGI, which relies on ever-larger datasets and ever more computing power to make its systems smarter, has hit a dead end. Notably, Marcus’s view resonated with a sizable segment of the AI community. In the days following the launch, GPT-5 seemed to be AI’s equivalent of New Coke.
Sam Altman isn’t having it. A month after the launch, he enters a conference room at the company’s new headquarters in San Francisco’s Mission Bay, eager to convey to me and my colleague Kylie Robison that GPT-5 is everything he claimed it would be and that the pursuit of AGI remains on track. “The vibes were a bit off at launch,” he acknowledges. “But they’re fantastic now.” Yes, fantastic. It’s true that the backlash has subsided. Indeed, the company’s recent release of a mind-blowing tool for generating AI video has shifted attention away from GPT-5’s rocky debut. Altman’s message, however, is that the skeptics have gotten it wrong. He insists that the journey toward AGI is still on course.
Numbers Game
While critics may interpret GPT-5 as the fading twilight of an AI summer, Altman and his team assert that it reinforces AI technology as an essential tutor, a search-engine-replacing resource, and, notably, a sophisticated partner for scientists and programmers. Altman claims that users are beginning to see it from his perspective. “GPT-5 marks the first instance where users are saying, ‘Holy cow, it’s tackling this crucial piece of physics.’ Or a biologist might remark, ‘Wow, it really helped me solve this issue,’” he explains. “There’s a significant development here that didn’t occur with prior GPT models, which is AI starting to accelerate the process of new scientific discoveries.” (OpenAI has not disclosed the identities of those physicists or biologists.)
So what accounted for the lukewarm initial response? Altman and his team have identified several factors. One, they argue, is that since the release of GPT-4 the company has shipped interim versions that were transformative in their own right, particularly the advanced reasoning capabilities they introduced, so much of the improvement had already been absorbed by the time GPT-5 arrived. “The leap from 4 to 5 was greater than the transition from 3 to 4,” Altman remarks. “We’ve had numerous enhancements along the way.” OpenAI president Greg Brockman concurs: “I’m not surprised that many felt that [underwhelmed] reaction, considering we’ve been showcasing our progress.”
OpenAI also contends that because GPT-5 is tailored for specialized applications like science and programming, everyday users are gradually coming to recognize its benefits. “Most individuals are not physics researchers,” Altman notes. As Mark Chen, OpenAI’s head of research, elaborates, unless you’re proficient in mathematics, you’re unlikely to appreciate that GPT-5 ranks among the top five Math Olympians, while last year it was in the top 200.
As for the assertion that GPT-5 demonstrates the failure of scaling, OpenAI says this rests on a misunderstanding. Unlike earlier models, GPT-5 did not get its biggest gains from a drastically expanded dataset and more computational power. Instead, the new model improved through reinforcement learning, a technique that draws on feedback from human experts. Brockman explains that OpenAI’s models have reached the point where they can generate their own data to fuel that reinforcement learning process. “When the model is less capable, your goal is to simply train a larger version,” he states. “When the model becomes proficient, you aim to sample from it. You seek to learn from its own data.”
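To make “learning from its own data” concrete, here is a minimal sketch of one common form of the idea, a rejection-sampling self-training loop: sample several candidate answers from the current model, score them with an automatic verifier or reward model, and fine-tune on only the best ones. This is an illustration of the general technique Brockman is gesturing at, not OpenAI’s actual pipeline; every function name below (generate, verify, finetune) is a hypothetical stand-in.

```python
# Minimal sketch of a rejection-sampling self-training loop.
# All names and behaviors here are illustrative stand-ins, not OpenAI's API.
import random

def generate(model, prompt, n_samples=8):
    """Stand-in for sampling n candidate answers from the current model."""
    return [f"{prompt} -> candidate {i} (score hint {random.random():.2f})"
            for i in range(n_samples)]

def verify(candidate):
    """Stand-in for an automatic checker or reward model that scores an answer."""
    # Here we just read back the fake score embedded by generate().
    return float(candidate.rsplit("score hint ", 1)[1].rstrip(")"))

def finetune(model, examples):
    """Stand-in for a gradient update on the selected self-generated examples."""
    print(f"fine-tuning {model} on {len(examples)} self-generated examples")
    return model

def self_training_round(model, prompts, keep_top=2):
    selected = []
    for prompt in prompts:
        candidates = generate(model, prompt)
        # Keep only the highest-scoring samples: the model's own best data.
        selected.extend(sorted(candidates, key=verify, reverse=True)[:keep_top])
    return finetune(model, selected)

if __name__ == "__main__":
    model = "toy-model"
    model = self_training_round(model, ["prove the lemma", "fix the failing test"])
```

The point of the sketch is the contrast Brockman draws: rather than simply training a bigger pretrained model, the loop extracts better behavior from an already capable model by selecting and reinforcing its own strongest outputs.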