Five breakthroughs that make OpenAI’s o3 a turning point for AI — and one big challenge




The end of 2024 brought reckonings for artificial intelligence, as industry insiders feared that progress toward even more intelligent AI was slowing down. But OpenAI's o3 model, announced just last week, has sparked a fresh wave of excitement and debate, suggesting that big improvements are still to come in 2025 and beyond.

This model, which has been made available to safety researchers for testing but not yet released publicly, achieved an impressive score on the ARC-AGI benchmark. The benchmark was created by François Chollet, a renowned AI researcher and creator of the Keras deep learning framework, and is specifically designed to measure a model's ability to adapt to novel tasks it was not trained on. As such, it provides a meaningful gauge of progress toward truly intelligent AI systems.

Notably, o3 scored 75.7% on ARC-AGI under standard compute conditions and 87.5% in a high-compute configuration, significantly surpassing previous state-of-the-art results, such as the 53% scored by Claude 3.5.

This achievement by o3 represents a surprising advancement, according to Chollet, who had been a critic of the ability of large language models (LLMs) to achieve this sort of intelligence. It highlights innovations that could accelerate progress toward superior intelligence, whether we call it artificial general intelligence (AGI) or not.

AGI is a hyped term, and ill-defined, but it signals a goal: intelligence capable of adapting to novel challenges or questions in ways that surpass human abilities.

OpenAI’s o3 tackles specific hurdles in reasoning and adaptability that have long stymied large language models. At the same time, it exposes challenges, including the high costs and efficiency bottlenecks inherent in pushing these systems to their limits. This article will explore five key innovations behind the o3 model, many of which are underpinned by advancements in reinforcement learning (RL). It will draw on insights from industry leaders, OpenAI’s claims, and above all Chollet’s important analysis, to unpack what this breakthrough means for the future of AI as we move into 2025.

The five core innovations of o3

1. “Program synthesis” for task adaptation

OpenAI's o3 model introduces a new capability called "program synthesis," which enables it to dynamically combine components learned during pre-training, such as specific patterns, algorithms, or methods, into new configurations. These components might include mathematical operations, code snippets, or logical procedures that the model has encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. François Chollet describes program synthesis as a system's ability to recombine known tools in innovative ways, like a chef crafting a unique dish from familiar ingredients. This feature marks a departure from earlier models, which primarily retrieve and apply pre-learned knowledge without reconfiguration, and it is an approach Chollet advocated months ago as the only viable way forward to better intelligence.
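OpenAI has not disclosed how o3 implements this, but the classic enumerative form of the idea is easy to illustrate: given a handful of known primitives, search their compositions for one that solves a task never seen as a whole. Everything below, including the primitive set and the depth limit, is an invented toy, not OpenAI's method.

```python
from itertools import product

# Toy "learned" primitives: operations the model already knows individually.
PRIMITIVES = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Enumerate compositions of primitives until one fits every
    input/output example, i.e. recombine known tools for a novel task."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names  # the recombination that solves the task
    return None

# Novel task: f(x) = (x + 1) * 2, never stored as a single learned rule.
print(synthesize([(1, 4), (3, 8)]))  # ('add1', 'double')
```

Real program synthesis in a frontier model would operate over learned, high-dimensional representations rather than a hand-written table of lambdas, but the recombination principle is the same.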

2. Natural language program search

At the heart of o3's adaptability is its use of chains of thought (CoTs) and a sophisticated search process that takes place during inference, when the model is actively generating answers in a real-world or deployed setting. These CoTs are step-by-step natural-language instructions the model generates to explore solutions. Guided by an evaluator model, o3 generates multiple solution paths and scores them to determine the most promising option. This approach mirrors human problem-solving, where we brainstorm different methods before choosing the best fit. In mathematical reasoning tasks, for example, o3 generates and evaluates alternative strategies to arrive at accurate solutions. Competitors like Anthropic and Google have experimented with similar approaches, but OpenAI's implementation sets a new standard.

3. Evaluator model: A new kind of reasoning

O3 actively generates multiple solution paths during inference, evaluating each with the help of an integrated evaluator model to determine the most promising option. By training the evaluator on expert-labeled data, OpenAI ensures that o3 develops a strong capacity to reason through complex, multi-step problems. This feature enables the model to act as a judge of its own reasoning, moving large language models closer to being able to “think” rather than simply respond.
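The mechanics are unpublished, but the pattern described here, generating several candidate solution paths, scoring each with an evaluator, and keeping the best, is essentially best-of-N sampling with a learned verifier. A minimal sketch, with a stand-in scoring function in place of a trained evaluator model:

```python
def generate_paths(problem, n=5):
    """Stand-in for the model proposing n candidate chains of thought.
    A real system samples diverse reasoning traces; here we fake the
    diversity with fixed perturbations of a toy arithmetic answer."""
    a, b = problem
    errors = [-2, -1, 0, 1, 2]
    return [{"steps": f"compute {a} + {b} (candidate {i})", "answer": a + b + e}
            for i, e in enumerate(errors[:n])]

def evaluator(problem, path):
    """Stand-in for a learned evaluator model: higher score means more
    plausible reasoning. We cheat by checking the arithmetic directly;
    the real evaluator is trained on expert-labeled reasoning traces
    and judges the steps without knowing the answer."""
    a, b = problem
    return 1.0 if path["answer"] == a + b else 0.0

def best_of_n(problem, n=5):
    """Generate n solution paths, score each, keep the most promising."""
    paths = generate_paths(problem, n)
    return max(paths, key=lambda p: evaluator(problem, p))

print(best_of_n((2, 3))["answer"])  # prints 5
```

The interesting engineering question, and the one the evaluator's training data answers, is how to score reasoning when no ground truth is available at inference time.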

4. Executing its own programs

One of the most groundbreaking features of o3 is its ability to execute its own chains of thought as tools for adaptive problem-solving. Traditionally, CoTs have been used as step-by-step reasoning frameworks for solving specific problems. OpenAI's o3 extends this concept by treating CoTs as reusable building blocks, allowing the model to approach novel challenges with greater adaptability. Over time, these CoTs become structured records of problem-solving strategies, akin to how humans document and refine their learning through experience. This ability shows how o3 is pushing the frontier in adaptive reasoning. According to OpenAI engineer Nat McAleese, o3's performance on unseen programming challenges, such as achieving a Codeforces rating above 2700, showcases its innovative use of CoTs to rival top competitive programmers; a 2700 rating places the model among the top echelon of competitive programmers globally.
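One way to picture reusable CoTs is as a memoized library of strategies: once a chain of thought has solved a class of problem, keep it and try it first on similar tasks before searching from scratch. The sketch below is an invented analogy, including the task-tagging scheme, not a description of o3's internals.

```python
class CoTLibrary:
    """Toy store of previously successful reasoning strategies,
    keyed by a coarse task type."""

    def __init__(self):
        self.strategies = {}  # task_type -> list of strategy callables

    def record(self, task_type, strategy):
        """Remember a strategy that worked for this kind of task."""
        self.strategies.setdefault(task_type, []).append(strategy)

    def solve(self, task_type, task, fallback):
        # Try remembered strategies first; fall back to fresh search
        # (the expensive path) only when none of them produce an answer.
        for strategy in self.strategies.get(task_type, []):
            result = strategy(task)
            if result is not None:
                return result
        return fallback(task)

lib = CoTLibrary()
lib.record("sorting", lambda xs: sorted(xs))
print(lib.solve("sorting", [3, 1, 2], fallback=lambda t: None))  # [1, 2, 3]
```

The payoff of such reuse is amortization: the cost of discovering a strategy is paid once, while every later task of the same shape skips the search.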

5. Deep learning-guided program search

O3 leverages a deep learning-driven approach during inference to evaluate and refine potential solutions to complex problems. This process involves generating multiple solution paths and using patterns learned during training to assess their viability. François Chollet and other experts have noted that this reliance on "indirect evaluations," where solutions are judged by internal metrics rather than tested in real-world scenarios, can limit the model's robustness when applied to unpredictable or enterprise-specific contexts.

Additionally, o3’s dependence on expert-labeled datasets for training its evaluator model raises concerns about scalability. While these datasets enhance precision, they also require significant human oversight, which can restrict the system’s adaptability and cost-efficiency. Chollet highlights that these trade-offs illustrate the challenges of scaling reasoning systems beyond controlled benchmarks like ARC-AGI.

Ultimately, this approach demonstrates both the potential and limitations of integrating deep learning techniques with programmatic problem-solving. While o3’s innovations showcase progress, they also underscore the complexities of building truly generalizable AI systems.

The big challenge to o3

OpenAI's o3 model achieves impressive results but at significant computational cost, consuming millions of tokens per task, and this costly approach is the model's biggest challenge. François Chollet, Nat McAleese, and others highlight concerns about the economic feasibility of such models, emphasizing the need for innovations that balance performance with affordability.

The o3 release has sparked attention across the AI community. Competitors such as Google with Gemini 2 and Chinese firms like DeepSeek with its DeepSeek-V3 model are also advancing, making direct comparisons difficult until these models are more widely tested.

Opinions on o3 are divided: some laud its technical strides, while others cite high costs and a lack of transparency, suggesting its real value will only become clear with broader testing. One of the biggest critiques came from Google DeepMind’s Denny Zhou, who implicitly attacked the model’s reliance on reinforcement learning (RL) scaling and search mechanisms as a potential “dead end,” arguing instead that a model should be able to learn to reason from simpler fine-tuning processes.

What this means for enterprise AI

Whether or not it represents the perfect direction for further innovation, o3's newfound adaptability shows enterprises that AI will, one way or another, continue to transform industries, from customer service to scientific research.

Industry players will need some time to digest what o3 has delivered here. For enterprises concerned about o3’s high computational costs, OpenAI’s upcoming release of the scaled-down “o3-mini” version of the model provides a potential alternative. While it sacrifices some of the full model’s capabilities, o3-mini promises a more affordable option for businesses to experiment with — retaining much of the core innovation while significantly reducing test-time compute requirements.

It may be some time before enterprises can get their hands on the o3 model. OpenAI says o3-mini is expected to launch by the end of January, with the full o3 release to follow, though the timeline depends on feedback and insights gained during the current safety testing phase. Enterprises will be well advised to test the model against their own data and use cases to see how it really performs.

In the meantime, they can already use the many competent, well-tested models on the market, including OpenAI's flagship GPT-4o and competing models, many of which are already robust enough for building intelligent, tailored applications that deliver practical value.

Indeed, next year we'll be operating in two gears. The first is extracting practical value from AI applications, fleshing out what models can do with AI agents and other innovations already achieved. The second is sitting back with the popcorn to watch the intelligence race play out; any progress will be icing on a cake that has already been delivered.

For more on o3’s innovations, watch the full YouTube discussion between myself and Sam Witteveen below, and follow VentureBeat for ongoing coverage of AI advancements.


