In a move to address growing competition from rivals like Chinese AI company DeepSeek, OpenAI is rolling out a significant update to its newest AI model, o3-mini, that gives users more visibility into its step-by-step reasoning process.
On Thursday, OpenAI announced that both free and paid users of ChatGPT, the company's popular AI-powered chatbot platform, will now see an enhanced "chain of thought" (CoT) feature. This update offers a clearer breakdown of how the model arrives at its answers, giving users greater transparency and confidence in its responses. Subscribers to premium ChatGPT plans using o3-mini in the "high reasoning" configuration will also benefit from this expanded insight.
“We’re introducing an updated [chain of thought] for o3-mini designed to make it easier for people to understand how the model thinks,” an OpenAI spokesperson told TechCrunch via email. “With this update, you will be able to follow the model’s reasoning, giving you more clarity and confidence in its responses.”
What’s Changing?
Previously, OpenAI provided users with only summarized versions of the reasoning steps taken by models like o3-mini, o1, and o1-mini. While these summaries aimed to simplify complex processes, they were occasionally criticized for being incomplete or misleading.
Now, OpenAI has struck what it describes as a “balance.” The updated system allows o3-mini to “think freely” during its reasoning phase, then organizes those thoughts into more detailed and accessible summaries.
“To improve clarity and safety, we’ve added an additional post-processing step where the model reviews the raw chain of thought, removing any unsafe content, and then simplifies any complex ideas,” the spokesperson explained. “Additionally, this post-processing step enables non-English users to receive the chain of thought in their native language, creating a more accessible and friendly experience.”
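The pipeline the spokesperson describes — let the model reason freely, then filter, simplify, and optionally translate the raw chain of thought before showing it to the user — can be sketched in rough form. Everything below is an illustrative assumption: the function names, the marker-based "safety check," and the truncation-based "simplification" are stand-ins invented for this sketch, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a CoT post-processing pipeline:
# 1) split the raw chain of thought into steps, 2) drop unsafe content,
# 3) simplify each remaining step, 4) optionally translate for the user.
# All names and rules here are illustrative assumptions.

UNSAFE_MARKERS = ("[unsafe]",)  # stand-in for a real safety classifier


def remove_unsafe(steps):
    """Drop any reasoning step flagged as unsafe."""
    return [s for s in steps if not any(m in s for m in UNSAFE_MARKERS)]


def simplify(step, max_words=12):
    """Toy 'simplification': truncate long steps to a short summary."""
    words = step.split()
    return step if len(words) <= max_words else " ".join(words[:max_words]) + "..."


def postprocess_cot(raw_cot, translate=None):
    """Turn a raw chain of thought into a user-facing summary."""
    steps = [line.strip() for line in raw_cot.splitlines() if line.strip()]
    steps = remove_unsafe(steps)
    steps = [simplify(s) for s in steps]
    if translate:  # e.g. a machine-translation call for non-English users
        steps = [translate(s) for s in steps]
    return "\n".join(f"- {s}" for s in steps)


raw = """First, parse the user's question.
[unsafe] speculate about private data
Check each candidate answer against known facts before responding."""
print(postprocess_cot(raw))
```

Run on the sample above, the flagged middle step is removed and the two safe steps are returned as a short bulleted summary. The real system would of course use learned models for each stage rather than these string heuristics.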
Why Transparency Matters
Reasoning models like o3-mini are designed to fact-check themselves before delivering results, which helps mitigate common pitfalls such as factual inaccuracies or logical inconsistencies. However, this self-checking process often comes at the cost of speed, with reasoning models typically taking seconds to minutes longer to generate responses compared to traditional models.
DeepSeek’s R1 model, another prominent reasoning-focused AI, has set a high bar by revealing its full thought process — a feature many AI researchers argue is essential for both transparency and usability. By showing intermediate reasoning steps, users can better gauge whether the model is on the right track or veering off course.
While OpenAI still isn’t revealing the full chain of thought for o3-mini , the company believes its updated approach strikes the right balance between transparency and protecting proprietary knowledge.
Hints of Change
The announcement follows hints dropped by Kevin Weil, OpenAI’s Chief Product Officer, during a Reddit AMA last week.
“We’re working on showing a bunch more than we show today — [showing the model thought process] will be very, very soon,” Weil said. “TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we’ll find the right way to balance it.”
This update reflects OpenAI’s ongoing effort to cater to user demands while navigating the challenges of maintaining a competitive edge in the rapidly evolving AI landscape.
Broader Implications
The decision to enhance transparency could have far-reaching benefits, particularly for researchers, developers, and businesses relying on AI tools for critical tasks. By making the reasoning process more visible, OpenAI aims to build trust and foster deeper engagement with its technology.
For non-English speakers, the ability to access the chain of thought in their native language further underscores OpenAI’s commitment to accessibility and inclusivity.
Looking Ahead
As AI models continue to evolve, the demand for transparency and explainability is only expected to grow. OpenAI’s latest update positions o3-mini as a more user-friendly and trustworthy tool, bridging the gap between cutting-edge AI capabilities and human understanding.