OpenAI’s o3-Mini Is a Leaner AI Model That Keeps Pace With DeepSeek
On the heels of DeepSeek R1, the latest model from OpenAI promises more advanced capabilities at a lower price.


OpenAI is making a smaller, more efficient version of its cleverest artificial intelligence model available for free as it seeks to counter the hype and enthusiasm swirling around a new open-source offering from Chinese AI startup DeepSeek.
WIRED previously reported that OpenAI was prepping the new model, called o3-mini, for release on January 31. The company’s researchers have been working overtime to get it ready for prime time, according to sources who spoke on the condition of anonymity.
o3-mini, which OpenAI teased in December, is a smaller version of the model that features the most advanced AI reasoning capabilities of any OpenAI offering to date. The model can break difficult problems into constituent parts in order to figure out how best to solve them.
“This powerful and fast model advances the boundaries of what small models can achieve,” the company said in a blog post announcing o3-mini’s availability.
OpenAI is making o3-mini available to all Plus, Team, and Pro users of ChatGPT. Users of the free version of ChatGPT will also be able to try o3-mini but won’t be able to send as many queries, the company says.
OpenAI has evidently been using PhD students to help train a new model for some time. Several weeks ago, the company began recruiting PhD computer science students at $100 per hour for a “research collaboration” that would “involve working on unreleased models,” according to an email viewed by WIRED.
OpenAI also appears to have been recruiting PhD students with expertise in other areas through a company called Mercor that it regularly uses to find staff for model training. A recent job posting from Mercor on LinkedIn states: “The overall goal of this project that you may become a part of is to create challenging scientific coding questions designed to test the capabilities of large language models in generating code for solving realistic scientific research problems.”
The job posting goes on to give an example problem that is strikingly similar to a problem in a benchmark called SciCode, which is designed to test large language models’ ability to solve complex science problems.
The news comes as DeepSeek’s R1 continues to roil the US tech industry. The fact that such a powerful model could be released for free puts pressure on Google and Anthropic to lower their prices.
OpenAI is particularly eager to demonstrate that it remains at the forefront of developing and commercializing AI, according to sources inside the company.
DeepSeek’s freely available model incorporates innovations that make it more efficient to both train and serve. The company appears to have developed it using far fewer resources than OpenAI and other US companies currently building frontier AI models, although the precise details of DeepSeek’s expenditure remain unknown. OpenAI says it believes DeepSeek may have incorporated output from OpenAI’s models into R1’s training.
OpenAI’s newest model may not undercut R1 on price, but it shows that the company will make efficiency part of its focus going forward. OpenAI also says that the model is especially strong in math, science, and coding.
The company says that the latest model will also incorporate new features, including the ability to tap into web searches, call functions from a user’s code, and toggle between different reasoning levels that trade off speed for problem-solving capability.
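For developers, that reasoning toggle is exposed as a setting when calling the model through OpenAI’s API. The snippet below is a minimal sketch, assuming OpenAI’s Python SDK and its reasoning_effort parameter; the prompt and the chosen effort level are illustrative, and exact options may differ from OpenAI’s documentation.

# Minimal sketch: asking o3-mini for a harder answer by raising the reasoning effort.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low" / "medium" / "high": trades response speed for deeper reasoning
    messages=[
        {"role": "user", "content": "Factor x^4 - 5x^2 + 4 and explain each step."}
    ],
)

print(response.choices[0].message.content)

Dropping reasoning_effort to "low" in the same call would, per the trade-off described above, return an answer faster at the cost of less thorough problem solving.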
DeepSeek’s sudden rise has also raised questions about the US government’s strategy to curb China’s progress in AI. The past two US administrations have introduced a number of sanctions to limit China’s ability to access the most advanced Nvidia chips typically used to build cutting-edge AI models. DeepSeek described several types of Nvidia chips in its research, but it remains unclear exactly which were used.