AI Models Achieve Gold Medal Standard at International Mathematical Olympiad
In a groundbreaking development, advanced artificial intelligence models have demonstrated human-level proficiency in mathematics, with Google’s Gemini and OpenAI’s experimental reasoning model both achieving scores equivalent to a gold medal at the recent International Mathematical Olympiad (IMO) in Queensland, Australia. The IMO, held from July 10 to 20, drew 641 students from 112 countries, with approximately 10 percent earning gold medals and five achieving perfect scores.
Google announced that its Gemini chatbot successfully solved five of the six complex mathematical problems presented at the IMO. Gregor Dolinar, President of the IMO, confirmed Google DeepMind’s achievement, stating, “It reached the desired goal, obtaining a score of 35 out of 42 points, a gold medal score.” Dolinar further commented on the AI’s performance, noting that its solutions were “surprising in many ways” and that IMO evaluators found them “clear, precise and most of them easy to follow.”
Similarly, OpenAI reported that its experimental reasoning model also attained a gold-medal level, securing 35 points on the IMO test. Alexander Wei of OpenAI explained the evaluation process on social media, stating, “We evaluated our models on the 2025 IMO problems, under the same rules as human contestants.” He added that “for each problem, three IMO medalists independently graded the responses submitted by the models.”
The event also marked the inauguration of the AI Mathematical Olympiad Prize, a $10 million competition designed to foster the development of open-source AI models. The inaugural prize was awarded to NeMo-Skills, developed by Nvidia. The company describes NeMo-Skills as a platform that “facilitates building powerful training and inference pipelines, swapping components and scaling from local prototypes to massive experiments… with only a one-line change.”
For the first time, a selection of AI companies was invited to a parallel event at the IMO, where their representatives showcased their latest advancements to the participating students. These companies also conducted private tests of their closed-source AI models using the problems from the 2025 IMO.