4 Sources
[1]
Gemini AI solves coding problem that stumped 139 human teams at ICPC World Finals
Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google, this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking." According to Google, it did not create a freshly trained model for the ICPC as it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions.

At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."

More than human

At the ICPC, only correct solutions earn points, and the time it takes to come up with each solution affects the final score. Gemini reached the upper rankings quickly, completing eight problems correctly in just 45 minutes. With a combined total solution time of 677 minutes across its 10 correct answers, Gemini 2.5 Deep Think would have secured a second-place finish among the university teams.

You can take a look at all of Gemini's solutions on GitHub, but Google points to Problem C as especially impressive. This question, a multi-dimensional optimization problem revolving around fictitious "flubber" storage and drainage rates, stumped every human team. But not Gemini. According to Google, there are an infinite number of possible configurations for the flubber reservoirs, making it challenging to find the optimal setup. Gemini tackled the problem by assuming that each reservoir had a priority value, which allowed the model to find the most efficient configuration using a dynamic programming algorithm. After 30 minutes of churning on this problem, Deep Think used nested ternary search to pin down the correct values.

Gemini's solutions for this year's ICPC were scored by the event coordinators, but Google also turned Gemini 2.5 loose on previous ICPC problems. The company reports that its internal analysis showed Gemini also reached gold medal status on the 2023 and 2024 question sets. Google believes Gemini's ability to perform well in these kinds of advanced academic competitions portends AI's future in industries like semiconductor engineering and biotechnology.
The ability to tackle a complex problem with multi-step logic could make AI models like Gemini 2.5 invaluable to the people working in those fields. The company points out that if you combine the intelligence of the top-ranking university teams and Gemini, you get correct answers to all 12 ICPC problems. Of course, five hours of screaming-fast inference processing doesn't come cheap. Google isn't saying how much power it took for an AI model to compete in the ICPC, but we can safely assume it was a lot. Even simpler consumer-facing models are too expensive to turn a profit right now, but AI that can solve previously unsolvable problems could justify the technology's high cost.
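Google has not published a walkthrough of its Problem C code, but the nested ternary search mentioned above is a standard competitive-programming technique: ternary search narrows in on the extremum of a unimodal function, and nesting one search inside another optimizes two coupled parameters. Below is a minimal Python sketch with an invented stand-in objective, not the actual flubber model:

```python
# Hypothetical illustration of nested ternary search. The real Problem C
# objective is far more involved; g() here is an invented stand-in.

def ternary_search(f, lo, hi, iters=100):
    """Minimize a unimodal function f on [lo, hi] by shrinking the interval."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # the minimum lies in [lo, m2]
        else:
            lo = m1  # the minimum lies in [m1, hi]
    return (lo + hi) / 2

def nested_minimize(g, x_lo, x_hi, y_lo, y_hi):
    """Minimize g(x, y), unimodal in each argument, by nesting two searches."""
    def best_over_y(x):
        y = ternary_search(lambda y: g(x, y), y_lo, y_hi)
        return g(x, y)
    x = ternary_search(best_over_y, x_lo, x_hi)
    y = ternary_search(lambda y: g(x, y), y_lo, y_hi)
    return x, y

# Toy usage: the minimum of (x - 1)^2 + (y + 2)^2 is at (1, -2).
print(nested_minimize(lambda x, y: (x - 1) ** 2 + (y + 2) ** 2, -10, 10, -10, 10))
```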
[2]
Gemini just aced the world's most elite coding competition - what it means for AGI
A Gemini model won gold at a challenging coding competition. The model correctly answered 10 out of 12 problems. The win could have major implications for AGI, says Google.

In recent years, large language models (LLMs) have become an integral part of many software developers' toolkits, helping them build, refine, and deploy apps more quickly and effectively. Now, Google says that one of its most advanced models has achieved a major coding breakthrough that could help lead to new scientific discoveries -- including, potentially, the attainment of artificial general intelligence, or AGI.

Gemini 2.5 Deep Think, a state-of-the-art version of Google's flagship AI model that uses advanced reasoning capabilities to break problems down into multiple components, has achieved gold medal performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals, the company announced Wednesday. Google wrote in a blog post that the "advanced version" of Gemini 2.5 Deep Think operates as a kind of automated and integrated team. "To tackle a problem, multiple Gemini agents each propose their own solutions using terminals to execute code and tests, and then iterate the solutions based on all of the attempts," the company wrote.

The ICPC is widely recognized as the world's most prestigious and difficult university-level coding competition. Teams hailing from close to 3,000 universities across 103 countries competed in this year's finals, which were held Sept. 4 in Baku, Azerbaijan. Each team must solve a set of complex problems within a five-hour window, and there is no room for error: only perfect answers earn points.

Gemini correctly solved 10 of the 12 problems in this year's ICPC finals, achieving a gold medal-level performance and the second-highest score overall when compared with the human teams. Gemini 2.5 Deep Think, along with an experimental reasoning model from OpenAI, also achieved gold medal-level performance at this year's International Mathematical Olympiad, the companies announced in July. "Together, these breakthroughs in competitive programming and mathematical reasoning demonstrate Gemini's profound leap in abstract problem-solving -- marking a significant step on our path toward artificial general intelligence (AGI)," Google wrote in its blog post.

In what Google describes in a blog post as "an unprecedented moment," Gemini quickly and correctly solved one of the 12 problems that stymied all of the human competitors. Conversely, there were two problems it failed to solve that some human teams completed.

The third problem in the challenge, Problem C, asked competitors to devise a solution for distributing liquid through a series of interconnected ducts, so that reservoirs connected to each duct would be filled as quickly as possible. Each duct could be closed, open, or partially open, meaning there was an infinite number of possible configurations. In its search for the optimal configuration, Gemini took a surprising approach: it began by assigning a numerical value to each reservoir to determine the priority it should be assigned relative to the others. The model then deployed a dynamic programming algorithm and a game-theoretic concept known as the minimax theorem to find a solution.
The whole process took less than half an hour. No human competitor was able to solve it.

Although less monumental in its significance, this kind of problem-solving capability is reminiscent of the famous Move 37 in AlphaGo's 2016 match against Go world champion Lee Sedol, in which that AI model (developed by Google DeepMind) adopted a strategy that surprised human experts in the moment but turned out to be decisive to its victory. Since then, "Move 37" has become shorthand for moments in which AI acts in creative or unexpected ways that challenge our conventional norms of intelligent problem-solving.

Gemini's top-tier performance at the 2025 ICPC has implications far beyond software development, according to Google. "The skills needed for the ICPC -- understanding a complex problem, devising a multi-step logical plan, and implementing it flawlessly -- are the same skills needed in many scientific and engineering fields, such as designing new drugs, or microchips," the company wrote in its blog post, saying that this development shows AI could help solve difficult problems for the benefit of humanity (a familiar pseudo-promise AI companies often make).

The notion that AI could eventually assist with scientific discovery has long been a dream for many computer scientists. Earlier this month, OpenAI launched an internal initiative aimed at this very goal, and Harvard Medical School designed an AI model that could help target degenerative disease and cancer treatment. According to Google, the best path forward in this regard will likely be some form of human-AI collaboration, through which advanced agentic models like Gemini 2.5 Deep Think suggest novel solutions to particularly difficult technical problems.
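Google's blog describes its multi-agent setup only at the high level quoted above, so any implementation detail is guesswork. Purely as a hypothetical sketch of a propose-execute-iterate loop of that kind, with propose() and run_tests() as invented stand-ins for an LLM call and a sandboxed judge:

```python
import random

# Loose, hypothetical sketch of a propose/execute/iterate agent loop.
# propose() and run_tests() are invented placeholders, not real APIs,
# and the "quality" field stands in for actual code execution results.

def propose(problem, attempts):
    """One agent drafts a candidate solution, seeing all prior attempts."""
    return {"code": f"# attempt {len(attempts)} at {problem}",
            "quality": random.random()}

def run_tests(candidate):
    """Run the candidate against sample tests; return the pass fraction."""
    return candidate["quality"]  # placeholder for a sandboxed judge

def solve(problem, n_agents=4, rounds=20):
    attempts = []
    for _ in range(n_agents):              # agents draft in parallel
        cand = propose(problem, attempts)
        cand["score"] = run_tests(cand)    # execute code and tests
        attempts.append(cand)
    for _ in range(rounds):                # then iterate on the shared pool
        if max(c["score"] for c in attempts) == 1.0:
            break                          # a candidate passes every test
        cand = propose(problem, attempts)  # refine using all attempts so far
        cand["score"] = run_tests(cand)
        attempts.append(cand)
    return max(attempts, key=lambda c: c["score"])  # best effort at time limit

print(solve("Problem C"))
```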
[3]
DeepMind achieves gold at 'coding Olympics' in AI milestone
Google DeepMind's latest artificial intelligence model has achieved a "gold-medal level" performance at a competition known as the "coding Olympics", in what the group describes as a milestone in the development of the revolutionary technology.

The London-based laboratory run by British Nobel laureate Sir Demis Hassabis said on Wednesday that its AI reasoning model, Gemini 2.5 Deep Think, achieved the result against the best human competitors at the International Collegiate Programming Contest (ICPC) World Finals in early September. The competition is considered the most prestigious programming contest in the world. Former participants include Google co-founder Sergey Brin and OpenAI's chief scientist Jakub Pachocki.

DeepMind said the Gemini model's performance would have ranked second overall in the competition. It was also able to solve a problem that no human competitor could. The breakthrough comes as the newest generation of AI models is increasingly being used by software engineers to assist with computer programming. Meanwhile, DeepMind's technology has already been used to win against humans in other elite competitions, from beating the world's best player at the board game Go to achieving gold at the International Mathematical Olympiad.

Quoc Le, vice-president and Google fellow, said: "This is a historic moment towards AGI," referring to artificial general intelligence -- systems that surpass human capabilities -- which has been a major goal for AI researchers for decades.

"It's impressive for a purely AI system with no human in the loop to be able to get the performance that they did," said Jelani Nelson, the chair of the University of California, Berkeley's electrical engineering and computer sciences department, who has coached several ICPC teams at the Massachusetts Institute of Technology, Harvard and UC Berkeley. "If someone had told me just a few years ago that we would have new technology that was able to perform at this level in math and in computer science, I would not have believed them," Nelson added.

In the coding competition, teams of three are given one computer with which to solve 12 hard programming problems in five hours. Teams are ranked on speed, accuracy and the number of questions they answer. This year, competitors were able to solve 10 out of the 12 questions. Of the 139 competing teams, only four won gold medals. To solve the problems, participants have to understand complex problems, form a logical plan to solve them, and execute it without errors. Hard maths problems also require abstract reasoning skills and creativity.

DeepMind's AI tool had a crucial advantage over people: it did not have to work in a team. "When I coach my teams, the assumption is that I don't have to teach them how to solve problems... I can only give them advice on how to work together in a stressful situation," said Bartek Klin, an associate professor of computer science at the University of Oxford and an ICPC coach.

The DeepMind team used "reinforcement learning" -- a technique that rewards AI systems for desired outcomes -- to train its Gemini model further on very hard maths, reasoning and coding problems. Competitive coding is the "ultimate thinking game", because it requires models to come up with new approaches and generalise learnings, instead of just memorising solutions, said Heng-Tze Cheng, research director and principal scientist at Google DeepMind.
But Oxford University's Klin said success in a competitive coding environment that prioritises speed does not necessarily translate to great software development in practice. "In real life, the hardest problems are the ones that take half a year to think about," he said.

While the Gemini model was able to solve a problem the competitors were not, it also failed to solve some tasks that its human counterparts did complete. DeepMind said the experiment showed how AI models could "provide unique, novel contributions that complement the skills and knowledge of human experts".

Le said the advancement also has the potential to transform many scientific and engineering disciplines that require mathematical understanding and coding, such as designing new drugs and computer chips. "Solving math and computer competitive coding is a key step to understanding how our intelligence works," said Le.
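The article's one-line gloss of reinforcement learning, rewarding a system for desired outcomes, can be made concrete with a textbook toy: an epsilon-greedy bandit that learns which action pays off best from reward feedback alone. This generic example is illustrative only; DeepMind's actual training setup is not public.

```python
import random

# Toy illustration of "rewarding desired outcomes": an epsilon-greedy agent
# learns which of several actions pays off best from reward feedback alone.
# A generic textbook example, not DeepMind's (unpublished) training setup.

def train(true_payoffs, steps=5000, eps=0.1):
    n = len(true_payoffs)
    counts = [0] * n
    values = [0.0] * n                      # running reward estimate per action
    for _ in range(steps):
        if random.random() < eps:           # explore occasionally
            a = random.randrange(n)
        else:                               # otherwise exploit the best estimate
            a = max(range(n), key=lambda i: values[i])
        reward = 1.0 if random.random() < true_payoffs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean update
    return values

# The agent should learn that action 2 (payoff probability 0.8) is best.
print(train([0.2, 0.5, 0.8]))
```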
[4]
Gemini achieves gold-level performance at the International Collegiate Programming Contest World Finals
Gemini 2.5 Deep Think achieves breakthrough performance at the world's most prestigious computer programming competition, demonstrating a profound leap in abstract problem solving.

An advanced version of Gemini 2.5 Deep Think has achieved gold-medal level performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals. This milestone builds directly on Gemini 2.5 Deep Think's gold-medal win at the International Mathematical Olympiad (IMO) just two months ago. Innovations from these efforts will continue to be integrated into future versions of Gemini Deep Think, expanding the frontier of advanced AI capabilities accessible to students and researchers. Solving complex tasks at these competitions requires deep abstract reasoning, creativity, the ability to synthesize novel solutions to problems never seen before and a genuine spark of ingenuity. Together, these breakthroughs in competitive programming and mathematical reasoning demonstrate Gemini's profound leap in abstract problem-solving -- marking a significant step on our path toward artificial general intelligence (AGI).

The ICPC is globally recognized as the oldest, largest and most prestigious algorithmic programming competition at the college level, a step up from high-school olympiads such as the IMO. Every year, participants from nearly 3,000 universities and 103 countries compete in solving real-world coding problems. This year's world finals took place in Baku, Azerbaijan on September 4, and brought together the top teams from earlier phases of the competition. Over a five-hour period, each team tackled a set of complex algorithmic problems. Final rankings hinged on two unforgiving principles: only perfect solutions earned points, and every minute counted. Of the 139 competing teams, only the top four won gold medals.

An advanced version of Gemini 2.5 Deep Think competed live in a remote online environment following ICPC rules, under the guidance of the competition organizers. It started 10 minutes after the human contestants and correctly solved 10 out of 12 problems, achieving gold-medal level performance under the same five-hour time constraint. See our solutions here. Gemini solved eight problems within just 45 minutes and two more within three hours, using a wide variety of advanced data structures and algorithms to generate its solutions. By solving 10 problems in a combined total time of 677 minutes, Gemini 2.5 Deep Think would have ranked 2nd overall when compared with the university teams in the competition.

Dr. Bill Poucher, ICPC Global Executive Director, stated: "The ICPC has always been about setting the highest standards in problem solving. Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation. Congratulations to Google DeepMind; this work will help us fuel a digital renaissance for the benefit of all."
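The two ranking principles the blog cites, only accepted solutions count and every minute matters, follow the standard ICPC convention: teams are ordered by problems solved, with ties broken by total penalty time (minutes to each accepted solution, plus a 20-minute penalty for every rejected attempt on a problem eventually solved). A small Python sketch with invented team data:

```python
# Standard ICPC ranking: most problems solved wins; ties break on total
# penalty time (minutes to each accepted solution, plus 20 minutes for
# every rejected attempt on a problem eventually solved). Teams invented.

def penalty(submissions):
    """submissions: per solved problem, (minutes_to_accept, wrong_tries)."""
    return sum(t + 20 * wrong for t, wrong in submissions)

teams = {
    "Team A": [(12, 0), (40, 1), (95, 0)],           # 3 solved
    "Team B": [(20, 0), (55, 0), (80, 2), (240, 0)], # 4 solved
    "Team C": [(15, 1), (60, 0), (110, 0)],          # 3 solved
}

ranking = sorted(
    teams.items(),
    key=lambda kv: (-len(kv[1]), penalty(kv[1])),  # more solved, less penalty
)
for rank, (name, subs) in enumerate(ranking, 1):
    print(rank, name, len(subs), "solved,", penalty(subs), "penalty minutes")
```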
Google's Gemini 2.5 Deep Think AI model has achieved gold medal performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals, solving complex coding problems and outperforming most human teams. This breakthrough demonstrates significant progress in artificial intelligence and its potential applications in various scientific fields.
Google's advanced AI model, Gemini 2.5 Deep Think, has achieved a remarkable feat by earning a gold medal at the 2025 International Collegiate Programming Contest (ICPC) World Finals [1][2][3]. This prestigious competition, often referred to as the 'coding Olympics,' brings together top programming talent from nearly 3,000 universities across 103 countries [3][4]. Gemini 2.5 Deep Think demonstrated exceptional problem-solving abilities during the five-hour competition, correctly solving 10 of the 12 problems (eight of them within the first 45 minutes) and placing second overall among the 139 competing university teams despite starting 10 minutes after the human contestants [1][2][3][4].

Notably, Gemini solved a problem (Problem C) that stumped all human competitors [1][2]. This multi-dimensional optimization challenge involved fictitious 'flubber' storage and drainage rates, which Gemini tackled using a novel approach combining priority values, dynamic programming, and nested ternary search [1].

Google views this achievement as a significant step towards artificial general intelligence [2][3]. The company highlights that the skills required for ICPC success - understanding complex problems, devising multi-step logical plans, and flawless implementation - are analogous to those needed in various scientific and engineering fields [2]. Quoc Le, vice-president and Google fellow, described the event as 'a historic moment towards AGI' [3]. The breakthrough demonstrates Gemini's profound leap in abstract problem-solving, building on its recent success at the International Mathematical Olympiad [4].
The implications of Gemini's performance extend beyond software development: Google argues the same multi-step reasoning could accelerate work in fields such as drug design, semiconductor engineering, and biotechnology [1][2][3]. Google plans to integrate innovations from these competitions into future versions of Gemini Deep Think, expanding advanced AI capabilities accessible to students and researchers [4].

While experts acknowledge the impressive nature of Gemini's achievement, some caution against overstating its implications: Oxford's Bartek Klin notes that success in a competition that prioritizes speed does not necessarily translate to strong real-world software development, since 'in real life, the hardest problems are the ones that take half a year to think about' [3].
As AI continues to advance, its potential to complement human expertise in solving complex problems becomes increasingly evident. However, the path to AGI remains a subject of ongoing research and debate within the scientific community.
Summarized by Navi