Curated by THEOUTPOST
On Tue, 1 Apr, 4:04 PM UTC
4 Sources
[1]
DeepMind is holding back release of AI research to give Google an edge
Google's artificial intelligence arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry.

The group, led by Nobel Prize winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that have made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or that cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and for being a home for the best scientists building AI. Huge breakthroughs by Google researchers -- such as its 2017 "transformers" paper that provided the architecture behind large language models -- played a central role in creating today's boom in generative AI. Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the big tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher.

Among the changes to the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter. A person close to DeepMind said the changes were intended to benefit researchers who had become frustrated spending time on work that would not be approved for strategic or competitive reasons.

They added that the company still publishes hundreds of papers each year and is among the largest contributors to major AI conferences.

Concern that Google was falling behind in the AI race contributed to the merger of the London-based DeepMind and California-based Brain AI units in 2023. Since then, the company has been faster to roll out a wide array of AI-infused products.

"The company has shifted to one that cares more about product and less about getting research results out for the general public good," said one former DeepMind research scientist. "It's not what I signed up for."

DeepMind said it had "always been committed to advancing AI research and we are instituting updates to our policies that preserve the ability for our teams to publish and contribute to the broader research ecosystem."

While the company had a publication review process in place before DeepMind's merger with Brain, the system has become more bureaucratic, according to those with knowledge of the changes. Former staffers suggested the new processes had stifled the release of commercially sensitive research to avoid the leaking of potential innovations. One said publishing papers on generative AI was "almost impossible."

In one incident, DeepMind stopped the publication of research showing that Google's Gemini language model is not as capable, or is less safe, than rivals, especially OpenAI's GPT-4, according to one current employee. However, the employee added that it had also blocked a paper revealing vulnerabilities in OpenAI's ChatGPT, over concerns the release would seem like a hostile tit-for-tat. A person close to DeepMind said it did not block papers that discuss security vulnerabilities, adding that it routinely publishes such work under a "responsible disclosure policy," in which researchers must give companies the chance to fix any flaws before making them public.

But the clampdown has unsettled some staffers at a company where success has long been measured through appearing in top-tier scientific journals. People with knowledge of the matter said the new review processes had contributed to some departures. "If you can't publish, it's a career killer if you're a researcher," said a former researcher. Some ex-staff added that projects focused on improving its Gemini suite of AI-infused products were increasingly prioritised in the internal battle for access to data sets and computing power.

In the past few years, Google has produced a range of AI-powered products that have impressed the markets. These include improved AI-generated summaries that appear above search results and an "Astra" AI agent that can answer real-time queries across video, audio and text. The company's share price has increased by as much as a third over the past year, though those gains have been pared back in recent weeks as concern over US tariffs hit tech stocks.

In recent years, Hassabis has balanced the desire of Google's leaders to commercialise its breakthroughs with his life mission of trying to build artificial general intelligence -- AI systems with abilities that can match or surpass humans. "Anything that gets in the way of that he will remove," said one current employee. "He tells people this is a company, not a university campus; if you want to work at a place like that, then leave."

Additional reporting by George Hammond. © 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
[2]
DeepMind slows down research releases in battle to keep competitive edge
[3]
DeepMind's new research rules threaten AI innovation, warns Iris.ai CEO
New rules at Google DeepMind could slow down the pace of AI progress.

Google DeepMind's reported clampdown on sharing research will stifle AI innovation, warns the CEO of Iris.ai, one of Europe's leading startups in the space.

The UK-based lab has tightened its rules on releasing AI studies, the Financial Times reported this week. Citing seven current and former DeepMind scientists, the newspaper said the company had introduced stricter vetting and additional bureaucracy, making it harder to publish research. The changes aim to protect the company's edge in AI, the sources said.

Acquired by Google in 2014, DeepMind has long been a leader in computer science breakthroughs. In recent years, however, the lab has faced increasing competition from the likes of OpenAI and DeepSeek. Under growing pressure to stay ahead, the company is reportedly erecting new barriers around its innovations and reputation.

The new constraints have alarmed Anita Schjøll Abildgaard, co-founder and CEO of Iris.ai, a Norwegian startup developing an AI-powered engine for science. She fears DeepMind's restrictions will hinder technological advances. "DeepMind's decision marks the end of an era of openness and collaboration in AI research," she said.

At first glance, the changes at DeepMind may appear beneficial for other AI labs. The company's pioneering innovations and enormous citation counts have overshadowed other researchers in the field, who could now receive a larger share of the spotlight. But Abildgaard warns the drawbacks will far outweigh the benefits. "Researchers across industries will have less access to DeepMind's undoubtedly impressive work," she said.

She pointed to the example of DeepMind's AlphaFold, a system that predicts protein structures with remarkable accuracy. The software has been hailed as a solution to one of biology's biggest mysteries, with the potential to fuel countless advances, from discovering new drugs to tackling climate change. "It's hard to imagine projects of this importance being released so readily under this new diktat," Abildgaard said. The impacts, she warned, could be severe.

In response, she urged AI companies to strengthen their commitment to openness. "Europe, in particular, has one of the most fertile open-source research communities in the world," she said. "As DeepMind looks inwards, smaller research communities can differentiate themselves from the American giants by embracing collaboration."
[4]
Google's DeepMind slows research publications to maintain AI edge - FT By Investing.com
Investing.com -- Google's (NASDAQ:GOOGL) artificial intelligence research arm, DeepMind, has imposed stricter controls on the release of its research papers, limiting the publication of key advancements to maintain a competitive edge, the Financial Times reported on Tuesday.

The move marks a shift for DeepMind, led by Nobel Prize winner Sir Demis Hassabis, from its long-standing tradition of open research to a stronger focus on commercializing AI technology, the FT reported. According to the report, internal policies now include a six-month embargo on certain papers and a requirement for researchers to gain multiple approvals before publishing.

Three former DeepMind scientists told the FT that the company is reluctant to release studies that could either benefit competitors or highlight weaknesses in Google's Gemini AI model compared with rivals such as Microsoft-backed (NASDAQ:MSFT) OpenAI's GPT-4.

The shift comes amid growing investor pressure on Google to reclaim its leadership in AI, following concerns that it has fallen behind OpenAI. DeepMind defended the changes, stating it remains committed to advancing AI research while balancing strategic and competitive interests, according to the FT report.
Google's AI research arm, DeepMind, has implemented stricter controls on releasing research papers, marking a shift from open research to a more commercially-focused approach in the competitive AI landscape.
Google's artificial intelligence arm, DeepMind, has implemented significant changes to its research publication policies, signaling a strategic shift in the competitive AI landscape. Led by Nobel Prize winner Sir Demis Hassabis, the company has introduced a more rigorous vetting process and increased bureaucracy, making it more challenging for researchers to publish studies about their AI work [1].
The new policies include a six-month embargo on "strategic" papers related to generative AI and a requirement for researchers to obtain approval from multiple staff members before publication [2]. These changes represent a significant departure from DeepMind's previous reputation for openly sharing groundbreaking research.
A current researcher at DeepMind stated, "I cannot imagine us putting out the transformer papers for general use now," referring to the influential 2017 paper that laid the foundation for large language models [1].
The shift in policy appears to be driven by several factors: intensifying competition from rivals such as OpenAI, investor pressure on Google to reclaim its leadership in AI, and the parent company's drive to commercialise cutting-edge research.
The new policies have created tension within the research community at DeepMind. Some researchers express frustration with the shift towards product-focused development and away from public-good research [2].
A former DeepMind research scientist commented, "The company has shifted to one that cares more about product and less about getting research results out for the general public good. It's not what I signed up for" [1].
Anita Schjøll Abildgaard, CEO of Iris.ai, warns that DeepMind's new restrictions could hinder technological advances. She stated, "DeepMind's decision marks the end of an era of openness and collaboration in AI research" [3].
DeepMind has defended its new policies, stating that the changes aim to benefit researchers who were frustrated with spending time on work that would not be approved for strategic or competitive reasons. The company maintains that it still publishes hundreds of papers annually and remains a significant contributor to major AI conferences [2].
This shift at DeepMind reflects the growing tension between open scientific collaboration and commercial interests in the AI field. As competition intensifies, other AI research institutions may face similar pressures to balance innovation with strategic advantage.
The AI community now faces the challenge of maintaining the spirit of open research while navigating the increasingly competitive and commercially driven landscape of artificial intelligence development.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved