Curated by THEOUTPOST
On Thu, 6 Mar, 8:02 AM UTC
3 Sources
[1]
Scientists Say Google's "AI Scientist" Is Dead on Arrival
Is Google's so-called "AI co-scientist" poised to revolutionize scientific research as we know it? Not according to its human colleagues.

The Gemini 2.0-based tool, announced by Google last month, can purportedly come up with hypotheses and detailed research plans by using "advanced reasoning" to "mirror the reasoning process underpinning the scientific method." This process is powered by multiple Gemini "agents" that essentially debate and bounce ideas off each other, refining them over time.

The yet-unnamed tool would give scientists "superpowers," Alan Karthikesalingam, an AI researcher at Google, told New Scientist last month. And even biomedical researchers at Imperial College London, who got to use an early version of the AI model, eagerly claimed it would "supercharge science."

But the superlative-heavy hype seems to be just that: hype.

"This preliminary tool, while interesting, doesn't seem likely to be seriously used," Sarah Beery, a computer vision researcher at MIT, told TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."

In its announcement, Google boasted that the AI co-scientist came up with novel approaches for repurposing drugs to treat acute myeloid leukemia. According to pathologist Favia Dubyk, however, "no legitimate scientist" would take the results seriously -- they're just too vague.

"The lack of information provided makes it really hard to understand if this can truly be helpful," Dubyk, who's affiliated with Northwest Medical Center-Tucson in Arizona, told TechCrunch.

Google's claims that the AI uncovered novel ways of treating liver fibrosis have also been shot down. "The drugs identified are all well established to be antifibrotic," Steven O'Reilly at UK biotech company Alcyomics told New Scientist last month. "There is nothing new here."

To be sure, the tool isn't without its potential advantages. It can parse through and pull from vast amounts of scientific literature in minutes, compiling what it finds into helpful summaries. That could be an amazing timesaver -- if you can overlook the high likelihood of hallucinations, or made-up outputs, creeping into the work, a problem inherent to all large language models.

But that's not what Google is aiming for here; it's touting the AI model as a bona fide hypothesis-generating machine -- something that can probe our understanding of a field with meaningful questions -- not merely an automated research assistant. That's a very, very high bar. And more importantly, it's not something scientists are asking for.

"For many scientists, myself included, generating hypotheses is the most fun part of the job," Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself?"

"In general, many generative AI researchers seem to misunderstand why humans do what they do," Sinapayen added, "and we end up with proposals for products that automate the very part that we get joy from."
[2]
Experts don't think AI is ready to be a 'co-scientist' | TechCrunch
Last month, Google announced the "AI co-scientist," an AI the company said was designed to aid scientists in creating hypotheses and research plans. Google pitched it as a way to uncover new knowledge, but experts think it -- and tools like it -- fall well short of PR promises.

"This preliminary tool, while interesting, doesn't seem likely to be seriously used," Sarah Beery, a computer vision researcher at MIT, told TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."

Google is the latest tech giant to advance the notion that AI will dramatically speed up scientific research someday, particularly in literature-dense areas such as biomedicine. In an essay earlier this year, OpenAI CEO Sam Altman said that "superintelligent" AI tools could "massively accelerate scientific discovery and innovation." Similarly, Anthropic CEO Dario Amodei has boldly predicted that AI could help formulate cures for most cancers.

But many researchers don't consider AI today to be especially useful in guiding the scientific process. Applications like Google's AI co-scientist appear to be more hype than anything, they say, unsupported by empirical data.

For example, in its blog post describing the AI co-scientist, Google said the tool had already demonstrated potential in areas such as drug repurposing for acute myeloid leukemia, a type of blood cancer that affects bone marrow. Yet the results are so vague that "no legitimate scientist would take [them] seriously," said Favia Dubyk, a pathologist affiliated with Northwest Medical Center-Tucson in Arizona.

"This could be used as a good starting point for researchers, but [...] the lack of detail is worrisome and doesn't lend me to trust it," Dubyk told TechCrunch. "The lack of information provided makes it really hard to understand if this can truly be helpful."

It's not the first time Google has been criticized by the scientific community for trumpeting a supposed AI breakthrough without providing a means to reproduce the results. In 2020, Google claimed one of its AI systems trained to detect breast tumors achieved better results than human radiologists. Researchers from Harvard and Stanford published a rebuttal in the journal Nature, saying the lack of detailed methods and code in Google's research "undermine[d] its scientific value."

Scientists have also chided Google for glossing over the limitations of its AI tools aimed at scientific disciplines such as materials engineering. In 2023, the company said around 40 "new materials" had been synthesized with the help of one of its AI systems, called GNoME. Yet an outside analysis found not a single one of the materials was, in fact, net new.

"We won't truly understand the strengths and limitations of tools like Google's 'co-scientist' until they undergo rigorous, independent evaluation across diverse scientific disciplines," Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, told TechCrunch. "AI often performs well in controlled environments but may fail when applied at scale."

Part of the challenge in developing AI tools to aid in scientific discovery is anticipating the untold number of confounding factors. AI might come in handy in areas where broad exploration is needed, like narrowing down a vast list of possibilities. But it's less clear whether AI is capable of the kind of out-of-the-box problem-solving that leads to scientific breakthroughs.
"We've seen throughout history that some of the most important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism," KhudaBukhsh said. "AI, as it stands today, may not be well-suited to replicate that." Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools such as Google's AI co-scientist focus on the wrong kind of scientific legwork. Sinapayen sees a genuine value in AI that could automate technically difficult or tedious tasks, like summarizing new academic literature or formatting work to fit a grant application's requirements. But there isn't much demand within the scientific community for an AI co-scientist that generates hypotheses, she says -- a task from which many researchers derive intellectual fulfillment. "For many scientists, myself included, generating hypotheses is the most fun part of the job," Sinapayen told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part that we get joy from." Beery noted that often the hardest step in the scientific process is designing and implementing the studies and analyses to verify or disprove a hypothesis -- which isn't necessarily within reach of current AI systems. AI can't use physical tools to carry out experiments, of course, and it often performs worse on problems for which extremely limited data exists. "Most science isn't possible to do entirely virtually -- there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab," Beery said. "One big limitation of systems [like Google's AI co-scientist] relative to the actual scientific process, which definitely limits its usability, is context about the lab and researcher using the system and their specific research goals, their past work, their skillset, and the resources they have access to." AI's technical shortcomings and risks -- such as its tendency to hallucinate -- also make scientists wary of endorsing it for serious work. KhudaBukhsh fears AI tools could simply end up generating noise in the scientific literature, not elevating progress. It's already a problem. A recent study found that AI-fabricated "junk science" is flooding Google Scholar, Google's free search engine for scholarly literature. "AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process," KhudaBukhsh said. "An overwhelmed peer-review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions." Even well-designed studies could end up being tainted by misbehaving AI, Sinapayen said. While she likes the idea of a tool that could assist with literature review and synthesis, Sinapayen said she wouldn't trust AI today to execute that work reliably. "Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI," Sinapayen said, adding that she takes issue with the way many AI systems are trained and the amount of energy they consume, as well. "Even if all the ethical issues [...] 
were solved, current AI is just not reliable enough for me to base my work on their output one way or another."
[3]
Google's AI super system will help scientists generate novel hypotheses and research proposals, but will it dumb them down as well?
Scientists can interact naturally, providing ideas or feedback to guide AI research.

Artificial intelligence has already had a major impact on scientific research by accelerating discoveries, improving accuracy, and handling vast datasets that would be near-impossible for humans to analyze efficiently. AI-powered algorithms can assist in the discovery of new drugs, optimize materials for energy storage, and aid in modeling climate change.

A number of projects have been set up to make AI more useful and more reliable in a scientific setting. We've previously written about the concept of the "exocortex," which aims to provide a bridge between the human mind and a network of AI agents, and more recently, an Australian research team developed a generative AI tool called LLM4SD (Large Language Model for Scientific Discovery), designed to speed up scientific breakthroughs.

Now, Google is launching a similar initiative, which aims to turn AI into a co-scientist that can accelerate scientific discoveries. The tech giant explains, "The AI co-scientist is a multi-agent AI system that is intended to function as a collaborative tool for scientists."

The AI co-scientist is built on Google's Gemini 2.0 and is the result of a collaboration between the Google Research, Google DeepMind, and Google Cloud AI teams. It is designed to "mirror the reasoning process underpinning the scientific method." Google says that its system is intended to "uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and tailored to specific research objectives."

The system uses a number of specialized agents -- Generation, Reflection, Ranking, Evolution, Proximity, and Meta-review -- that iteratively generate, evaluate, and refine hypotheses. Google says that scientists will be able to interact with the system in whatever way best suits their needs, including providing their own seed ideas or feedback on generated outputs in natural language. "The AI co-scientist also uses tools, like web search and specialized AI models, to enhance the grounding and quality of generated hypotheses," Google says.

Not wishing to rush its deployment, the company plans to offer access to the system to research organizations through a trusted tester program.
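Google has not published how these agents are implemented, but the generate-evaluate-refine loop it describes is a recognizable pattern, and a rough sketch can make it concrete. In the minimal Python sketch below, the agent names follow Google's list, while everything else -- the query_llm stub, the prompts, the random scoring, the loop schedule -- is an illustrative assumption rather than Google's actual design; the Proximity and Meta-review agents are omitted for brevity.

# Minimal sketch of a generate-evaluate-refine agent loop.
# Agent names follow Google's published list; all internals here
# (prompts, scoring, loop structure) are illustrative assumptions.
import random
from dataclasses import dataclass

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (e.g. a Gemini client).
    It just echoes the prompt so the sketch runs end to end."""
    return f"[model output for: {prompt[:50]}...]"

@dataclass
class Hypothesis:
    text: str
    critique: str = ""
    score: float = 0.0

def generation_agent(goal: str, n: int = 4) -> list:
    # Propose several candidate hypotheses for the research goal.
    return [Hypothesis(query_llm(f"Propose hypothesis {i} for: {goal}"))
            for i in range(n)]

def reflection_agent(h: Hypothesis) -> Hypothesis:
    # Critique a hypothesis for novelty, grounding, and testability.
    h.critique = query_llm(f"Critique this hypothesis: {h.text}")
    return h

def ranking_agent(pool: list) -> list:
    # Order hypotheses by merit. A real system might hold pairwise
    # agent "debates"; a random score stands in for that here.
    for h in pool:
        h.score = random.random()
    return sorted(pool, key=lambda h: h.score, reverse=True)

def evolution_agent(h: Hypothesis) -> Hypothesis:
    # Refine a surviving hypothesis using its critique.
    return Hypothesis(query_llm(f"Refine '{h.text}' given: {h.critique}"))

def co_scientist_loop(goal: str, rounds: int = 3, keep: int = 2) -> list:
    pool = generation_agent(goal)
    for _ in range(rounds):
        pool = ranking_agent([reflection_agent(h) for h in pool])
        # Survivors advance unchanged; evolved variants join them.
        pool = pool[:keep] + [evolution_agent(h) for h in pool[:keep]]
    return ranking_agent(pool)

if __name__ == "__main__":
    for h in co_scientist_loop("repurpose approved drugs for liver fibrosis"):
        print(f"{h.score:.2f}  {h.text}")

Under these assumptions, the notable design choice is the tournament step: rather than trusting a single generation pass, candidates are repeatedly critiqued, ranked, and recombined over multiple rounds, which is what the reports of agents that "debate and bounce ideas off each other" appear to describe.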
Google's announcement of an AI co-scientist tool based on Gemini 2.0 has sparked debate in the scientific community. While the company touts its potential to revolutionize research, many experts remain skeptical about its practical applications and impact on the scientific process.
Google has recently announced the development of an "AI co-scientist," a tool based on its Gemini 2.0 model, designed to assist scientists in generating hypotheses and research plans. The company claims this AI system can "mirror the reasoning process underpinning the scientific method" and potentially revolutionize scientific research [1][3].
According to Google, the AI co-scientist employs multiple specialized agents for generating, evaluating, and refining hypotheses. The system purportedly allows scientists to interact naturally, providing ideas or feedback to guide AI research [3]. Google has highlighted potential applications in areas such as drug repurposing for acute myeloid leukemia and uncovering novel approaches to treat liver fibrosis [1].
Despite Google's enthusiasm, many experts in the scientific community have expressed skepticism about the tool's practical value and impact:
Sarah Beery, a computer vision researcher at MIT, questions the demand for such hypothesis-generation systems within the scientific community [1][2].
Favia Dubyk, a pathologist, criticizes the vagueness of the results, stating that "no legitimate scientist" would take them seriously without more detailed information [1][2].
Steven O'Reilly from Alcyomics argues that the AI's findings in liver fibrosis treatment are not novel, as the identified drugs are already well-established [1].
Several limitations and concerns have been raised regarding the AI co-scientist and similar tools:
Lack of Physical Experimentation: The AI cannot conduct physical experiments or collect new data, which are crucial aspects of the scientific process [2].
Risk of Hallucinations: Like all large language models, there's a high likelihood of the AI generating false or misleading information [1].
Oversimplification of Scientific Process: Critics argue that generating hypotheses is often the most enjoyable part of scientific work for researchers, and outsourcing this task may be counterproductive [1][2].
Limited Context Understanding: The AI may lack crucial context about specific research goals, past work, skillsets, and available resources of individual researchers or labs [2].
This is not the first time Google has faced criticism for announcing AI breakthroughs without providing means to reproduce results. In 2020, similar concerns were raised about a breast tumor detection AI system [2]. As AI continues to evolve, there's a growing need for rigorous, independent evaluation across diverse scientific disciplines to truly understand its strengths and limitations [2].
While AI has shown promise in accelerating discoveries and handling vast datasets in fields like drug discovery and climate modeling [3], experts emphasize that human intuition and perseverance remain crucial for groundbreaking scientific advancements [2].
As Google plans to offer access to the AI co-scientist through a trusted tester program [3], the scientific community awaits more concrete evidence of its capabilities and potential impact on the research landscape.
Google introduces an advanced AI system called "AI Co-Scientist," designed to assist researchers in generating hypotheses, refining ideas, and proposing innovative research directions across various scientific disciplines.
14 Sources
Sakana AI, a Tokyo-based startup, has developed an AI scientist capable of automating scientific research and discovery. This breakthrough could potentially accelerate scientific progress and lead to groundbreaking discoveries in various fields.
3 Sources
MIT scientists have created an AI system called SciAgents that can autonomously generate and evaluate research hypotheses across various fields, potentially revolutionizing the scientific discovery process.
3 Sources
Australian researchers develop LLM4SD, an AI tool that simulates scientists by analyzing research, generating hypotheses, and providing transparent explanations for predictions across various scientific disciplines.
2 Sources
AI is transforming scientific research, offering unprecedented speed and efficiency. However, it also raises concerns about accessibility, understanding, and the future of human-led science.
3 Sources