Researchers at Sakana.AI, a Tokyo-based company, have developed an artificial intelligence (AI) system built on large language models (LLMs) that may be able to automate the entire scientific research process.
The "AI Scientist" can identify a problem, develop hypotheses, implement ideas, run experiments, analyse results, and write reports.
The researchers also incorporated a second language model that acts as an automated peer reviewer, evaluating the quality of the generated reports and validating their findings.
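In broad strokes, the system chains a series of LLM calls into a closed loop, from idea to reviewed paper. The sketch below is a simplified, hypothetical illustration of that loop, not Sakana.AI's published code; the function name, stage prompts, and the `llm` helper are assumptions made for illustration.

```python
from typing import Callable

def run_ai_scientist(topic: str, llm: Callable[[str], str], max_revisions: int = 3) -> str:
    """Chain LLM calls: idea -> plan -> results -> paper -> automated review."""
    # Each stage feeds the next; in the real system the experiment stage
    # executes generated code rather than asking a model to describe results.
    idea = llm(f"Propose a novel, testable research idea about: {topic}")
    plan = llm(f"Write an experiment plan with code changes and metrics for: {idea}")
    results = llm(f"Summarise the results of running this plan: {plan}")
    paper = llm(f"Write a short research report.\nIdea: {idea}\nResults: {results}")

    # A second model acts as the automated peer reviewer; the draft is
    # revised until the reviewer accepts it or the revision budget runs out.
    for _ in range(max_revisions):
        review = llm(f"Peer-review this paper. Reply ACCEPT or list flaws:\n{paper}")
        if review.lstrip().startswith("ACCEPT"):
            break
        paper = llm(f"Revise the paper to address this review:\n{review}\n---\n{paper}")
    return paper
```

The actual pipeline involves far more scaffolding at every stage, but the idea-to-reviewed-paper loop is its core shape.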
"We sort of think of this as a type of GPT-1 moment for generative scientific discovery," Robert Lange, research scientist and founding member at Sakana.AI, told Euronews Next, adding that much like AI's early stages in other fields, its true potential in science is only just beginning to be realised.
Integrating AI into scientific research
AI's integration into science has been held back by the complexity of the field and by persistent issues with these tools, such as hallucinations and unresolved questions about ownership.
Yet, its influence in science may already be more widespread than many realise, often used without clear disclosure by researchers.
Earlier this year, a study that analysed writing patterns and word usage in academic papers published after the release of the now well-known AI chatbot ChatGPT estimated that around 60,000 research papers may have been enhanced or polished using AI tools.
The use of AI in scientific research raises ethical concerns, but done properly it could also open the door to new advances in the field. The European Commission has said that AI can act as a "catalyst for scientific breakthroughs and a key instrument in the scientific process".
The AI Scientist project is still in its early stages: the researchers published a preprint paper last month, and the system has some notable limitations.
Some of the flaws, as detailed by the researchers, include incorrect implementation of ideas, unfair comparisons to baselines, and critical errors in writing and evaluating results.
Still, Lange sees these issues as crucial stepping stones and expects that the AI model will significantly improve with more resources and time.
"When you think about the history of machine learning models, like image generation models, chatbots right now, also and text-to-video models, they oftentimes start out with some flaws and some maybe images which are generated, which are not super visually pleasing," Lange said.
"But over time, as we put in more collective resources as a community, they become much more powerful and much more capable," he added.
A tool to support scientists, not replace them
When tested, the AI Scientist at times displayed a degree of autonomy, exhibiting behaviours that mimic those of human researchers, such as taking unexpected extra steps to ensure success.
For instance, when an experiment took longer than expected, instead of optimising its code to run faster, the system tried to edit its own settings to extend the time limit.
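Behaviour like this is one reason such systems are typically run in a sandbox where the time budget is enforced from outside the model's reach. A minimal sketch of that idea, assuming the generated experiment is a script run as a subprocess (the function name and defaults are illustrative, not Sakana.AI's implementation):

```python
import subprocess

def run_experiment(script_path: str, timeout_s: int = 600) -> str:
    """Run a generated experiment script with a hard, externally enforced timeout."""
    try:
        result = subprocess.run(
            ["python", script_path],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # enforced by the host, not by the model's own config
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "experiment killed: exceeded its time budget"
```

Because the limit lives in the host process, nothing the model writes into its own configuration files can extend it.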
Still, the AI Scientist is not meant to replace human researchers but to complement their work, its creators say.
"I think with many of the AI tools, we are hoping that they are not going to replace humans entirely, but rather make it possible for humans to work at the level of abstraction that they want to work on and at which they're really, really great," Lange said.
He further explained that, given the current limitations of AI models, human verification is still important to ensure the accuracy and reliability of AI-generated research. Human researchers will also remain essential in areas like peer review and setting research directions, he said.
Ethical use of AI in science
As the integration of AI into scientific research progresses, Lange emphasises that transparency is necessary.
One way to do that is to add watermarks to AI-generated papers, which could ensure that AI contributions are openly disclosed.
"I'm a big believer in sort of making sure that all of these things are developed collectively, as well as iteratively, so that we can make sure that they are safely deployed," Lange said.
Having open source code for the models and being transparent about their development could also support the ethical use of these AI systems in science.
"We're thinking that open source models can add a lot to this discussion. So basically, along the lines of democratisation, I think given that this process is so cheap, everyone can get involved and should, early on, get involved," Lange said.
He added that he is hopeful the AI Scientist project will ignite a larger conversation about the future of scientific research and where AI fits into it.
"We hope that this paper can, or this project can, sort of spark a big discussion in the community for how to conduct science in the next years going forward, and also, maybe on a broader scale, about what a scientific contribution actually, at its core, is".