AI labs and publishers can work together to share paywalled content ethically, promoting research and innovation.
As part of Google's recent string of major AI announcements, the company launched Deep Research, a tool aimed at academic research and education. It runs within the Gemini chatbot and can search hundreds of websites within minutes.
"Gemini models are moving into agent-based areas that reason, plan, and act in the real world," said Google Deepmind CEO Demis Hassabis in a podcast, reflecting on this leap towards the next generation of agent-based systems serving as universal digital assistants. Zoubin Ghahramani, VP of research at Google Deepmind, revealed it was his dream to bring these "intelligent agents" to life to simplify research.
It has, perhaps, been Google's most useful innovation since NotebookLM. "The new Deep Research feature from Google feels like one of the most appropriate 'Google-y' uses of AI to date and is quite impressive," said Wharton School professor Ethan Mollick, who had early access to the tool. He noted that while paywalls around academic sources impose certain limitations, the output is at least accurate at an undergraduate level.
Mollick had previously noted that o1 excels at solving PhD-level problems and has applications in science and finance, but that further R&D is needed to unlock its full potential.
While reasoning features in LLMs are already offered by Anthropic (Claude Haiku), OpenAI (o1), DeepSeek (R1 Lite Preview), Perplexity and others, several providers are now incorporating search capabilities into them as well.
Perplexity CEO Aravind Srinivas took to X to compare the two tools. Unlike Perplexity Pro, which is suited to more routine searches, Gemini's Deep Research is tailored for more intensive research tasks.
"You cannot ask Gemini Deep Research normal LLM questions, answers take minutes - longer than o1-pro - to produce, and it looks at hundreds of sources," Dean W Ball, a tech journalist, shared on X. He noted that policy research is well-received, while medical queries may face paywalls.
"Perplexity and ChatGPT search has chipped away at Google's search dominance before, but Google has the most training data (Google Index, YouTube, etc.), distribution channel and lots of AI talent," Hyperbolic co-founder Yuchen Jin posted on X while commenting on Google's swift comeback.
Yet many see hallucinations and inaccuracies as very real challenges with LLMs. Meta's Galactica, which was pulled after producing misleading, biased outputs with fabricated citations, raised concerns about misinformation and its potential to undermine scientific integrity.
In August, Japan-based Sakana AI introduced the 'AI Scientist', a system that uses LLMs to conduct research independently, from generating ideas to writing and reviewing papers, at a cost of under $15 per paper. Studies have shown how AI has transitioned from a passive tool to an active partner in scientific research.
Recently, researchers from Stanford introduced a multi-agent AI architecture designed to mimic an interdisciplinary team of scientists. Tools like o1 are also playing a key role in sectors such as health sciences, assisting in the search for cures for rare diseases. For instance, Derya Unutmaz, a professor at The Jackson Laboratory, revealed in a post on X that he is using o1 pro for a cancer therapy project.
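The general idea behind such multi-agent setups is to give several model instances different roles and let one of them critique and consolidate the others' drafts. The sketch below illustrates that pattern only in outline; the role names and the call_llm stub are illustrative assumptions, not the Stanford architecture or Sakana's pipeline.

# Minimal sketch of a role-based multi-agent round: specialist drafts are pooled, then critiqued.
# The roles and the call_llm stub are illustrative assumptions, not any published system.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "stubbed output for: " + prompt[:60]

SPECIALISTS = ["immunologist", "computational biologist", "clinician"]

def team_round(research_question: str) -> str:
    # Each specialist agent proposes an approach from its own disciplinary perspective.
    drafts = {role: call_llm(f"As a {role}, propose an approach to: {research_question}")
              for role in SPECIALISTS}
    # A critic agent reviews the pooled drafts and returns a consolidated plan.
    return call_llm(f"As a critic, merge and critique these proposals:\n{drafts}")

if __name__ == "__main__":
    print(team_round("Identify candidate pathways for a rare autoimmune disease."))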
Going forward, collaboration between AI labs and publishers could enable ethical access to paywalled content, supporting agentic research, knowledge sharing, and innovation.
Additionally, reports suggest that "education equity" efforts may not be working as intended in low-income areas, a gap that personalised education could help close. "Personalised education is necessary and important. One size fits all, and feel-good equity doesn't serve the kids," said Y Combinator chief Garry Tan, speaking about funding startups that work at the intersection of agentic AI, research, and education.
Ultimately, these tools are only as useful as people's ability to find the right ways to apply them. "People don't realise that the AI Labs mostly do lab things - shipping models to beat the other labs. AI is a general technology. They have no idea of what the ideal use cases for your job or industry are or how good their models are at those things. You have to figure it out," Mollick commented on X, reflecting on the potential of current LLMs to improve our daily lives.