3 Sources
[1]
Hugging Face co-founder Thomas Wolf just challenged Anthropic CEO's vision for AI's future -- and the $130 billion industry is taking notice
Thomas Wolf, co-founder of AI company Hugging Face, has issued a stark challenge to the tech industry's most optimistic visions of artificial intelligence, arguing that today's AI systems are fundamentally incapable of delivering the scientific revolutions their creators promise.

In a provocative blog post published on his personal website this morning, Wolf directly confronts the widely circulated vision of Anthropic CEO Dario Amodei, who predicted that advanced AI would deliver a "compressed 21st century" where decades of scientific progress could unfold in just years.

"I'm afraid AI won't give us a 'compressed 21st century,'" Wolf writes in his post, arguing that current AI systems are more likely to produce "a country of yes-men on servers" rather than the "country of geniuses" that Amodei envisions.

The exchange highlights a growing divide in how AI leaders think about the technology's potential to transform scientific discovery and problem-solving, with major implications for business strategies, research priorities, and policy decisions.

From straight-A student to 'mediocre researcher': Why academic excellence doesn't equal scientific genius

Wolf grounds his critique in personal experience. Despite being a straight-A student who attended MIT, he describes discovering he was a "pretty average, underwhelming, mediocre researcher" when he began his PhD work. This experience shaped his view that academic success and scientific genius require fundamentally different mental approaches -- the former rewarding conformity, the latter demanding rebellion against established thinking.

"The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students," Wolf explains.
"A real science breakthrough is Copernicus proposing, against all the knowledge of his days -- in ML terms we would say 'despite all his training dataset' -- that the earth may orbit the sun rather than the other way around."

Amodei's vision, published last October in his "Machines of Loving Grace" essay, presents a radically different perspective. He describes a future where AI, operating at "10x-100x human speed" and with intellect exceeding Nobel Prize winners, could deliver a century's worth of progress in biology, neuroscience, and other fields within 5-10 years. Amodei envisions "reliable prevention and treatment of nearly all natural infectious disease," "elimination of most cancer," effective cures for genetic disease, and potentially doubling human lifespan, all accelerated by AI. "I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them," he writes.

Are we testing AI for conformity instead of creativity? The benchmark problem holding back scientific discovery

This tension at the heart of Wolf's critique reveals an often-overlooked reality in AI development: our benchmarks are primarily designed to measure convergent thinking rather than divergent thinking. Current AI systems excel at producing answers that align with existing knowledge consensus, but struggle with the kind of contrarian, paradigm-challenging insights that drive scientific revolutions.

The industry has invested heavily in measuring how well AI systems can answer questions with established answers, solve problems with known solutions, and fit within existing frameworks of understanding. This creates a systemic bias toward systems that conform rather than challenge.
Wolf specifically critiques current AI evaluation benchmarks like "Humanity's Last Exam" and "Frontier Math," which test AI systems on difficult questions with known answers rather than their ability to generate innovative hypotheses or challenge existing paradigms. "These benchmarks test if AI models can find the right answers to a set of questions we already know the answer to," Wolf writes. "However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas."

This critique points to a deeper issue in how we conceptualize artificial intelligence. The current focus on parameter count, training data volume, and benchmark performance may be creating the AI equivalent of excellent students rather than revolutionary thinkers.

Billions at stake: How the 'obedient students vs. revolutionaries' debate will shape AI investment strategy

This intellectual divide has substantial implications for the AI industry and the broader business ecosystem. Companies aligning with Amodei's vision might prioritize scaling AI systems to unprecedented sizes, expecting discontinuous innovation to emerge from increased computational power and broader knowledge integration. This approach underpins the strategies of firms like Anthropic, OpenAI, and other frontier AI labs that have collectively raised tens of billions of dollars in recent years.

Conversely, Wolf's perspective suggests greater returns might come from developing AI systems specifically designed to challenge existing knowledge, explore counterfactuals, and generate novel hypotheses -- capabilities not necessarily emerging from current training methodologies. "We're currently building very obedient students, not revolutionaries," Wolf explains. "This is perfect for today's main goal in the field of creating great assistants and overly compliant helpers.
But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions yet."

For enterprise leaders betting on AI to drive innovation, this debate raises crucial strategic questions. If Wolf is correct, organizations investing in current AI systems with the expectation of revolutionary scientific breakthroughs may need to temper their expectations. The real value may lie in more incremental improvements to existing processes, or in deploying human-AI collaborative approaches where humans provide the paradigm-challenging intuitions while AI systems handle the computational heavy lifting.

The $130 billion question: Is AI ready to deliver on its scientific promises?

This exchange comes at a pivotal moment in the AI industry's evolution. After years of explosive growth in AI capabilities and investment, both public and private stakeholders are increasingly focused on practical returns from these technologies. Recent data from venture capital analytics firm PitchBook shows AI funding reached $130 billion globally in 2024, with healthcare and scientific discovery applications attracting particular interest. Yet questions about tangible scientific breakthroughs from these investments have grown more insistent.

The Wolf-Amodei debate represents a deeper philosophical divide in AI development that has been simmering beneath the surface of industry discussions. On one side stand the scaling optimists, who believe that continuous improvements in model size, data volume, and training techniques will eventually yield systems capable of revolutionary insights. On the other side are architecture skeptics, who argue that fundamental limitations in how current systems are designed may prevent them from making the kind of cognitive leaps that characterize scientific revolutions.
What makes this debate particularly significant is that it's occurring between two respected leaders who have both been at the forefront of AI development. Neither can be dismissed as simply uninformed or resistant to technological progress.

Beyond scaling: How tomorrow's AI might need to think more like scientific rebels

The tension between these perspectives points to a potential evolution in how AI systems are designed and evaluated. Wolf's critique doesn't suggest abandoning current approaches, but rather augmenting them with new techniques and metrics specifically aimed at fostering contrarian thinking. In his post, Wolf suggests that new benchmarks should be developed to test whether scientific AI models can "challenge their own training data knowledge" and "take bold counterfactual approaches." This represents a call not for less AI investment, but for more thoughtful investment that considers the full spectrum of cognitive capabilities needed for scientific progress.

This nuanced view acknowledges AI's tremendous potential while recognizing that current systems may excel at particular types of intelligence while struggling with others. The path forward likely involves developing complementary approaches that leverage the strengths of current systems while finding ways to address their limitations.

For businesses and research institutions navigating AI strategy, the implications are substantial. Organizations may need to develop evaluation frameworks that assess not just how well AI systems answer existing questions, but how effectively they generate new ones. They may need to design human-AI collaboration models that pair the pattern-matching and computational abilities of AI with the paradigm-challenging intuitions of human experts.
Finding the middle path: How AI could combine computational power with revolutionary thinking

Perhaps the most valuable outcome of this exchange is that it pushes the industry toward a more balanced understanding of both AI's potential and its limitations. Amodei's vision offers a compelling reminder of the transformative impact AI could have across multiple domains simultaneously. Wolf's critique provides a necessary counterbalance, highlighting the specific types of cognitive capabilities needed for truly revolutionary progress.

As the industry moves forward, this tension between optimism and skepticism, between scaling existing approaches and developing new ones, will likely drive the next wave of innovation in AI development. By understanding both perspectives, organizations can develop more nuanced strategies that maximize the potential of current systems while also investing in approaches that address their limitations.

For now, the question isn't whether Wolf or Amodei is correct, but rather how their contrasting visions can inform a more comprehensive approach to developing artificial intelligence that doesn't just excel at answering the questions we already have, but helps us discover the questions we haven't yet thought to ask.
[2]
Hugging Face's chief science officer worries AI is becoming 'yes-men on servers' | TechCrunch
AI company founders have a reputation for making bold claims about the technology's potential to reshape fields, particularly the sciences. But Thomas Wolf, Hugging Face's co-founder and chief science officer, has a more measured take.

In an essay published to X on Thursday, Wolf said that he feared AI becoming "yes-men on servers" absent a breakthrough in AI research. He elaborated that current AI development paradigms won't yield AI capable of outside-the-box, creative problem-solving -- the kind of problem-solving that wins Nobel Prizes.

"The main mistake people usually make is thinking [people like] Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student," Wolf wrote. "To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask."

Wolf's assertions stand in contrast to those from OpenAI CEO Sam Altman, who in an essay earlier this year said that "superintelligent" AI could "massively accelerate scientific discovery." Similarly, Anthropic CEO Dario Amodei has predicted AI could help formulate cures for most types of cancer.

Wolf's problem with AI today -- and where he thinks the technology is heading -- is that it doesn't generate any new knowledge by connecting previously unrelated facts. Even with most of the internet at its disposal, AI as we currently understand it mostly fills in the gaps between what humans already know, Wolf said.

Some AI experts, including ex-Google engineer Francois Chollet, have expressed similar views, arguing that while AI might be capable of memorizing reasoning patterns, it's unlikely it can generate "new reasoning" based on novel situations. Wolf thinks that AI labs are building what are essentially "very obedient students" -- not scientific revolutionaries in any sense of the phrase.
AI today isn't incentivized to question and propose ideas that potentially go against its training data, he said, limiting it to answering known questions. Wolf added: "One that writes 'What if everyone is wrong about this?' when all textbooks, experts, and common knowledge suggest otherwise."

Wolf thinks that the "evaluation crisis" in AI is partly to blame for this disenchanting state of affairs. He points to benchmarks commonly used to measure AI system improvements, most of which consist of questions that have clear, obvious, and "close-ended" answers.

As a solution, Wolf proposes that the AI industry "move to a measure of knowledge and reasoning" that's able to elucidate whether AI can take "bold counterfactual approaches," make general proposals based on "tiny hints," and ask "non-obvious questions" that lead to "new research paths." The trick will be figuring out what this measure looks like, Wolf admits. But he thinks that it could be well worth the effort.

"[T]he most crucial aspect of science [is] the skill to ask the right questions and to challenge even what one has learned," Wolf said. "We don't need an A+ [AI] student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed."
[3]
Sam Altman Predicts an AI-Fueled Scientific Explosion -- Hugging Face's Chief Scientist Isn't Convinced
As the race to build ever more powerful artificial intelligence continues, OpenAI CEO Sam Altman believes decades of AI progress will unfold within the next few years. However, not everyone in the industry is as optimistic. Thomas Wolf, chief science officer at AI firm Hugging Face, believes that today's AI isn't built for true scientific breakthroughs and instead risks becoming "yes-men on servers." Wolf warned that unless AI learns to challenge assumptions and ask bold new questions, it won't fuel the scientific explosion Altman predicts is just around the corner.

Hugging Face Not Convinced by AI's Scientific Explosion

In a lengthy post published to X on Thursday, March 8, Hugging Face's chief scientist argued that AI is like a group of "straight A" students. "The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student," Wolf wrote. Wolf said this misses the most crucial aspect of science: the skill to ask the right questions and to challenge one's own learning. "A real science breakthrough is Copernicus proposing, against all the knowledge of his days -- in ML terms we would say 'despite all his training dataset' -- that the earth may orbit the sun rather than the other way around."

Wolf notes that recent benchmark tests for AI usually feature extremely difficult questions but come with "clear, closed-end answers." "However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas," Wolf wrote.

Sam Altman's AI Vision

Wolf's perspective casts doubt on Altman's grand vision for AI.
Last month, Altman wrote on his blog that he believed "superintelligent" AI could "massively accelerate scientific discovery." "...we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential," Altman said.

The OpenAI CEO said that the company was beginning to roll out AI agents, which would eventually begin to feel like virtual co-workers. "Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long," he wrote. Altman claimed the agent "will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others." Regardless, Altman claimed that AI, in some ways, "may turn out to be like the transistor economically -- a big scientific discovery that scales well and that seeps into almost every corner of the economy."

AI Reasoning

Wolf believes that to combat a possibly narrow-minded future of AI, we do not "need a system that knows all the answers [...] but rather one that can ask questions nobody else has thought of or dared to ask." "One that writes 'What if everyone is wrong about this?' when all textbooks, experts, and common knowledge suggest otherwise," Wolf added.

Wolf is not alone in his concerns about the future of AI. François Chollet, an ex-Google engineer, expressed skepticism about AI's ability to generate new reasoning in novel situations. Chollet believes that while AI models may excel at memorizing and reproducing known reasoning patterns, they lack the capacity to adapt beyond their training data.
Thomas Wolf, co-founder of Hugging Face, argues that current AI systems lack the ability to drive scientific revolutions, contradicting optimistic visions of AI's future in scientific discovery.
Thomas Wolf, co-founder and chief science officer of AI company Hugging Face, has sparked a debate in the AI industry by challenging the optimistic visions of AI's potential to revolutionize scientific discovery. In a provocative blog post, Wolf argues that current AI systems are fundamentally incapable of delivering the scientific breakthroughs their creators promise [1].
Wolf's critique directly confronts the vision presented by Anthropic CEO Dario Amodei, who predicted that advanced AI would deliver a "compressed 21st century" where decades of scientific progress could unfold in just years. Amodei envisioned AI operating at "10x-100x human speed" with intellect exceeding Nobel Prize winners, potentially leading to breakthroughs in biology, neuroscience, and other fields [1].
Similarly, OpenAI CEO Sam Altman has expressed belief that "superintelligent" AI could "massively accelerate scientific discovery," potentially leading to cures for all diseases and significant advancements in human potential [3].
Wolf argues that current AI systems are more likely to produce "a country of yes-men on servers" rather than the "country of geniuses" envisioned by AI optimists. He contends that today's AI excels at producing answers that align with existing knowledge consensus but struggles with the kind of contrarian, paradigm-challenging insights that drive scientific revolutions [2].
Wolf criticizes current AI evaluation benchmarks, such as "Humanity's Last Exam" and "Frontier Math," which test AI systems on difficult questions with known answers. He argues that these benchmarks fail to measure AI's ability to generate innovative hypotheses or challenge existing paradigms [1].
"We're currently building very obedient students, not revolutionaries," Wolf explains. "This is perfect for today's main goal in the field of creating great assistants and overly compliant helpers. But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions yet." [1]
This debate has significant implications for the AI industry and broader business ecosystem. Companies aligning with Amodei's and Altman's visions might prioritize scaling AI systems to unprecedented sizes, expecting discontinuous innovation to emerge from increased computational power and broader knowledge integration [1].
Wolf's perspective, however, suggests that greater returns might come from developing AI systems specifically designed to challenge existing knowledge, explore counterfactuals, and generate novel hypotheses. He proposes that the AI industry "move to a measure of knowledge and reasoning" that can elucidate whether AI can take "bold counterfactual approaches," make general proposals based on "tiny hints," and ask "non-obvious questions" that lead to "new research paths" [2].
As the debate unfolds, it's clear that the future direction of AI development and its potential impact on scientific discovery remain contentious issues within the industry. The outcome of this intellectual divide could shape the trajectory of AI research and investment for years to come.
Summarized by Navi