2 Sources
[1]
Generative AI's most prominent skeptic doubles down
Vancouver (AFP) - Two and a half years since ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative. The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among the 15,000 attendees focused on generative AI's seemingly infinite promise. Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI), technology that could match and even surpass human capability.

That optimism has driven OpenAI's valuation to $300 billion, unprecedented for a startup, with billionaire Elon Musk's xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited. The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained. This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes. Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.
Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize."

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much. "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."
[2]
Generative AI's most prominent skeptic doubles down
Same AFP wire story as [1], as syndicated by the Economic Times.
Gary Marcus, a prominent AI skeptic, challenges the hype surrounding generative AI and large language models, advocating for alternative approaches to achieve true artificial intelligence.
Two and a half years after ChatGPT's debut, scientist and writer Gary Marcus continues to be generative artificial intelligence's most prominent skeptic. At the Web Summit in Vancouver, Canada, Marcus reiterated his counter-narrative to Silicon Valley's AI enthusiasm, challenging the fundamental promises of the technology [1].
Source: France 24
Marcus's skepticism is rooted in his belief that generative AI, particularly the large language models (LLMs) powering it, is inherently flawed. He argues that these models will never fulfill the grand promises made by Silicon Valley. "I'm skeptical of AI as it is currently practiced," Marcus stated, adding, "I think AI could have tremendous value, but LLMs are not the way there" [1].
Despite the hype surrounding generative AI, Marcus points out that its practical gains remain limited. The technology primarily excels at coding assistance and text generation for office work. AI-generated images, while entertaining, often serve as memes or deepfakes with little tangible benefit to society or business [2].
Source: Economic Times
A longtime New York University professor, Marcus champions a fundamentally different approach to building AI. He advocates for neurosymbolic AI, which attempts to rebuild human logic artificially rather than training computer models on vast datasets. Marcus warns that the current focus on LLMs may starve out potentially superior alternative approaches [1].
One of the most significant issues with current AI technology is its tendency to produce confident-sounding mistakes, known as hallucinations. Marcus recalls a telling exchange with LinkedIn founder Reid Hoffman, who was overly optimistic about solving this problem quickly. This persistent flaw undermines the reliability of generative AI in many professional contexts [2].
Looking ahead, Marcus warns of potential darker consequences as investors realize generative AI's limitations. He predicts that companies like OpenAI may turn to monetizing user data to satisfy investors seeking returns. "The people who put in all this money will want their returns, and I think that's leading them toward surveillance," Marcus cautioned, highlighting potential Orwellian risks for society [1].
While critical of the current trajectory, Marcus acknowledges that generative AI will find useful applications in areas where occasional errors are less consequential. He sees potential in "auto-complete on steroids" for coding and brainstorming. However, he remains skeptical about the profitability of these applications, citing high operational costs and lack of product differentiation [2].