Curated by THEOUTPOST
On Fri, 7 Mar, 12:02 AM UTC
4 Sources
[1]
Ex-Google CEO Eric Schmidt Cautions Trump Administration Against Global Race For Superintelligent AI Citing Possible 'Hostile Countermeasures'
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks have issued a warning against a global race to develop superintelligent AI.

What Happened: In a paper titled "Superintelligence Strategy," the trio expressed concern over the U.S. government's potential pursuit of artificial general intelligence (AGI) in a manner akin to the Manhattan Project. The experts worry that such a race could trigger dangerous global conflicts, reminiscent of the nuclear arms race. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure," the co-authors wrote.

The authors argue for a cautious approach to AI development rather than a competition to surpass global rivals. They introduce a novel concept -- Mutual Assured AI Malfunction (MAIM) -- inspired by the nuclear arms race's Mutually Assured Destruction (MAD). The paper also recommends that nations participate in nonproliferation efforts and deterrence strategies, akin to their approach to nuclear weapons.

Why It Matters: Schmidt's concerns were amplified by President Donald Trump's January announcement of a $500 billion investment in AI, dubbed the 'Stargate Project.' The Trump administration has also reversed AI regulations implemented by the previous administration. On earlier occasions, Schmidt cautioned that the West needs to prioritize a combination of open- and closed-source AI models to prevent China from taking the lead. Notably, OpenAI's GPT-4, Alphabet Inc.'s Google Gemini, and Anthropic's Claude are closed-source.

In sharp contrast to Schmidt, Vice President JD Vance stated, "We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off." The U.S. and the U.K. also declined to sign a global AI safety declaration at the AI Action Summit in Paris in February.
[2]
Ex-Googler Schmidt warns US against AI 'Manhattan Project'
That's Mutual Assured AI Malfunction in the race for superintelligence

ANALYSIS Former Google chief Eric Schmidt says the US should refrain from pursuing a latter-day "Manhattan Project" to gain AI supremacy, as this would provoke preemptive cyber responses from rivals such as China that could lead to escalation.

Schmidt is one of three co-authors of a paper that likens artificial intelligence to nuclear weapons during the Cold War, and warns that the race to develop increasingly sophisticated AIs could disrupt the global balance of power and raise the odds of great power conflict.

The paper, "Superintelligence Strategy," posits that rapid advances in AI are poised to reshape nearly every aspect of society, but that governments see them as a means to military dominance, which will drive a "bitter race" to maximize AI capabilities. The paper claims that the development of a "superintelligent" AI surpassing humans in nearly every domain would be the most precarious technological advancement since the atomic bomb. The other authors are Dan Hendrycks, director of the Center for AI Safety, and Alexandr Wang, founder and CEO of Scale AI.

The unstated assumption is that the US is the country that will lead the way in AI development, while China would be the fearful aggressor making some kind of preemptive strike. It doesn't seem to have occurred to the authors that China recently surprised the world with AI capabilities it was not thought to possess, or that most cyber threats come from Russia. Another unstated assumption is that "superintelligence" is actually possible at all, and not just a pipe dream.

Any state that succeeds in producing a superior AI poses a direct threat to the survival of its peers, the authors assert, so states seeking to secure their own survival will be forced to sabotage such destabilizing AI projects as a deterrent. This might range from covert operations that degrade training runs to outright physical damage disabling AI infrastructure.

Thus, the paper states, we are already approaching a dynamic similar to nuclear Mutual Assured Destruction (MAD) - such as the state of détente that developed during the Cold War - in which no power dares attempt an outright grab for strategic monopoly, as any such effort would invite a "debilitating" response. Schmidt and company christen this Mutual Assured AI Malfunction, or MAIM - a combination of words that seems likely to have been chosen for its acronym. Under MAIM, they posit, AI projects developed by states are constrained by mutual threats of sabotage.

Yet AI technology also has the potential to deliver benefits across numerous areas of society, from medical breakthroughs to automation. Embracing AI's benefits is important for economic growth and progress in the modern world, the authors believe.

States grappling with these challenges can follow one of three strategies, according to the paper. The first is a hands-off approach, with no restrictions on AI developers, chips, or models. Proponents of this strategy insist that the US government impose no limitations on AI companies, lest they curtail innovation and give China an advantage.
The second is a worldwide voluntary moratorium strategy to halt further AI advances, either immediately or once certain hazardous capabilities, such as hacking or autonomous operation, are detected. The third is a monopoly strategy, in which an international consortium along the lines of CERN in Europe would lead global AI development.

After outlining these three alternatives, the authors highlight a proposal from the US-China Economic and Security Review Commission (USCC) to pour US government funding into a kind of Manhattan Project to build superintelligence. This would invoke the Defense Production Act to channel resources into a remote site dedicated to developing a super AI to gain a strategic monopoly. Such a strategy would inevitably raise alarm, and China would not sit idle waiting to be dictated to by the US once it achieves superintelligence, the authors warn. It assumes that rivals would accept a lasting imbalance of power rather than act to prevent it, thereby undermining the very stability the strategy purports to secure.

The paper concludes that states should prioritize deterrence over winning the race for superintelligence. Under MAIM, any state seeking a strategic monopoly on AI power will face retaliatory responses from rivals, paired with non-proliferation agreements - similar to nuclear arms control - aimed at restricting AI chips and open-weight models to limit rogue actors.

"States that act with pragmatism instead of fatalism or denial may find themselves beneficiaries of a great surge in wealth. As AI diffuses across countless sectors, societies can raise living standards and individuals can improve their well-being however they see fit."

Fat chance of that happening. The US is far more likely to choose the hands-off strategy and let its tech sector do whatever it wants with no restrictions. At least we can console ourselves that any "superintelligence" is likely a long way from realization. ®
[3]
Eric Schmidt argues against a 'Manhattan Project for AGI'
In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI.

The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

Co-authored by three highly influential figures in America's AI industry, the paper comes just a few months after a U.S. congressional commission proposed a Manhattan Project-style effort to fund AGI development, modeled after America's atomic bomb program in the 1940s. U.S. Secretary of Energy Chris Wright recently said the U.S. is at "the start of a new Manhattan Project" on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.

The Superintelligence Strategy paper challenges the idea, championed by several American policy and industry leaders in recent months, that a government-backed program pursuing AGI is the best way to compete with China.

In the view of Schmidt, Wang, and Hendrycks, the U.S. is in something of an AGI standoff not dissimilar to mutually assured destruction. Just as global powers do not seek monopolies over nuclear weapons -- which could trigger a preemptive strike from an adversary -- Schmidt and his co-authors argue that the U.S. should be cautious about racing toward dominance over extremely powerful AI systems.

While likening AI systems to nuclear weapons may sound extreme, world leaders already consider AI a top military advantage. The Pentagon says AI is already helping speed up the military's kill chain.

Schmidt et al. introduce a concept they call Mutual Assured AI Malfunction (MAIM), in which governments could proactively disable threatening AI projects rather than waiting for adversaries to weaponize AGI.

Schmidt, Wang, and Hendrycks propose that the U.S. shift its focus from "winning the race to superintelligence" to developing methods that deter other countries from creating superintelligent AI. The co-authors argue the government should "expand [its] arsenal of cyberattacks to disable threatening AI projects" controlled by other nations, as well as limit adversaries' access to advanced AI chips and open-source models.

The co-authors identify a dichotomy that has played out in the AI policy world: the "doomers," who believe catastrophic outcomes from AI development are a foregone conclusion and advocate for slowing AI progress, and the "ostriches," who believe nations should accelerate AI development and essentially hope it will all work out. The paper proposes a third way: a measured approach to developing AGI that prioritizes defensive strategies.

That strategy is particularly notable coming from Schmidt, who has previously been vocal about the need for the U.S. to compete aggressively with China in developing advanced AI systems.
Just a few months ago, Schmidt published an op-ed saying DeepSeek marked a turning point in America's AI race with China. The Trump administration seems dead set on pushing ahead with America's AI development. However, as the co-authors note, America's decisions around AGI don't exist in a vacuum. As the world watches America push the limits of AI, Schmidt and his co-authors suggest it may be wiser to take a defensive approach.
[4]
Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM)
The U.S. should not create its own Manhattan Project for AI, because such a project would invite retaliation from adversaries. Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors of a new paper called "Superintelligence Strategy" that warns against the U.S. government creating a Manhattan Project for so-called artificial general intelligence (AGI), because it could quickly spiral out of control around the world. The gist of the argument is that such a program would invite retaliation or sabotage by adversaries as countries race to have the most powerful AI capabilities on the battlefield. Instead, the U.S. should focus on developing methods, such as cyberattacks, that could disable threatening AI projects.

Schmidt and Wang are big boosters of AI's potential to advance society through applications like drug development and workplace efficiency. Governments, meanwhile, see it as the next frontier in defense, and the two industry leaders are essentially concerned that countries will end up in a race to create weapons with increasingly dangerous potential. Similar to how international agreements have reined in the development of nuclear weapons, Schmidt and Wang believe nation-states should go slow on AI development and not fall prey to racing one another in AI-powered killing machines.

At the same time, however, both Schmidt and Wang are building AI products for the defense sector. The former's White Stork is building autonomous drone technologies, while Wang's Scale AI this week signed a contract with the Department of Defense to create AI "agents" that can assist with military planning and operations. After years of shying away from selling technology that could be used in warfare, Silicon Valley is now patriotically lining up to collect lucrative defense contracts.

All military defense contractors have a conflict of interest to promote kinetic warfare, even when it is not morally justified. Other countries have their own military-industrial complexes, the thinking goes, so the U.S. needs to maintain one too. But in the end, innocent people suffer and die while powerful people play chess.

Palmer Luckey, the founder of defense tech darling Anduril, has argued that AI-powered targeted drone strikes are safer than launching nukes that could have a larger impact zone, or planting land mines that have no targeting. And if other countries are going to continue building AI weapons, the argument goes, we should have the same capabilities as deterrence. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment over enemy lines. Anduril recently ran an ad campaign displaying the basic text "Work at Anduril.com" covered with the word "Don't" written in giant, graffiti-style spray-painted letters, seemingly playing to the idea that working for the military-industrial complex is the counterculture now.

Schmidt and Wang have argued that humans should always remain in the loop on any AI-assisted decision making. But as recent reporting has demonstrated, the Israeli military is already relying on faulty AI programs to make lethal decisions. Drones have long been a divisive topic, as critics say that soldiers are more complacent when they are not directly in the line of fire or do not see the consequences of their actions firsthand. Image recognition AI is notorious for making mistakes, and we are quickly heading to a point where killer drones fly back and forth hitting imprecise targets.
The Schmidt and Wang paper makes a lot of assumptions that AI is soon going to be "superintelligent," capable of performing as well as, if not better than, humans at most tasks. That is a big assumption, as the most cutting-edge "thinking" models continue to produce major gaffes, and companies get flooded with poorly written, AI-assisted job applications. These models are crude imitations of humans with often unpredictable and strange behavior.

Schmidt and Wang are selling a vision of the world, and their solutions. If AI is going to be all-powerful and dangerous, governments should go to them and buy their products, because they are the responsible actors. In the same vein, OpenAI's Sam Altman has been criticized for making lofty claims about the risks of AI, which some say is an attempt to influence policy in Washington and capture power. It is sort of like saying, "AI is so powerful it can destroy the world, but we have a safe version we are happy to sell you."

Schmidt's warnings are not likely to have much impact as President Trump drops Biden-era guidelines around AI safety and pushes for the U.S. to become a dominant force in AI. Last November, a congressional commission proposed the Manhattan Project for AI that Schmidt is warning about, and as people like Sam Altman and Elon Musk gain greater influence in Washington, it is easy to see the idea gaining traction. If that continues, the paper warns, countries like China might retaliate in ways such as intentionally degrading models or attacking physical infrastructure. It is not an unheard-of threat: China has wormed its way into major U.S. tech companies like Microsoft, and others like Russia are reportedly using freighter ships to strike undersea fiber optic cables. Of course, we would do the same to them. It's all mutual.

It is unclear how the world could come to any agreement to stop playing with these weapons. In that sense, the idea of sabotaging AI projects to defend against them might be a good thing.
Eric Schmidt, along with other tech leaders, cautions against a global race for superintelligent AI, warning of potential conflicts and proposing a new deterrence strategy.
Eric Schmidt, former CEO of Google, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, has issued a stark warning against a global race to develop superintelligent AI. In a paper titled "Superintelligence Strategy," the trio expresses concerns over the potential pursuit of artificial general intelligence (AGI) by governments in a manner akin to the Manhattan Project [1].
The authors argue that a government-backed program pursuing AGI, similar to the one proposed by a U.S. congressional commission, could lead to dangerous global conflicts reminiscent of the nuclear arms race. They warn that such an approach could provoke preemptive cyber responses from rivals like China, potentially escalating tensions and undermining global stability [2].
Schmidt and his co-authors introduce a novel concept called Mutual Assured AI Malfunction (MAIM), inspired by the Cold War's Mutually Assured Destruction (MAD) doctrine. Under MAIM, they posit that AI projects developed by states would be constrained by mutual threats of sabotage [3].
The paper outlines three potential strategies for states grappling with AI challenges: a hands-off approach with no restrictions on AI developers, chips, or models; a worldwide voluntary moratorium on further AI advances; and a monopoly strategy in which an international consortium, along the lines of CERN, would lead global AI development [2].
The authors argue against the monopoly strategy in particular, warning that it could prompt hostile countermeasures from rival nations [2].
Schmidt, Wang, and Hendrycks propose that the U.S. shift its focus from "winning the race to superintelligence" to developing methods that deter other countries from creating superintelligent AI. They suggest expanding the arsenal of cyberattacks to disable threatening AI projects and limiting adversaries' access to advanced AI chips and open-source models [3].
The paper comes at a time when the U.S. government, under the Trump administration, is pushing for accelerated AI development. The administration has reversed AI regulations and announced a $500 billion investment in AI, dubbed the 'Stargate Project' [1].
While the authors advocate for caution in AI development, they also emphasize the importance of embracing AI's potential benefits across numerous areas of society, from medical breakthroughs to automation. They argue that embracing AI's benefits is crucial for economic growth and progress in the modern world [2].
Some critics argue that the paper makes assumptions about the imminent arrival of "superintelligent" AI systems, which may be premature given the current state of AI technology. Others point out potential conflicts of interest, as both Schmidt and Wang are involved in building AI products for the defense sector [4].