2 Sources
[1]
Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable
Human judgement remains central to the launch of nuclear weapons. But experts say it's a matter of when, not if, artificial intelligence will get baked into the world's most dangerous systems.

The people who study nuclear war for a living are certain that artificial intelligence will soon power the deadly weapons. None of them are quite sure what, exactly, that means.

In the middle of July, Nobel laureates gathered at the University of Chicago to listen to nuclear war experts talk about the end of the world. In closed sessions over two days, scientists, former government officials, and retired military personnel enlightened the laureates about the most devastating weapons ever created. The goal was to educate some of the most respected people in the world about those weapons and, at the end of it, have the laureates make policy recommendations to world leaders about how to avoid nuclear war. AI was on everyone's mind.

"We're entering a new world of artificial intelligence and emerging technologies influencing our daily life, but also influencing the nuclear world we live in," Scott Sagan, a Stanford professor known for his research into nuclear disarmament, said during a press conference at the end of the talks.

It's a statement that takes as given the inevitability of governments mixing AI and nuclear weapons -- something everyone I spoke with in Chicago believed in. "It's like electricity," says Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists' Science and Security Board. "It's going to find its way into everything." Latiff is one of the people who help set the Doomsday Clock every year.

"The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is," says Jon Wolfsthal, a nonproliferation expert who's the director of global risk at the Federation of American Scientists and was formerly a special assistant to Barack Obama.

"What does it mean to give AI control of a nuclear weapon? What does it mean to give a [computer chip] control of a nuclear weapon?" asks Herb Lin, a Stanford professor and Doomsday Clock alum. "Part of the problem is that large language models have taken over the debate."

First, the good news. No one thinks that ChatGPT or Grok will get the nuclear codes anytime soon. Wolfsthal tells me that there are a lot of "theological" differences between nuclear experts, but that they're united on that front. "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking," he says.

Still, Wolfsthal has heard whispers of other concerning uses of LLMs in the heart of American power. "A number of people have said, 'Well, look, all I want to do is have an interactive computer available for the president so he can figure out what Putin or Xi will do and I can produce that dataset very reliably. I can get everything that Xi or Putin has ever said and written about anything and have a statistically high probability to reflect what Putin has said,'" he says.
[2]
Experts Warn That AI Is Getting Control of Nuclear Weapons
Nobel laureates met with nuclear experts last month to discuss AI and the end of the world -- and if that sounds like the opening to a sci-fi blockbuster set in the apocalypse, you're not alone.

As Wired reports, the convened experts seemed to broadly agree that it's only a matter of time until AI gets hold of the nuclear codes. Exactly why that needs to be true is hard to pin down, but the feeling of inevitability -- and anxiety -- is palpable in the magazine's reporting.

"It's like electricity," Bob Latiff, a retired US Air Force major general and member of the Bulletin of the Atomic Scientists' Science and Security Board, told Wired. "It's going to find its way into everything."

It's a bizarre situation. AI models have already been shown to exhibit numerous dark streaks, resorting to blackmailing human users at an astonishing rate when threatened with being shut down. In the context of an AI, or networks of AIs, safeguarding a nuclear weapons stockpile, those sorts of poorly understood risks become immense. And that's without getting into a genuine concern among some experts, which also happens to be the plot of the movie "The Terminator": a hypothetical superhuman AI going rogue and turning humanity's nuclear weapons against it.

Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI may not be incentivized to "listen to us anymore," arguing that "people do not understand what happens when you have intelligence at this level." That kind of AI doomerism has been on the minds of tech leaders for many years now, as reality plays a slow-motion game of catch-up.

In their current form, the risks would probably be more banal, since even the best AI models today suffer from rampant hallucinations that greatly undercut the usefulness of their outputs. Then there's the threat of flawed AI tech leaving gaps in our cybersecurity, allowing adversaries -- or even adversary AIs -- to access systems in control of nuclear weapons.

Getting all members of last month's unusual meeting to agree on a topic as fraught as AI proved challenging, with Federation of American Scientists director of global risk Jon Wolfsthal admitting to the publication that "nobody really knows what AI is." They did find some common ground, at least. "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking," Wolfsthal added. Latiff agreed that "you need to be able to assure the people for whom you work there's somebody responsible."

If this all sounds like a bit of a clown show, you're not wrong. Under President Donald Trump, the federal government has been busy jamming AI into every possible domain, often while experts warn that the tech is not yet -- and may never be -- up to the task. Hammering the bravado home, the Department of Energy declared this year that AI is the "next Manhattan Project," referencing the World War II-era project that produced the world's first nuclear bombs. Underscoring the seriousness of the threat, ChatGPT maker OpenAI also struck a deal with the US National Laboratories earlier this year to use its AI for nuclear weapon security.

Last year, Air Force general Anthony Cotton, who's effectively in charge of the US stockpile of nuclear missiles, boasted at a defense conference that the Pentagon is doubling down on AI, arguing that it will "enhance our decision-making capabilities." Fortunately, Cotton stopped short of declaring that we must let the tech assume full control.
"But we must never allow artificial intelligence to make those decisions for us," he added at the time.
Nuclear experts and Nobel laureates discuss the increasing likelihood of AI integration into nuclear weapons systems, raising concerns about decision-making processes and potential risks.
Nuclear experts and Nobel laureates gathered at the University of Chicago in July to discuss the future of nuclear weapons, with artificial intelligence (AI) emerging as a central topic. The consensus among experts is clear: the integration of AI into nuclear weapons systems is not a matter of if, but when [1].
Bob Latiff, a retired US Air Force major general and member of the Bulletin of the Atomic Scientists' Science and Security Board, likened AI's integration to electricity, stating, "It's going to find its way into everything" [1]. This sentiment underscores the perceived inevitability of AI's role in nuclear weapons technology.
One of the primary challenges in addressing AI's integration into nuclear systems is the lack of a clear definition of AI itself. Jon Wolfsthal, director of global risk at the Federation of American Scientists, highlighted this issue, saying, "The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is" [1].
This ambiguity extends to the practical implications of AI in nuclear weapons systems. Herb Lin, a Stanford professor, raised pertinent questions about what it means to give AI, or even a computer chip, control over nuclear weapons [1].
Despite the uncertainties, there is a strong consensus among experts on one crucial point: the necessity of human control in nuclear decision-making. Wolfsthal emphasized, "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking" [1][2].
This stance is echoed by high-ranking military officials. Air Force General Anthony Cotton, who oversees the US nuclear missile stockpile, stated at a defense conference that while AI will "enhance our decision-making capabilities," it must never be allowed to make those decisions for us [2].
The integration of AI into nuclear weapons systems brings forth several concerns:
AI Hallucinations: Current AI models are prone to generating false or misleading information, which could be catastrophic in a nuclear context [2].
Cybersecurity Vulnerabilities: There are fears that flawed AI technology could create gaps in cybersecurity, potentially allowing adversaries or even adversary AIs to access nuclear weapons control systems [2].
The US government has been actively pursuing AI integration across various domains, including nuclear security. The Department of Energy has declared AI the "next Manhattan Project," highlighting its perceived importance [2].
In a significant move, OpenAI, the creator of ChatGPT, has entered into an agreement with the US National Laboratories to use its AI technology for nuclear weapon security [2]. This collaboration underscores the growing intersection of private AI development and national security interests.
As the world grapples with the implications of AI in nuclear weapons systems, the need for clear guidelines, robust safeguards, and continued human oversight becomes increasingly apparent. The discussions among experts and policymakers will play a crucial role in shaping the future of this critical intersection between cutting-edge technology and global security.