3 Sources
[1]
Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable
Human judgement remains central to the launch of nuclear weapons. But experts say it's a matter of when, not if, artificial intelligence will get baked into the world's most dangerous systems.

The people who study nuclear war for a living are certain that artificial intelligence will soon power the deadly weapons. None of them are quite sure what, exactly, that means.

In the middle of July, Nobel laureates gathered at the University of Chicago to listen to nuclear war experts talk about the end of the world. In closed sessions over two days, scientists, former government officials, and retired military personnel enlightened the laureates about the most devastating weapons ever created. The goal was to educate some of the most respected people in the world about one of the most horrifying weapons ever made and, at the end of it, have the laureates make policy recommendations to world leaders about how to avoid nuclear war. AI was on everyone's mind.

"We're entering a new world of artificial intelligence and emerging technologies influencing our daily life, but also influencing the nuclear world we live in," Scott Sagan, a Stanford professor known for his research into nuclear disarmament, said during a press conference at the end of the talks. It's a statement that takes as given the inevitability of governments mixing AI and nuclear weapons -- something everyone I spoke with in Chicago believed in.

"It's like electricity," says Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists' Science and Security Board. "It's going to find its way into everything." Latiff is one of the people who helps set the Doomsday Clock every year.

"The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is," says Jon Wolfsthal, a nonproliferation expert who's the director of global risk at the Federation of American Scientists and was formerly a special assistant to Barack Obama.

"What does it mean to give AI control of a nuclear weapon? What does it mean to give a [computer chip] control of a nuclear weapon?" asks Herb Lin, a Stanford professor and Doomsday Clock alum. "Part of the problem is that large language models have taken over the debate."

First, the good news. No one thinks that ChatGPT or Grok will get nuclear codes anytime soon. Wolfsthal tells me that there are a lot of "theological" differences between nuclear experts, but that they're united on that front. "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking," he says.

Still, Wolfsthal has heard whispers of other concerning uses of LLMs in the heart of American power. "A number of people have said, 'Well, look, all I want to do is have an interactive computer available for the president so he can figure out what Putin or Xi will do and I can produce that dataset very reliably. I can get everything that Xi or Putin has ever said and written about anything and have a statistically high probability to reflect what Putin has said,'" he says.
[2]
Experts Warn That AI Is Getting Control of Nuclear Weapons
Nobel laureates met with nuclear experts last month to discuss AI and the end of the world -- and if that sounds like the opening to a sci-fi blockbuster set in the apocalypse, you're not alone. As Wired reports, the convened experts seemed to broadly agree that it's only a matter of time until an AI will get hold of nuclear codes. Exactly why that needs to be true is hard to pin down, but the feeling of inevitability -- and anxiety -- is palpable in the magazine's reporting.

"It's like electricity," Bob Latiff, a retired US Air Force major general and member of the Bulletin of the Atomic Scientists' Science and Security Board, told Wired. "It's going to find its way into everything."

It's a bizarre situation. AIs have already been shown to exhibit numerous dark streaks, resorting to blackmailing human users at an astonishing rate when threatened with being shut down. In the context of an AI, or networks of AIs, safeguarding a nuclear weapons stockpile, those sorts of poorly understood risks become immense.

And that's without getting into a genuine concern among some experts, which also happens to be the plot of the movie "The Terminator": a hypothetical superhuman AI going rogue and turning humanity's nuclear weapons against it. Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI may not be incentivized to "listen to us anymore," arguing that "people do not understand what happens when you have intelligence at this level." That kind of AI doomerism has been on the minds of tech leaders for many years now, as reality plays a slow-motion game of catch-up.

In their current form, the risks would probably be more banal, since the best AI models today still suffer from rampant hallucinations that greatly undercut the usefulness of their outputs. Then there's the threat of flawed AI tech leaving gaps in our cybersecurity, allowing adversaries -- or even adversary AIs -- to access systems in control of nuclear weapons.

Getting all members of last month's unusual meeting to agree on a topic as fraught as AI proved challenging, with Federation of American Scientists director of global risk Jon Wolfsthal admitting to the publication that "nobody really knows what AI is." They did find at least some common ground. "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking," Wolfsthal added. Latiff agreed that "you need to be able to assure the people for whom you work there's somebody responsible."

If this all sounds like a bit of a clown show, you're not wrong. Under President Donald Trump, the federal government has been busy jamming AI into every possible domain, often while experts warn that the tech is not yet -- and may never be -- up to the task. Hammering the bravado home, the Department of Energy declared this year that AI is the "next Manhattan Project," referencing the World War II-era project that resulted in the world's first nuclear bombs. Underscoring the seriousness of the threat, ChatGPT maker OpenAI also struck a deal with the US National Laboratories earlier this year to use its AI for nuclear weapon security.

Last year, Air Force general Anthony Cotton, who's effectively in charge of the US stockpile of nuclear missiles, boasted at a defense conference that the Pentagon is doubling down on AI, arguing that it will "enhance our decision-making capabilities." Fortunately, Cotton stopped short of declaring that we must let the tech assume full control. "But we must never allow artificial intelligence to make those decisions for us," he added at the time.
[3]
'I don't know what it means to have a Manhattan Project for AI': Nuclear war experts remind us of the frightening risks of our artificial intelligence controlling our nukes
These are just the risks from known unknowns; what about the unknown unknowns?

I'm not old enough to have lived through the Cold War and its close nuclear shaves that apparently had us at the brink of armageddon. But even millennials like myself, and younger generations, have grown up under the shadow of that particular mushroom-shaped threat. And it might just be me, but I don't find my fear of this soothed by our continuous AI advancement.

That's especially the case when I reflect on the possibility of AI being baked into parts of nuclear launch systems, which, as Wired reports, nuclear war experts think might be inevitable. Ex-US Air Force major general and member of the Science and Security Board for the Bulletin of the Atomic Scientists Bob Latiff, for instance, thinks that AI is "like electricity" and is "going to find its way into everything."

The various experts -- scientists, military personnel, and so on -- spoke to Nobel laureates last month at the University of Chicago. And while it seems there might have been an air of determinism about the whole 'AI coming to nukes' thing -- one that's supported by the military seemingly leaning into AI adoption -- that doesn't mean everyone was keen on this future. In fact, judging from what Wired relays, the experts were quick to point out all the risks that could flow from this Manhattan Project-style push.

The first thing to understand is that launching a nuke doesn't come down to the final key-turn alone. That key-turn is the result of what Wired explains are "a hundred little decisions, all of them made by humans." And it's that last part that's key when considering AI. Which of these little decisions could and should AI be allowed to exercise agency over?

Thankfully the bigwigs seem to agree that we need human agency over actual nuclear weapon decisions. But even if AI doesn't have actual agency over decisions in the process, are there problems with relying on its information or suggestions? Director of global risk at the Federation of American Scientists Jon Wolfsthal explains his concerns: "What I worry about is that somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit, or that it will produce data or recommendations that people aren't equipped to understand, and that will lead to bad decisions."

I've already spoken about what I see as utopian AI fanaticism in the new tech elites, and we're certainly seeing the US lean heavily into the AI arms race -- thus the US energy secretary calling it the second Manhattan Project, not to mention this being the Energy Department's official stance. So it's not exactly a far-fetched idea that AI could start to be used to automate parts of the system -- for instance, to produce data or recommendations from the black box of an artificial intelligence.

This problem is also surely exacerbated by a general lack of understanding of AI, and perhaps a misplaced faith in it. Wolfsthal agrees on the first point: "The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is."

If we misunderstand AI as something that is inherently truth-aiming, then we are liable to be unthinkingly misguided by its data or recommendations. AI isn't inherently truth-aiming; humans are. We can try to guide AI in the direction of what we consider to be truthful, but that's coming from us, not the AI.

If we start to feed AI into parts of processes that quite literally hold the keys to the fate of humanity, these are the kinds of things we need to remember. It's good news, at least, that these conversations are taking place between nuclear war and nuclear proliferation experts and those who actually have a hand in how we tackle the problem in the future.
Nuclear war experts discuss the inevitable integration of AI into nuclear weapons systems, highlighting potential risks and the need for human control in decision-making processes.
Nuclear war experts and scientists have recently convened to discuss a pressing concern: the integration of artificial intelligence (AI) into nuclear weapons systems. This development is seen as inevitable, with experts comparing AI's pervasiveness to that of electricity [1]. The consensus among these specialists is that AI will soon be incorporated into the world's most dangerous weapons systems, though the exact implications remain uncertain [1][2].
Bob Latiff, a retired US Air Force major general and member of the Bulletin of the Atomic Scientists' Science and Security Board, stated that AI is "going to find its way into everything" [1]. This sentiment was echoed by other experts at a recent gathering of Nobel laureates at the University of Chicago, where nuclear war specialists discussed the potential impacts of AI on nuclear weapons [1][3].
Jon Wolfsthal, director of global risk at the Federation of American Scientists, highlighted a significant challenge in these discussions: "The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is" [1]. This lack of clear understanding complicates the assessment of AI's role in nuclear systems.
Despite the anticipated integration of AI, experts unanimously agree on the necessity of maintaining human control over nuclear weapon decision-making [1][2]. Wolfsthal emphasized, "In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking" [2]. This stance is crucial given the potential risks associated with automated systems in such high-stakes scenarios.
The integration of AI into nuclear systems raises several concerns. While experts dismiss the notion of language models like ChatGPT directly controlling nuclear codes, there are discussions about using AI for data analysis and predictive modeling in nuclear strategy [1]. Some propose using large language models to predict the behavior of world leaders, though the reliability and ethics of such applications remain questionable [1].
The integration of AI into nuclear systems is part of a broader trend of AI adoption in military and government sectors. The U.S. Department of Energy has declared AI the "next Manhattan Project," highlighting its perceived importance in national security [2]. This push towards AI integration is occurring despite warnings from some experts about the technology's current limitations and potential risks [2][3].
As AI continues to advance and integrate into critical systems, the nuclear weapons field faces a future filled with both potential benefits and significant risks. The ongoing discussions among experts underscore the importance of maintaining human oversight, understanding AI's limitations, and carefully considering the implications of integrating this technology into systems that could determine the fate of humanity.
Summarized by Navi