3 Sources
[1]
Senator introduces bill to draw red lines to limit AI use by military
Senator Elissa Slotkin, a Michigan Democrat on the Armed Services Committee, has introduced a bill in the Senate to regulate the Pentagon's use of AI, an opening salvo in how Congress might address the military's use of the technology. The bill, introduced Tuesday, seeks to codify two existing Defense Department guidelines into law: that AI cannot autonomously decide to kill a target, and that the technology cannot be used to help the military conduct mass surveillance on Americans. It also bans the use of the technology for launching or detonating a nuclear weapon.

"We're unhealthy as a political system, and so we focus more on things like Greenland than we do on the use of AI in matters of legal force. And it's our responsibility to legislate this," Slotkin told NBC News.

The first two tenets of the bill were at the center of the U.S. military's acrimonious split with AI giant Anthropic in recent weeks. While the Pentagon has insisted that it already regards conducting mass surveillance of Americans as illegal and that its policy mandates that a human be responsible for lethal decisions, Anthropic worried that loopholes could allow for that surveillance anyway and that future administrations could revoke those guidelines. The feud boiled over into President Donald Trump decreeing that all federal agencies have six months to stop using Anthropic and Defense Secretary Pete Hegseth declaring the company a supply chain risk, even though the technology has helped the U.S. identify military targets in its ongoing war with Iran. Anthropic is suing over that designation. Slotkin said her legislation could have headed off that split.
"The Pentagon was able to target Anthropic in this case, and is going to spend the next year and God knows how many millions of dollars ripping out Anthropic from all the classified systems, something that's going to cost the taxpayer an enormous amount of money over a dispute that could have been handled if we just had law," she said in a phone call with NBC News. Slotkin said she introduced the bill, which has no cosponsors, with the aim of helping shape early conversations of the major annual defense spending bill, the National Defense Authorization Act, which is legislated around the end of the year. "Our bill is a neat five pages. This is not an extensive, elaborate thing," she said. "And that is on purpose, because we understand that, like with every tool ever invented, there are some really good uses that help, and there are some really dangerous uses."
[2]
Slotkin proposes legislation to limit Defense Department's use of AI
Paula Wethington is a digital producer at CBS Detroit. She previously held digital content roles at NEWSnet, the Gannett/USA Today network and The Monroe News in Michigan. She is a graduate of the University of South Carolina.

Michigan Sen. Elissa Slotkin has introduced legislation aiming to set limits on how and when the U.S. Department of Defense can use artificial intelligence in military operations. "Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon," said Slotkin, a Democrat. "My bill ensures a human is involved when deadly autonomous weapons are fired, AI cannot be used to spy on the American people, and that a human is on the switch to launch nuclear weapons. AI is going to shape the future of America's national security, and we must win the AI race against China. But to do that, we need action that puts limits on AI in the Department of Defense. This is just common sense."

The bill, known as the AI Guardrails Act, was introduced on Wednesday. Slotkin, who serves on the Armed Services Committee, said the bill is intended to set limits in three areas: banning the Defense Department from firing autonomous weapons to kill without human authorization, banning the use of AI to "spy on Americans," and banning the use of AI to launch nuclear weapons. The concerns outlined in the bill's summary include the insistence that the decision to launch nuclear weapons remain with the president. "Some military command decisions are too risky and too consequential for machines to decide," the bill summary said. Slotkin previously questioned Trump administration nominees on the potential use of AI during a recent Senate Armed Services hearing.
[3]
Slotkin introduces bill limiting Pentagon AI use
Sen. Elissa Slotkin (D-Mich.) moved ahead with efforts to limit the Pentagon's use of artificial intelligence, introducing a bill on Tuesday that would establish guardrails related to autonomous and nuclear weapons. The bill, titled the AI Guardrails Act, would prohibit the Department of Defense from using autonomous weapons to kill without human authorization, and from using AI for domestic mass surveillance and for nuclear weapons launch.

It is the latest pushback from Democrats after the Pentagon cut ties with AI firm Anthropic earlier this month and took the unprecedented step of designating the company a supply chain risk. President Trump also directed federal civilian agencies to immediately stop using Anthropic's products. Slotkin's bill appears to touch on the assurances Anthropic pressed the Pentagon for, including specific restrictions on mass domestic surveillance and fully autonomous lethal weapons. The DOD insisted on an "all lawful purposes" standard, and negotiations fell apart as a result.

"Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon," Slotkin said in a press release Tuesday, adding, "AI is going to shape the future of America's national security and we must win the AI race against China. But to do that, we need action that puts limits on AI in the Department of Defense. This is just common sense."

Slotkin argued her bill is consistent with the Trump administration's AI Action Plan, which calls on the U.S. to "aggressively adopt" AI for the Armed Forces while ensuring it is "secure and reliable." "Militaries must also lay out which decisions must remain under human control regardless of the merits of AI-enabled decision-making," Slotkin's office said in a fact sheet about the bill. "Some military command decisions are too risky and too consequential for machines to decide." Slotkin's Democratic colleague, Sen. Adam Schiff (Calif.),
told The Hill last week that he would introduce legislation in the coming weeks to codify protections around the use of AI in surveillance and warfare. His office has been in touch with industry leaders on the legislation and is also considering its inclusion in the upcoming National Defense Authorization Act, according to the senator's spokesperson. In the House, Rep. Sam Liccardo (D-Calif.) introduced an amendment to the Defense Production Act to prohibit federal agencies from "retaliating" against high-risk technology vendors and developers that try to limit the deployment of their technology "in ways to mitigate the risk to United States citizens." The amendment failed on a party-line vote earlier this month. Anthropic has sued the Trump administration, asking the courts for a temporary halt to the supply chain designation, which is typically reserved for companies of foreign adversaries.
Senator Elissa Slotkin introduced the AI Guardrails Act to regulate military AI use, requiring human authorization for autonomous weapons and banning AI-enabled mass surveillance. The five-page bill aims to codify existing Pentagon guidelines into law following the costly split with Anthropic that will require millions in taxpayer dollars to unwind.
Senator Elissa Slotkin, a Michigan Democrat serving on the Armed Services Committee, has introduced legislation designed to establish clear boundaries for the Pentagon's use of AI [1]. The AI Guardrails Act, introduced Tuesday, seeks to codify existing Department of Defense guidelines into enforceable law, addressing three critical areas where human oversight must remain non-negotiable [2].
Source: The Hill
The five-page bill prohibits the Defense Department from using autonomous weapons to kill without human authorization, bans AI-enabled mass surveillance of Americans, and ensures human control over nuclear weapons launches [3]. "Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon," Slotkin said, emphasizing that some military command decisions are "too risky and too consequential for machines to decide" [2].

The bill emerges directly from the Pentagon's acrimonious split with AI giant Anthropic, a conflict that exposed the vulnerability of relying on administrative guidelines rather than statutory guardrails [1]. While the Pentagon insisted it already regarded mass surveillance of Americans as illegal and maintained policies requiring human responsibility for lethal decisions, Anthropic worried that loopholes could enable surveillance anyway and that future administrations could revoke those guidelines [1].

The dispute escalated when President Donald Trump decreed that all federal agencies have six months to stop using Anthropic, and Defense Secretary Pete Hegseth declared the company a supply chain risk, despite the technology having helped the U.S. identify military targets in its ongoing conflict with Iran [1]. Anthropic has since sued over that designation [3].
Source: CBS
"The Pentagon was able to target Anthropic in this case, and is going to spend the next year and God knows how many millions of dollars ripping out Anthropic from all the classified systems, something that's going to cost the taxpayer an enormous amount of money over a dispute that could have been handled if we just had law," Slotkin told NBC News [1].

Slotkin's effort to limit the Defense Department's use of AI attempts to thread a delicate needle between establishing necessary restrictions and maintaining competitive advantage. "AI is going to shape the future of America's national security, and we must win the AI race against China. But to do that, we need action that puts limits on AI in the Department of Defense. This is just common sense," the senator stated [2].
Source: NBC
The legislation aligns with the Trump administration's AI Action Plan, which calls on the U.S. to "aggressively adopt" AI in military operations while ensuring it remains "secure and reliable" [3]. Slotkin deliberately kept the bill concise at five pages, "because we understand that, like with every tool ever invented, there are some really good uses that help, and there are some really dangerous uses" [1].

The bill represents the opening move in what is likely to become broader congressional action on AI in military operations. Introduced without cosponsors, it aims to shape early conversations around the National Defense Authorization Act, the major annual defense spending bill typically legislated toward year's end [1].

Other Democrats are pursuing parallel efforts. Senator Adam Schiff of California told The Hill he would introduce legislation in the coming weeks to codify protections around AI use in surveillance and warfare, with his office consulting industry leaders and considering inclusion in the upcoming National Defense Authorization Act [3]. In the House, Representative Sam Liccardo introduced an amendment to prohibit federal agencies from retaliating against technology vendors that try to limit deployment of their products to mitigate risks to U.S. citizens, though it failed on a party-line vote [3].

Slotkin's critique of Congress cuts deep: "We're unhealthy as a political system, and so we focus more on things like Greenland than we do on the use of AI in matters of legal force. And it's our responsibility to legislate this" [1]. Whether this legislation gains traction will signal how seriously lawmakers take the challenge of establishing clear rules for military AI before technological capabilities outpace democratic oversight.

Summarized by Navi