6 Sources
[1]
Elizabeth Warren calls Pentagon's decision to bar Anthropic 'retaliation' | TechCrunch
Anthropic is attracting an increasing number of supporters in its fight against the U.S. Department of Defense, which last month designated the AI lab as a supply-chain risk after it refused to make concessions on how its AI could be used by the military. In a letter to Defense Secretary Pete Hegseth, U.S. Senator Elizabeth Warren (D-MA) equated the DoD's decision with "retaliation," arguing that the Pentagon could simply have terminated its contract with the AI lab, CNBC reports. "I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards," Warren wrote, per the report, adding that the barring of Anthropic "appears to be retaliation." Warren's words echo many other organizations that have spoken out against the Defense Department's treatment of Anthropic. Several tech companies and employees -- including from OpenAI, Google and Microsoft -- as well as legal rights groups, have filed amicus briefs supporting Anthropic and denouncing the designation, which is usually applied to foreign adversaries, not U.S. firms. The dispute arose after Anthropic told the Pentagon that it did not want its AI systems to be used for mass surveillance of Americans, and that the technology wasn't ready for use in targeting or firing decisions of lethal autonomous weapons without human intervention. The Pentagon countered that a private company shouldn't dictate how the military uses technology, and soon after designated the company as a "supply-chain risk." The label requires any company or agency that does work with the Pentagon to certify that it doesn't use the designated company's products or services -- effectively barring Anthropic from working with any company that also works with the U.S. government.
The letter from Warren comes a day before a hearing in San Francisco on Tuesday, when District Judge Rita Lin will decide whether to grant Anthropic a preliminary injunction that seeks to preserve the status quo while its case against the DoD is litigated. While Anthropic is suing the DoD for infringing on its First Amendment rights and punishing the company on ideological grounds, the Defense Department has maintained that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call rather than punishment for the company's views. The AI lab last week submitted two declarations to the court arguing that the government's logic is flawed because it rests on technical misunderstandings and on concerns that were never raised during the company's negotiations with the DoD. Warren has also written to OpenAI CEO Sam Altman, asking for details of the company's agreement with the DoD, which came just a day after the Pentagon blacklisted Anthropic. Anthropic and the Defense Department did not immediately respond to requests for comment.
[2]
Warren presses Pentagon over decision to grant xAI access to classified networks | TechCrunch
Sen. Elizabeth Warren (D-MA) sent a letter to Defense Secretary Pete Hegseth on Monday expressing concern over the Pentagon's reported decision to give Elon Musk's company xAI access to classified networks. "Grok, the controversial AI model developed by xAI, has provided disturbing outputs for users, including giving users 'advice on how to commit murders and terrorist attacks,' generating antisemitic content, and creating child sexual abuse material," the letter reads. Warren said Grok's "apparent lack of adequate guardrails" could pose "serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems." She demanded Hegseth provide information on how the Department of Defense plans to "mitigate these potential national security risks." Warren isn't the first to express alarm at Grok, xAI's controversial chatbot, gaining access to classified systems. Last month, a coalition of nonprofits urged the government to immediately suspend the deployment of Grok in federal agencies, including the DoD, after X users repeatedly prompted the chatbot to turn real photos of women, and in some cases children, into sexualized images without their consent. The same day Warren sent her letter, a class action lawsuit was filed against xAI alleging Grok had generated sexual content from real images of the plaintiffs as minors. The letter comes in the aftermath of the Pentagon's decision to label Anthropic a supply chain risk after the AI firm refused to give the military unrestricted access to its AI systems. Anthropic had been, until recently, the only AI company with classified-ready systems. In the midst of that conflict, the DoD signed an agreement with OpenAI as well as xAI to use the two companies' AI systems in classified networks, according to Axios. A senior Pentagon official confirmed that Grok was onboarded to be used in a classified setting, but is not yet being used. 
"It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems," Warren writes. Warren requested a copy of the deal reportedly reached between the DoD and xAI on the use of Grok in classified systems and an explanation of how the department plans to ensure Grok is not exposed to cyberattacks and will "not leak sensitive or classified military information." (Last week, a former employee of Musk's Department of Government Efficiency reportedly stole Americans' personal data from the Social Security Administration and stored it on a thumb drive -- the latest accusation of DOGE-related data leakage.) Chief Pentagon spokesperson Sean Parnell said the department "looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future." GenAI.mil is the military's secure enterprise platform for generative AI that gives DoD workers access to large language models (LLMs) and other AI tools within government-approved cloud environments. It is designed primarily for non-classified tasks like research, document drafting, and data analysis.
[3]
What If Elon Musk's Grok Leaks Classified Info? Elizabeth Warren Is Worried -- The Pentagon Isn't - Decrypt
Grok's history includes lurid deepfake images of minors, antisemitic outputs, and leaked conversations. Senator Elizabeth Warren wants to know how a chatbot that allegedly generated millions of deepfake images -- including compromising images depicting minors -- ended up with keys to the Pentagon's most classified systems. On Sunday, Warren sent a four-page letter to Defense Secretary Pete Hegseth demanding answers about the Department of Defense's decision to give Elon Musk's xAI access to classified military networks, which she said was granted while multiple federal agencies were raising red flags. "I write regarding my concerns about the Department of Defense's (DoD) reported decision to allow Elon Musk's xAI to access classified systems despite concerns raised by multiple federal agencies, including the National Security Agency (NSA) and the General Services Agency (GSA)," Warren wrote. "I am concerned that Grok's apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems," she added, "especially if Grok is given sensitive military information and access to operational systems." The National Security Agency, Warren's letter notes, "conducted a classified review" and "determined Grok had particular security concerns that other models didn't." The General Services Administration raised similar alarms. "Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger," Warren wrote. Neither concern appears to have slowed anything down. "It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems," the letter reads. The timing couldn't be harder to ignore. 
The same day Warren's letter went out, three Tennessee minors filed a federal class action lawsuit against xAI, alleging Grok generated child sexual abuse material based on their real photographs. The complaint accuses xAI of deliberately releasing Grok without industry-standard safeguards, calling it "a business opportunity" to profit from the exploitation of real people, including children. Last week, the Washington Post reported that a Department of Government Efficiency (DOGE) employee under Musk's oversight copied sensitive Social Security Administration data records on hundreds of millions of Americans, and intended to use that data at their new tech startup. Warren's letter also cites Grok's history of generating antisemitic content, giving users instructions on how to commit murders and terrorist attacks, and running wild with non-consensual deepfakes despite repeated promises of fixes. Hundreds of thousands of private Grok conversations were also found indexed on Google last August. Government testing showed Grok is more susceptible than competing models to "data poisoning" attacks -- where manipulated data corrupts the system's outputs -- a serious vulnerability for a tool being considered for weapons development and battlefield intelligence. The Pentagon's own Chief of Responsible AI circulated internal memos about these risks and stepped down shortly thereafter. The deal itself came together under unusual circumstances. xAI was reportedly a late addition to the Pentagon's AI contract pool, awarded a deal worth up to $200 million last July. The classified access agreement followed in February, just as the DoD was publicly feuding with Anthropic over safety guardrails. When asked about it, a Pentagon spokesperson told the Wall Street Journal that the department was "excited to have xAI, one of America's national champion frontier AI companies onboard and looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future." 
That context matters. Anthropic had been the only AI company with classified-ready systems, with Claude deployed in real military operations. After Anthropic refused the Pentagon's demand to make Claude available for "all lawful purposes" -- specifically pushing back on autonomous weapons and mass domestic surveillance -- the DoD labeled the company a supply chain risk. xAI and OpenAI were announced as replacements. There are no records of xAI questioning the reach of the "all lawful purposes" standard. OpenAI was more diplomatic about it, establishing some boundaries on a server level. Warren is asking Hegseth to respond by March 30 with the full text of the xAI agreement, all internal communications about the deal, and answers on whether any testing or evaluation took place before access was granted. One of her 10 questions asks directly whether safeguards exist to ensure Grok does not cause "erroneous targeting decisions" if deployed in critical operational systems.
[4]
Warren demands Hegseth share information about xAI's access to classified networks
Sen. Elizabeth Warren, D-Mass., sent a letter to Defense Secretary Pete Hegseth on Monday demanding information about the Pentagon's reported decision to give Elon Musk's xAI access to classified networks. In the letter, Warren raised concerns that xAI and its AI-powered chatbot, Grok, could present critical safety and reliability risks to America's military. Warren cited the possibility that Grok could leak classified information to adversaries, be manipulated based on biased or inaccurate data, or lack critical safety controls and put service members in danger. "I am concerned that Grok's apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems," Warren wrote in the letter, seen by NBC News, "especially if Grok is given sensitive military information and access to operational systems." "It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems," Warren wrote. In late February, the Pentagon and xAI reached a deal that would pave the way for xAI's systems to be used on classified networks, according to Axios, citing an anonymous Defense Department official. The announcement came in the middle of the Pentagon's rupture with rival AI company Anthropic, which had insisted on stronger guarantees that the Pentagon would not use its AI systems for domestic surveillance or direct use in deadly weapons. It is unclear whether xAI's reported deal with the Pentagon allows for a wider range of uses than Anthropic's proposed contract. In her letter, Warren requested a full copy of the agreement reportedly reached between the Defense Department and xAI. 
In July, xAI received a contract worth up to $200 million from the Pentagon to develop new AI applications for the Defense Department. In a statement, the military's Chief Digital and Artificial Intelligence Office said the deal would "broaden DoD use of and experience in frontier AI capabilities and increase the ability of these companies to understand and address critical national security needs." In recent months, xAI has come under intense scrutiny at several government agencies, according to The Wall Street Journal, due to concerns about the safety and reliability of its Grok services. The Journal reported that the Pentagon had previously raised concerns about Grok and "questioned whether it was aligned with government ethics and standards." In late December and early January, xAI received widespread attention for Grok's ability to manipulate images of people, primarily women and children, to remove subjects' clothing. The phenomenon prompted California's Attorney General Rob Bonta to launch an investigation into the company, while countries including Indonesia and Malaysia banned Grok over the sexualized photos. Grok has been framed by Musk as being more permissive and less "woke" than AI models from competitors like Anthropic, OpenAI and Google. These lower guardrails have led the chatbot to generate controversial or incendiary outputs at times, for example churning out antisemitic posts after a technical update in July. Similar comments from Grok in November prompted the French government to open an inquiry into xAI. In February, the European Union's data privacy office launched an investigation into X, the social media platform closely associated with Grok and xAI, over concerns that X was spreading sexualized AI-generated images of women and children. Musk called similar moves in February from the Paris prosecutor's office "a political attack." 
Warren's letter asked Hegseth to share how the Defense Department plans to mitigate the risks of inaccurate responses and other safety shortcomings, along with all communications that led to the reported agreement.
[5]
Warren demands information on xAI access to classified Pentagon networks
Sen. Elizabeth Warren (D-Mass.) is demanding more information from Defense Secretary Pete Hegseth on xAI's access to Pentagon classified networks. She also asked him to address worries that the company's chatbot, Grok, could pose reliability concerns for the U.S. military. Warren, who sits on the Senate Armed Services Committee, said in a Sunday letter that she was concerned that Grok could present safety risks to armed forces, citing the possibility of the chatbot leaking classified information. "Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger," she wrote in the four-page letter. "It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems." Chief Pentagon spokesperson Sean Parnell said no single prompt is used "to define the operational capability and wanting to work with the government." "xAI's Grok remains a competitive frontier model -- no media spin can change that. The Department is excited to have xAI, one of America's national champion frontier AI companies onboard and looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future," he said. The DOD and xAI struck a deal last month to bring xAI's models to the military's classified systems, an agreement that came as the Pentagon was publicly feuding with Anthropic, which demanded that Claude, its AI model, not be used for firing autonomous weapons or for domestic surveillance. Warren, in her letter, asked for a full copy of the agreement between xAI and the Pentagon.
Last month, The Wall Street Journal reported that officials in multiple government agencies, including the National Security Agency and General Services Administration, raised concerns over Grok's safety and reliability. Musk's xAI got a nearly $200 million Pentagon contract last July to develop an AI application for the DOD. "What, if any, safeguards are in place, both in the agreement and within the Department writ large, to ensure that Grok is not exposed to cyberattacks, including data poisoning attacks, that could compromise its outputs," Warren asked in the letter.
[6]
Elizabeth Warren Grills Pentagon over Anthropic Snub and OpenAI Deal
US Senator Elizabeth Warren has raised fresh concerns about the Department of Defense's decision to treat AI firm Anthropic as a potential supply chain risk, a move that effectively blocks the company from securing new military contracts. In letters sent to Defence Secretary Pete Hegseth and OpenAI chief Sam Altman, Warren questioned the basis of the designation and sought clarity on whether policy disagreements over AI safety influenced the decision. Reports suggest tensions escalated after Anthropic refused to relax certain safeguards on the use of its AI systems in sensitive defence contexts. The company has advocated limits on applications such as mass surveillance and fully autonomous lethal weapons. Negotiations with defence officials reportedly stalled, prompting the department to begin phasing out the firm's tools.
Senator Elizabeth Warren is pressing Defense Secretary Pete Hegseth for details on the Pentagon's decision to grant Elon Musk's xAI access to classified military networks. Her concerns center on Grok's history of generating disturbing content and potential national security risks, coming as the Defense Department publicly feuds with Anthropic over AI safety guardrails.
Senator Elizabeth Warren has sent a pointed letter to Defense Secretary Pete Hegseth demanding comprehensive information about the Pentagon's decision to grant Elon Musk's xAI access to classified networks [1][4]. The Massachusetts Democrat, who sits on the Senate Armed Services Committee, raised alarm about potential security risks posed by xAI's Grok AI model, citing its controversial history of generating disturbing outputs, including instructions on committing murders and terrorist attacks, antisemitic content, and child sexual abuse material [2].
"I am concerned that Grok's apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems," Warren wrote in her four-page letter [4]. The senator specifically questioned whether the Defense Department had properly evaluated xAI's security safeguards, data-handling practices, and safety controls before granting access to sensitive military information.
Warren's letter reveals that multiple federal agencies, including the National Security Agency and the General Services Administration, had raised concerns about Grok's reliability before the Pentagon moved forward with the agreement [3]. The NSA conducted a classified review and "determined Grok had particular security concerns that other models didn't," according to the senator's correspondence [3]. Government testing showed Grok is more susceptible than competing models to data poisoning attacks, where manipulated data corrupts the system's outputs, a serious vulnerability for a tool being considered for weapons development and battlefield intelligence.
"Were Grok to leak government information, this could reveal sensitive military plans, U.S. intelligence efforts, and potentially put service members in danger," Warren warned [5]. The timing of her inquiry carries additional weight, as it arrived the same day three Tennessee minors filed a federal class action lawsuit against xAI, alleging Grok generated child sexual abuse material based on their real photographs [3].
The Pentagon and xAI reached a deal in late February that would enable xAI's systems to be used on classified networks, according to reports [4]. This agreement surfaced during the Pentagon's public rupture with Anthropic, which had insisted on stronger guarantees that the Defense Department would not use its AI systems for domestic surveillance or direct deployment in autonomous weapons [4]. The Defense Department subsequently designated Anthropic as a supply-chain risk after the AI lab refused to make concessions on how its technology could be used by the military [1].
Warren characterized the Pentagon's treatment of Anthropic as "retaliation," arguing in a separate letter that "the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards" [1]. Anthropic had been the only AI company with classified-ready systems before the controversy, with its Claude model deployed in real military operations. Following the designation, both OpenAI and xAI were announced as replacements [3].
xAI received a contract worth up to $200 million from the Pentagon in July to develop new AI applications for the Defense Department [4]. However, Warren's letter highlights the absence of public information about what assurances xAI provided regarding Grok's capabilities and limitations. "It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok's security safeguards, data-handling practices, or safety controls, and whether DoD has evaluated those assurances before reportedly allowing Grok access to classified systems," the senator wrote [2].
There are no records of xAI questioning the reach of the Pentagon's "all lawful purposes" standard, which Anthropic had contested [3]. Warren requested a full copy of the agreement between xAI and the Pentagon, along with all internal communications about the deal, asking whether any testing or evaluation took place before access was granted [3]. One of her ten questions asks directly whether safeguards exist to ensure Grok does not cause "erroneous targeting decisions" if deployed in critical operational systems.
Chief Pentagon spokesperson Sean Parnell defended the decision, stating the department "looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future." GenAI.mil is the military's secure enterprise platform for generative AI that gives Defense Department workers access to large language models and other AI tools within government-approved cloud environments, designed primarily for non-classified tasks like research, document drafting, and data analysis. A senior Pentagon official confirmed that Grok was onboarded to be used in a classified setting but is not yet being deployed [2].
Warren has requested responses by March 30, as concerns mount about the leakage of classified information and the broader implications of AI deployment in military contexts without transparent oversight. The controversy highlights growing tensions between rapid AI adoption for national security purposes and the need for robust ethical considerations around surveillance, autonomous weapons, and military personnel safety.
Summarized by Navi