3 Sources
[1]
The Pentagon Wants to Raw Dog the Latest AI Models on Classified Systems
The Pentagon is looking to expand its use of artificial intelligence across both unclassified and classified networks, but negotiations with major AI companies have hit a sticking point. Defense officials want access to the most advanced models without any usage restrictions or heavy guardrails. According to Reuters, military officials argue they should be allowed to deploy AI however they see fit, as long as it complies with U.S. law.

The push comes as OpenAI announced Monday that it has made a customized version of ChatGPT available through the War Department's AI platform, GenAI.mil. The platform, which launched in December, is used by roughly 3 million civilian and military personnel and already includes tailored versions of tools from xAI and Google's Gemini.

"We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm," Secretary of War Pete Hegseth said in a press release about the platform. "AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department."

OpenAI's version of ChatGPT on the platform is designed to help with day-to-day tasks like summarizing policy documents, drafting reports, and assisting with research. But Reuters reports that Pentagon officials are pushing to roll out AI systems across all classification levels, potentially opening the door to more sensitive applications like mission planning or weapons targeting. An unnamed official told Reuters that the Pentagon is "moving to deploy frontier AI capabilities across all classification levels."

Currently, Anthropic's models are available in select classified settings through third-party providers, but with significant usage restrictions. Reuters reports that Anthropic executives have told military officials they do not want their systems used for autonomous weapons targeting or domestic surveillance. Meanwhile, Semafor reports that Anthropic has not agreed to allow its models to be used for "all lawful uses." Its tools are not currently available on GenAI.mil.

The negotiations leave AI companies walking a delicate tightrope. On one side are employees who oppose military use of their systems and fear it will make it hard to recruit future employees. On the other side is the Pentagon, which represents a massive customer and a powerful political force. Semafor reported that Anthropic's stance has "drawn ire from the Pentagon and the White House." At the same time, some OpenAI employees have expressed concerns about giving competitors an advantage by stepping back from defense work, according to Semafor. The Pentagon, OpenAI, Anthropic, Google, and xAI did not immediately respond to requests for comment from Gizmodo.
[2]
Pentagon pushing AI companies to expand on classified networks, sources say
The Pentagon is pushing the top AI companies including OpenAI and Anthropic to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users. During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military is aiming to make the AI models available on both unclassified and classified domains, according to two people familiar with the matter. The Pentagon is "moving to deploy frontier AI capabilities across all classification levels," an official who requested anonymity said.
[3]
Pentagon pushing AI companies to expand on classified networks, sources say
Feb 11 (Reuters) - The Pentagon is pushing the top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users. During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military is aiming to make the AI models available on both unclassified and classified domains, according to two people familiar with the matter. The Pentagon is "moving to deploy frontier AI capabilities across all classification levels," an official who requested anonymity told Reuters.

It is the latest development in ongoing negotiations between the Pentagon and the top generative AI companies over how the U.S. will use AI on a future battlefield that is already dominated by autonomous drone swarms, robots and cyber attacks. Michael's comments are also likely to intensify an already contentious debate over the military's desire to use AI without restrictions and tech companies' ability to set boundaries around how their tools are deployed.

Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Only one AI company - Anthropic - is available in classified settings through third parties, but the government is still bound by the company's usage policies. Classified networks are used to handle a wide range of more sensitive work that can include mission planning or weapons targeting. Reuters could not determine how or when the Pentagon planned to deploy AI chatbots on classified networks.

Military officials are hoping to leverage AI's power to synthesize information to help shape decisions. But while these tools are powerful, they can make mistakes and even make up information that might sound plausible at first glance. Such mistakes in classified settings could have deadly consequences, AI researchers say. AI companies have sought to minimize the downside of their products by building safeguards within their models and asking customers to adhere to certain guidelines. But Pentagon officials have bristled at such restrictions, arguing that they should be able to deploy commercial AI tools as long as they comply with American law.

This week, OpenAI reached a deal with the Pentagon so that the military could use its tools, including ChatGPT, on an unclassified network called GenAI.mil, which has been rolled out to more than 3 million Defense Department employees. As part of the deal, OpenAI agreed to remove many of its typical user restrictions, although some guardrails remain. Alphabet's Google and xAI have previously struck similar deals. In a statement, OpenAI said this week's agreement is specific to unclassified use through GenAI.mil. Expanding on that agreement would require a new or modified agreement, a spokesperson said.

Similar discussions between OpenAI rival Anthropic and the Pentagon have been significantly more contentious, Reuters previously reported. Anthropic executives have told military officials that they do not want their technology used to autonomously target weapons or to conduct U.S. domestic surveillance. Anthropic's products include a chatbot called Claude. "Anthropic is committed to protecting America's lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities," an Anthropic spokesperson said. "Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work."

President Donald Trump has ordered the Department of Defense to rename itself the Department of War, a change that will require action by Congress. (Reporting by David Jeans in New York and Deepa Seetharaman in San Francisco; Editing by Kenneth Li and Matthew Lewis)
The Pentagon is demanding that major AI companies like OpenAI and Anthropic make their most advanced models available on classified military networks without standard usage restrictions. The push has sparked intense negotiations, with Anthropic resisting uses like autonomous weapons targeting while OpenAI recently agreed to deploy ChatGPT on the military's GenAI.mil platform serving 3 million personnel.
The Department of Defense is intensifying pressure on leading artificial intelligence companies to deploy AI tools across classified networks without the guardrails typically applied to commercial users. During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military aims to make advanced AI models available on both unclassified and classified domains, according to sources familiar with the discussions [2]. An unnamed official confirmed that the Pentagon is "moving to deploy frontier AI capabilities across all classification levels" [3].
Military officials argue they should be allowed to deploy artificial intelligence tools however they see fit, as long as such use complies with U.S. law [1]. This stance directly challenges the usage restrictions and ethical boundaries that AI companies have established for their products. The negotiations represent a critical inflection point for how military use of artificial intelligence will evolve on future battlefields already dominated by autonomous drone swarms, robots, and cyber attacks [3].

OpenAI announced Monday that it has made a customized version of ChatGPT available through the Department of Defense's AI platform, GenAI.mil, which launched in December and serves roughly 3 million civilian and military personnel [1]. The platform already includes tailored versions of tools from xAI and Google's Gemini. As part of the deal, OpenAI agreed to remove many of its typical user restrictions, though some guardrails remain [3].
Secretary of War Pete Hegseth stated in a press release: "We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm" [1]. OpenAI's version is designed to help with day-to-day tasks like summarizing policy documents, drafting reports, and assisting with research. However, an OpenAI spokesperson clarified that this week's agreement is specific to unclassified use through GenAI.mil, and expanding to classified systems would require a new or modified agreement [3].

Discussions between Anthropic and Pentagon officials have been significantly more contentious. Anthropic executives have told military officials they do not want their systems used for autonomous weapons targeting or domestic surveillance [1]. The company has not agreed to allow its models to be used for "all lawful uses," and its tools are not currently available on GenAI.mil [1].

Currently, Anthropic's models are available in select classified settings through third-party providers, but with significant usage restrictions [1]. An Anthropic spokesperson stated: "Anthropic is committed to protecting America's lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities. Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work" [3]. Reports indicate that Anthropic's stance has drawn ire from the Pentagon and the White House [1].
The negotiations leave AI companies navigating competing pressures. On one side are employees who oppose military applications and fear difficulty recruiting future talent. On the other is the Pentagon, which represents a massive customer and powerful political force [1]. Some OpenAI employees have expressed concerns about giving competitors an advantage by stepping back from defense work [1].

Classified networks handle sensitive work that can include mission planning or weapons targeting. Military officials hope to leverage AI's power for information synthesis to shape decisions. However, AI researchers warn that these tools can make mistakes and even fabricate plausible-sounding information. Such errors in classified settings could have deadly consequences [3]. AI companies have sought to minimize risks by building safeguards within their models and requiring customers to adhere to certain guidelines, but Pentagon officials have bristled at such restrictions [3]. The outcome of these negotiations will determine whether advanced AI models on military networks operate with or without the ethical boundaries their creators intended.

Summarized by Navi