2 Sources
[1]
OpenAI Adds Custom ChatGPT to Pentagon Platform as Expert Warns of Risks - Decrypt
Critics warn that risks from human error and overtrust in AI systems remain.

OpenAI said Monday it is deploying a custom version of ChatGPT on GenAI.mil, the AI platform developed by the U.S. Department of Defense. The move expands the military's access to powerful generative AI models, even as critics warn that user error remains a key security risk. ChatGPT joins a growing list of AI models made available to the U.S. military, including Google's Gemini and Grok, the AI system developed by xAI, which was folded into SpaceX earlier this month.

"We believe the people responsible for defending the country should have access to the best tools available, and it is important for the U.S. and other democratic countries to understand how, with the proper safeguards, AI can help protect people, deter adversaries, and prevent future conflict," OpenAI said in a statement.

OpenAI said the GenAI.mil version of ChatGPT is approved for unclassified Defense Department use and will run inside an authorized government cloud infrastructure. According to OpenAI, the system includes safeguards designed to protect sensitive data. Still, J.B. Branch, Big Tech Accountability Advocate at Public Citizen, warned that user overreliance on AI could undermine those protections. "Research shows that when people use these large language models, they tend to give them the benefit of the doubt," Branch told Decrypt. "So in high-impact situations like the military, that makes it even more important to ensure they get things correct."

The deployment comes as the Pentagon accelerates the adoption of commercial AI across military networks and as AI developers seek profitability. In January, Defense Secretary Pete Hegseth said the department plans to deploy leading AI models across both unclassified and classified military networks.
While OpenAI said the custom version of ChatGPT is meant only for unclassified data, Branch warned that putting any sensitive information into AI systems leaves it vulnerable to adversaries, adding that users often mistake such tools for secure vaults. "Classified information is supposed to only have a certain set of eyes on it," he said. "So even if you have a cut‑off system that's only allowed within the military, that doesn't change the fact that classified data is only meant for a limited subset of people."
[2]
OpenAI opening ChatGPT access to Pentagon
OpenAI announced Monday that it is bringing a custom version of ChatGPT to the Pentagon's AI platform. It joins other major AI companies, including Google and Elon Musk's xAI, on the Defense Department's platform GenAI.mil.

The custom ChatGPT will run on "authorized government cloud infrastructure with built-in safety controls and protections" and is approved for unclassified work, OpenAI said. "We believe the people responsible for defending the country should have access to the best tools available, and it is important for the United States and other democratic countries to understand how, with the proper safeguards, AI can help protect people, deter adversaries, and prevent future conflict," the company said in a press release.

OpenAI previously received a contract for up to $200 million with the Defense Department's Chief Digital and Artificial Intelligence Office last July as the agency sought to boost its AI adoption. Anthropic, Google and xAI also scored similar contracts. Anthropic has reportedly clashed with the Pentagon in recent months over restrictions barring its AI model Claude from being used for domestic surveillance or autonomous lethal operations. OpenAI said in Monday's announcement that its models "incorporate safeguards at the model and platform level" and support "all lawful uses."
OpenAI has deployed a custom version of ChatGPT on the Pentagon's AI platform GenAI.mil, joining Google's Gemini and xAI's Grok. While approved for unclassified use with built-in safeguards, experts warn that user over-reliance on AI systems poses security risks, particularly when military personnel mistake these tools for secure vaults.
OpenAI announced Monday that it is deploying a custom version of ChatGPT on GenAI.mil, the AI platform developed by the U.S. Department of Defense [1]. The move marks a significant expansion of commercial AI adoption within military networks, positioning ChatGPT alongside other generative AI models already available to defense personnel, including Google's Gemini and xAI's Grok [1]. The deployment follows a contract for up to $200 million that OpenAI secured with the Defense Department's Chief Digital and Artificial Intelligence Office last July [2]. Similar contracts were also awarded to Anthropic, Google, and xAI as the Pentagon accelerates its push to integrate advanced AI tools across its operations [2].
Source: The Hill
The custom ChatGPT version will run on authorized government cloud infrastructure with built-in safety controls and is approved exclusively for unclassified use [2]. According to OpenAI, the system incorporates safeguards at both the model and platform level to protect sensitive data [2]. The company emphasized that it supports "all lawful uses" and believes those responsible for defending the country should have access to the best tools available [1]. Defense Secretary Pete Hegseth indicated in January that the department plans to deploy leading AI models across both unclassified and classified military networks, signaling broader integration ahead [1].

Despite the technical safeguards, critics have raised concerns about risks stemming from human error and user over-reliance on these technologies. J.B. Branch, Big Tech Accountability Advocate at Public Citizen, warned that people tend to give large language models the benefit of the doubt, which becomes particularly problematic in high-impact military applications [1]. "Research shows that when people use these large language models, they tend to give them the benefit of the doubt," Branch told Decrypt. "So in high-impact situations like the military, that makes it even more important to ensure they get things correct" [1]. Branch also highlighted a security risk related to users mistaking AI tools for secure vaults, potentially exposing sensitive information to adversaries even when systems are isolated within military networks [1].
The deployment reflects a broader trend of AI companies seeking profitability through government contracts while the Pentagon pursues rapid AI integration [1]. Anthropic has reportedly clashed with the Pentagon in recent months over restrictions barring its AI model Claude from being used for domestic surveillance or autonomous lethal operations, highlighting ongoing tensions around AI ethics in military applications [2]. As OpenAI and other tech giants expand their presence on GenAI.mil, observers will be watching how the Pentagon balances the promise of AI-enhanced defense capabilities against the persistent challenges of unclassified-use protocols, data security, and the human tendency to overtrust automated systems in critical decision-making contexts.