OpenAI deploys custom ChatGPT to Pentagon platform as experts flag security concerns


OpenAI has deployed a custom version of ChatGPT on the Pentagon's AI platform GenAI.mil, joining Google's Gemini and xAI's Grok. While approved for unclassified use with built-in safeguards, experts warn that user over-reliance on AI systems poses security risks, particularly when military personnel mistake these tools for secure vaults.

OpenAI Brings ChatGPT to Pentagon's AI Platform

OpenAI announced Monday that it is deploying a custom version of ChatGPT on GenAI.mil, the Pentagon's AI platform developed by the U.S. Department of Defense [1]. The move marks a significant expansion of commercial AI adoption within military networks, positioning ChatGPT alongside other generative AI models already available to defense personnel, including Google's Gemini and Grok, the AI system developed by xAI [1]. The deployment follows a contract for up to $200 million that OpenAI secured with the Defense Department's Chief Digital and Artificial Intelligence Office last July [2]. Similar contracts were also awarded to Anthropic, Google, and xAI as the Pentagon accelerates its push to integrate advanced AI tools across its operations [2].

Source: The Hill

Safeguards and Infrastructure for Military Use

The custom ChatGPT version will run on secure government cloud infrastructure with built-in safety controls and is approved exclusively for unclassified use [2]. According to OpenAI, the system incorporates safeguards at both the model and platform level to protect sensitive data [2]. The company emphasized that it supports "all lawful uses" and believes those responsible for defending the country should have access to the best tools available [1]. Defense Secretary Pete Hegseth indicated in January that the department plans to deploy leading AI models across both unclassified and classified military networks, signaling broader integration ahead [1].

Experts Warn of Human Error and User Over-Reliance

Despite the technical safeguards, critics have raised concerns about risks stemming from human error and user over-reliance on these technologies. J.B. Branch, Big Tech Accountability Advocate at Public Citizen, warned that people tend to give large language models the benefit of the doubt, which becomes particularly problematic in high-impact military applications [1]. "Research shows that when people use these large language models, they tend to give them the benefit of the doubt," Branch told Decrypt. "So in high-impact situations like the military, that makes it even more important to ensure they get things correct" [1]. Branch also highlighted a security risk: users may mistake AI tools for secure vaults, potentially exposing sensitive information to adversaries even when systems are isolated within military networks [1].

Broader Context and Future Implications

The deployment reflects a broader trend of AI companies seeking profitability through government contracts while the Pentagon pursues rapid AI integration [1]. Anthropic has reportedly clashed with the Pentagon in recent months over restrictions barring its AI model Claude from being used for domestic surveillance or autonomous lethal operations, highlighting ongoing tensions around AI ethics and military applications [2]. As OpenAI and other tech giants expand their presence on GenAI.mil, observers will be watching how the Pentagon balances the promise of AI-enhanced defense capabilities against the persistent challenges of unclassified-use protocols, data security, and the human tendency to overtrust automated systems in critical decision-making contexts.
