CISA's Acting Director Uploaded Sensitive Files to ChatGPT, Triggering Federal Investigation


Dr. Madhu Gottumukkala, acting director of CISA, uploaded sensitive government documents into the public version of ChatGPT last summer, triggering automated security alerts. The incident has sparked a Department of Homeland Security investigation into potential data exposure and raises critical questions about federal AI governance and the human element in cybersecurity.

Acting Director of CISA Triggers Security Breach with ChatGPT Uploads

Dr. Madhu Gottumukkala, the acting director of CISA, uploaded sensitive government documents into the public version of ChatGPT during summer 2025, setting off internal security alarms across the Department of Homeland Security (DHS). The data exposure incident, first reported by POLITICO, involved contracting documents marked "For Official Use Only" (FOUO) that were not intended for public release [1]. While none of the files were classified, the breach is particularly concerning given that CISA oversees federal cyber defense against foreign adversaries including Russia and China.

Source: Interesting Engineering

Automated cybersecurity sensors flagged the activity in August, with multiple alerts triggered during the first week alone. Four DHS officials, speaking anonymously for fear of retaliation, confirmed that Gottumukkala had requested special approval to use ChatGPT shortly after joining CISA in May, at a time when DHS blocked the tool for most employees [1]. "He forced CISA's hand into making them give him ChatGPT, and then he abused it," one official stated.

Why Public Large Language Models Pose Critical Security Risks

The technical implications of this ChatGPT leak extend far beyond a simple policy violation. When users interact with the public version of ChatGPT, uploaded data becomes accessible to OpenAI and may be used to train future models. With more than 700 million active users on the platform, any sensitive information fed into the system effectively leaves the federal perimeter and lands on third-party servers with no zero-retention guarantee [2].

This stands in stark contrast to approved AI tools within federal AI governance frameworks. DHSChat, the Department of Homeland Security's internal chatbot, operates with data sovereignty protections that prevent information from leaving federal networks. The sandboxed instance enforces zero-training policies, meaning prompts are never used to train base models, and includes comprehensive audit logging for compliance purposes [2]. The question remains: why did the acting director of CISA bypass this secure alternative?
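The audit-logging guarantee described for DHSChat can be pictured as a thin proxy layer in front of an in-network model. Below is a minimal, purely illustrative Python sketch; DHSChat's actual architecture is not public, so every name and design choice here is an assumption:

```python
import datetime
import hashlib

# In production this would be an append-only, access-controlled store,
# not an in-memory list. Hypothetical sketch only.
AUDIT_LOG = []

def log_and_forward(user_id: str, prompt: str, model_call) -> str:
    """Record who asked what and when, then forward to an in-network model.

    The prompt is stored as a SHA-256 hash so the audit log itself
    cannot become a second leak vector for sensitive text.
    """
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    })
    # model_call runs inside the protected network; data never leaves it.
    return model_call(prompt)

if __name__ == "__main__":
    reply = log_and_forward("analyst-01", "Summarize this FOUO memo", lambda p: "summary...")
    print(reply)
    print(len(AUDIT_LOG), "audit entries")
```

The key design point this illustrates is that compliance is enforced by the plumbing, not by user discipline: every request is logged before the model ever sees it.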

Source: Digit

Federal Investigation and Damage Assessment Underway

Senior DHS leaders launched an internal review after detecting the uploads, assessing whether the exposure harmed government security. Gottumukkala met with senior DHS officials to review what he uploaded, including discussions with DHS's then-acting general counsel Joseph Mazzara and Chief Information Officer Antoine McCord. CISA officials also briefed Gottumukkala in August, with meetings involving CIO Robert Costello and Chief Counsel Spencer Fisher [1].

DHS policy requires agencies to investigate any exposure of sensitive material and determine whether disciplinary action applies. Options can include retraining, formal warnings, or clearance reviews. However, it remains unclear what the review concluded. CISA disputed parts of the reporting, with Director of Public Affairs Marci McCarthy stating that Gottumukkala "was granted permission to use ChatGPT with DHS controls in place" and that "this use was short-term and limited." She added that he "last used ChatGPT in mid-July 2025 under an authorized temporary exception granted to some employees" [1].

Human Element Vulnerability Exposes Gaps in AI Security Policy

The incident highlights a massive blind spot in federal AI governance: human element vulnerability. Modern cybersecurity operates on Zero Trust principles—never trust, always verify. Yet CISA's own automated security sensors, designed to detect Shadow AI and the exfiltration of sensitive data, were triggered by the agency's most senior political official [2]. The irony is striking: the person responsible for overseeing the nation's response to such alerts was the one triggering them.
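Sensors of the kind that flagged these uploads are typically data-loss-prevention (DLP) rules that inspect outbound traffic for classification markings and unsanctioned AI destinations. The following Python sketch is purely illustrative; the marking patterns and domain list are assumptions for demonstration, not CISA's actual rule set:

```python
import re

# Hypothetical examples; real DLP rule sets are far more extensive.
SENSITIVE_MARKINGS = [r"\bFOUO\b", r"FOR OFFICIAL USE ONLY", r"\bCUI\b"]
UNSANCTIONED_AI_DOMAINS = {"chatgpt.com", "chat.openai.com"}

def flag_outbound_request(destination_host: str, payload: str) -> list:
    """Return a list of alert reasons for an outbound request (empty if clean)."""
    alerts = []
    if destination_host in UNSANCTIONED_AI_DOMAINS:
        alerts.append("shadow-AI destination: " + destination_host)
    for pattern in SENSITIVE_MARKINGS:
        if re.search(pattern, payload, re.IGNORECASE):
            alerts.append("sensitive marking matched: " + pattern)
    return alerts

if __name__ == "__main__":
    for reason in flag_outbound_request(
        "chatgpt.com",
        "Contract draft -- FOR OFFICIAL USE ONLY -- internal infrastructure details",
    ):
        print(reason)
```

A single upload of an FOUO-marked document to an unsanctioned domain would trip two rules at once, which is consistent with the report of multiple alerts firing in the first week.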

For the tech industry and government agencies alike, this serves as a critical case study. If the head of the world's most sophisticated cyber defense agency can fall victim to the convenience of public AI tools, it demonstrates that AI security policy is only as strong as its most privileged user. The DHS-led damage assessment now focuses on determining whether FOUO material, which included internal CISA infrastructure details, has been permanently ingested into OpenAI's training data [2].

Growing Turmoil at Cybersecurity's Highest Levels

This ChatGPT leak adds to mounting turmoil inside CISA under Gottumukkala's leadership. Six career staff members went on leave after he failed a counterintelligence polygraph he requested, which DHS later described as "unsanctioned." When asked about the failure during congressional testimony, Gottumukkala said he did not "accept the premise of that characterization." Last week, he also attempted to remove CISA CIO Costello, though other political appointees intervened and blocked the move [1].

Gottumukkala has led CISA in an acting role since May. President Donald Trump's nominee to permanently lead the agency, Sean Plankey, remains unconfirmed. As federal agencies rush to harness AI and other cutting-edge technologies, this incident underscores the urgent need for robust governance frameworks that account for insider risk, regardless of rank or privilege. The short-term implications involve potential disciplinary action and policy refinement, while long-term consequences may reshape how federal agencies approach AI tool approval and monitoring for senior officials.
