OpenAI Codex vulnerability exposed GitHub tokens to theft via command injection attacks

Reviewed by Nidhi Govil


A critical command injection vulnerability in OpenAI's Codex coding agent allowed attackers to steal sensitive GitHub authentication tokens by manipulating branch names. The flaw, discovered by BeyondTrust's Phantom Labs, could have enabled unauthorized access across repositories and scaled attacks targeting multiple developers. OpenAI has since patched the issue with improved input validation and stronger security controls.

Critical Command Injection Flaw Discovered in OpenAI Codex

A critical command injection vulnerability in OpenAI's Codex coding agent created a pathway for attackers to steal sensitive GitHub authentication tokens, according to findings from Phantom Labs, the research arm of BeyondTrust [2]. The OpenAI Codex vulnerability stemmed from improper input sanitization when processing GitHub branch names during task execution, allowing malicious actors to inject arbitrary shell commands into container environments where code repositories are cloned and authenticated [1].

Codex, offered as part of ChatGPT, enables developers to interact directly with code repositories through prompts that trigger automated tasks like code generation, reviews, and pull requests. These tasks run inside managed container environments that authenticate using short-lived GitHub OAuth tokens. The vulnerability occurred when manipulated branch names during task creation allowed attackers to execute code within these containers and extract the OAuth tokens used for repository access [2].
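The injection pattern described above can be illustrated with a minimal, hypothetical sketch (not OpenAI's actual code): a task runner that splices an attacker-controlled branch name directly into a shell string, which is the classic precondition for command injection.

```python
import subprocess

# Hypothetical sketch of the vulnerable pattern (not OpenAI's code):
# an attacker-controlled branch name is spliced into a shell string.
def run_task_unsafe(branch: str) -> str:
    # shell=True hands the whole string to /bin/sh, so shell
    # metacharacters inside `branch` become live syntax.
    result = subprocess.run(
        f"echo cloning branch {branch}",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

# A branch name carrying a payload: the `;` terminates the echo
# command, and `id -u` runs as a second command in the container.
output = run_task_unsafe("main; id -u")
print(output)  # line 1: "cloning branch main"; line 2: the uid emitted by the injected `id -u`
```

In the real attack, the injected command would read the container's GitHub OAuth token rather than run a harmless `id -u`.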

Source: SiliconANGLE

GitHub Token Theft Creates Lateral Movement Risks

With access to compromised GitHub OAuth tokens, attackers could potentially move laterally within GitHub, a risk that is particularly dangerous in enterprise environments where Codex is granted broad permissions across repositories and developer workflows [2]. BeyondTrust researcher Tyler Jespersen explained that "the vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter" [1].

The security flaw extended beyond Codex's web interface to its command-line interface, software development kit, and integrated development environment integrations, where locally stored authentication credentials allowed the attack to be reproduced through backend application programming interfaces [2]. Researchers demonstrated that by embedding malicious payloads directly into GitHub branch names, an attacker with repository access could compromise multiple users interacting with the same project, enabling scaled attacks across organizations.

OpenAI Implements Improved Input Validation and Security Controls

OpenAI has addressed the vulnerability through coordinated fixes, including improved input validation, stronger shell-escaping protections, and tighter controls around token exposure within container environments [2]. The company also implemented additional measures to limit token scope and lifetime during task execution. There is no evidence the flaw was exploited maliciously before being patched [1].
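The remediations listed, input validation and shell escaping, correspond to standard defenses. As a hedged sketch using a hypothetical task runner (none of these names come from OpenAI's code): pass arguments as a vector so no shell ever parses the branch name, and use `shlex.quote` where a shell string is unavoidable.

```python
import shlex
import subprocess

# Hypothetical hardened variant: the branch name travels as a single
# argv element, so the shell never parses it and `;` stays inert.
def run_task_safe(branch: str) -> str:
    result = subprocess.run(
        ["echo", "cloning branch", branch],
        capture_output=True, text=True,
    )
    return result.stdout

payload = "main; id -u"
print(run_task_safe(payload).strip())  # cloning branch main; id -u  (payload stays inert text)
print(shlex.quote(payload))            # 'main; id -u'  (safely escaped for /bin/sh)
```

The design point is that escaping is a fallback; avoiding shell parsing entirely, as the argument-vector form does, removes the injection surface rather than patching around it.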

Broader Security Concerns for AI Coding Agents

The discovery coincides with separate findings from Check Point revealing a ChatGPT data exfiltration vulnerability that OpenAI patched on February 20, 2026. That flaw exploited a hidden DNS-based communication path in the Linux runtime to bypass AI guardrails and exfiltrate conversation data without user awareness.

Source: Hacker News

"AI coding agents are not just productivity tools. They are live execution environments with access to sensitive credentials and organizational resources," the Phantom Labs report concludes

2

. The researchers emphasize that as AI agents become more deeply integrated into developer workflows, the security in AI coding agents and the container environments they operate in must be treated with the same rigor as any other application security boundary. The attack surface is expanding as these tools handle increasingly sensitive operations across code repositories and enterprise systems.

For organizations deploying AI coding agents, the vulnerability highlights the need for independent security layers beyond native controls. Cybersecurity experts recommend monitoring for unauthorized access within GitHub, implementing strict input sanitization practices, and maintaining visibility into how AI tools interact with authentication systems and code repositories. As AI guardrails continue to evolve, the security architecture surrounding these platforms requires continuous assessment to prevent data exfiltration and credential compromise at scale.
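One concrete form the recommended input sanitization could take is allowlist validation of branch names before they reach any execution context. The pattern and function name below are illustrative assumptions, not drawn from any vendor's implementation:

```python
import re

# Conservative allowlist: letters, digits, and . _ / - in the middle,
# alphanumeric at both ends. Git permits more characters than this;
# a tight allowlist is the defensive choice for untrusted input.
BRANCH_RE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9._/-]*[A-Za-z0-9])?$")

def is_safe_branch(name: str) -> bool:
    # Reject ".." separately: git forbids it in ref names, and it can
    # double as a path-traversal primitive elsewhere.
    return bool(BRANCH_RE.fullmatch(name)) and ".." not in name

print(is_safe_branch("feature/login-fix"))  # True
print(is_safe_branch("main; id -u"))        # False: shell metacharacters rejected
print(is_safe_branch("release..v2"))        # False: git forbids ".."
```

Rejecting everything outside a narrow allowlist, rather than blocklisting known-bad characters, is the safer posture because it fails closed against metacharacters the author did not anticipate.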
