OpenAI Investigates Claims of 20 Million User Credentials for Sale, No Evidence of Breach Found


OpenAI is investigating claims that a hacker is selling 20 million user credentials but has found no evidence of a breach of its systems. Security experts suggest the data was more likely harvested from users' devices by infostealer malware.

Alleged Hack of OpenAI User Credentials

A hacker claiming to possess login credentials for 20 million OpenAI user accounts has put the data up for sale on a dark web forum. The cybercriminal, known as 'emirking', advertised the dataset as "a goldmine" containing email addresses and passwords [1]. However, OpenAI has stated that its investigation has found no evidence of a compromise of its systems [2].

Doubts About the Legitimacy of the Claim

Security researchers have cast doubt on the authenticity of the alleged breach. Malwarebytes Labs expressed skepticism that such a large number of credentials could have been harvested through phishing operations [1]. Additionally, cybersecurity firm KELA assessed the available data and concluded that the credentials were likely obtained via infostealer malware rather than a direct breach of OpenAI's systems [1].

Potential Source of the Data

KELA's analysis revealed that the compromised logins were related to OpenAI services and contained authentication details for 'auth0.openai.com'. The security firm cross-referenced these details with its own database of compromised accounts, which contains over 4 million records collected in 2024 [1]. This investigation suggests that the credentials may be part of a larger dataset scraped from various sources that sell and share infostealer logs [1].
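To give a sense of how analysts attribute leaked credentials to one service, here is a minimal Python sketch that filters raw credential records for entries pointing at 'auth0.openai.com'. The pipe-separated URL|email|password record format and the sample data are assumptions for illustration only; real infostealer logs vary widely between malware families, and this is not KELA's actual tooling.

```python
# Minimal sketch: filtering a credential dump for records tied to one
# service. Record format URL|email|password is an assumption for
# illustration; real stealer logs differ by malware family.
from urllib.parse import urlparse

TARGET_HOST = "auth0.openai.com"

def entries_for_host(lines, target_host=TARGET_HOST):
    """Yield (url, email) pairs whose URL points at the target host."""
    for line in lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue  # skip malformed records
        url, email, _password = parts
        host = urlparse(url).hostname or ""
        # Exact host match avoids counting lookalike phishing domains
        # such as auth0.openai.com.evil.example.
        if host.lower() == target_host:
            yield url, email

sample = [
    "https://auth0.openai.com/u/login|alice@example.com|hunter2",
    "https://auth0.openai.com.evil.example|bob@example.com|pass123",
    "https://accounts.example.net/login|carol@example.com|qwerty",
]

for url, email in entries_for_host(sample):
    print(email, "->", url)  # only the first sample record matches
```

Note the exact-hostname comparison: a naive substring search for "auth0.openai.com" would also match lookalike domains used in phishing kits, inflating the apparent record count.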

Risks and Potential Consequences

Even if the data wasn't obtained through a direct breach of OpenAI's systems, the leak of user credentials poses significant risks. The primary dangers include:

  1. Social engineering attacks: Attackers could use personal information from AI chatbot interactions to craft convincing phishing attempts [1].
  2. Identity theft: Compromised accounts may contain sensitive personal information that could be exploited [1].
  3. Exposure of sensitive data: OpenAI's services, like ChatGPT, often contain users' private conversations, commercial projects, and other confidential information [3].

Preventive Measures and Security Recommendations

While OpenAI continues its investigation, users are advised to take proactive steps to secure their accounts:

  1. Change passwords: Update your OpenAI account password and any other accounts where you may have used the same or similar passwords [2].
  2. Enable two-factor authentication (2FA): Activate 2FA on your OpenAI account for an additional layer of security [2].
  3. Log out of all devices: This ensures that any potentially compromised sessions are terminated [2].
  4. Stay vigilant: Be cautious of unexpected contacts or suspicious links, and verify the authenticity of any requests for personal information [1].
  5. Monitor accounts: Keep a close eye on financial statements and credit reports for any unusual activity [1].
  6. Use unique passwords: Employ a password manager to create and store strong, unique passwords for each online account [2]; a short sketch of what this looks like in practice follows this list.
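As a concrete illustration of the last two recommendations, the minimal Python sketch below generates a strong random password with the standard-library secrets module and checks whether a given password appears in known breach corpora via the public Have I Been Pwned "Pwned Passwords" range API. The API uses k-anonymity, so only the first five characters of the password's SHA-1 hash ever leave your machine. This is a hedged starting point, not an official OpenAI or HIBP tool.

```python
# Minimal sketch: generate a strong unique password, and check a
# password against the Have I Been Pwned breach corpus without ever
# sending the password (or its full hash) over the network.
import hashlib
import secrets
import string
import urllib.request

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the HIBP breach corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: query by 5-character hash prefix, match locally.
    with urllib.request.urlopen(
        f"https://api.pwnedpasswords.com/range/{prefix}"
    ) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # not found in any known breach

if __name__ == "__main__":
    print("Suggested password:", generate_password())
    print("'password123' seen in breaches:", pwned_count("password123"))
```

A password manager automates both halves of this: it generates a distinct random password per site and warns you when one turns up in a breach, which is what limits the damage when a single service's credentials leak.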

OpenAI's Response and Past Incidents

OpenAI has acknowledged the situation and stated that it is taking the claims seriously. However, the company maintains that there is currently no evidence of a compromise of its systems [3]. This incident follows two previous security issues the company has faced since the public release of ChatGPT: a breach of its internal Slack messaging system and a bug that exposed private data of paying customers [3].
