OpenAI Confirms ChatGPT Abuse by Hackers for Malware and Election Interference

Curated by THEOUTPOST

On Thu, 10 Oct, 12:02 AM UTC

15 Sources

OpenAI reports multiple instances of ChatGPT being used by cybercriminals to create malware, conduct phishing attacks, and attempt to influence elections. The company has disrupted over 20 such operations in 2024.

OpenAI Confirms Abuse of ChatGPT by Cybercriminals

OpenAI, the company behind the popular AI chatbot ChatGPT, has released a report confirming that threat actors are using its AI-powered tool to enhance their malicious cyber operations. The report marks the first official confirmation that mainstream AI tools are being used in offensive cyber activities; OpenAI says it has disrupted more than 20 such operations since the beginning of 2024 12.

Malware Development and Debugging

Several threat groups have been identified using ChatGPT for malware-related activities. Chinese and Iranian hackers were found leveraging the AI tool to debug existing malware, develop new malicious software, and create supporting infrastructure for their operations 12.

One notable case involves the Iranian group Storm-0817, which used ChatGPT to develop custom malware for Android devices. The malware can steal contact lists, call logs, and browser history, access files on infected devices, and obtain precise location data 14.

Phishing and Social Engineering

OpenAI reported that a suspected China-based threat actor, dubbed 'SweetSpecter', targeted the company directly with spear-phishing emails. These emails contained malicious ZIP attachments disguised as support requests, which, if opened, would trigger an infection chain leading to the deployment of the SugarGh0st RAT on the victim's system 14.

Election Interference Attempts

Perhaps most concerning is the use of ChatGPT in attempts to influence elections worldwide. OpenAI has observed multiple instances where its AI models were used to generate fake content, including long-form articles and social media comments, aimed at swaying public opinion during election periods 35.

Specific examples include:

  1. An Israel-based commercial company briefly generated social media comments about elections in India 3.
  2. An operation called "A2Z" focused on Azerbaijan and its neighbors just before the European Parliament elections 3.
  3. Comments generated about elections in France, Italy, Poland, Germany, and the US 3.
  4. Accounts from Rwanda used to generate comments about the country's elections for posting on social media platforms 5.

OpenAI's Response and Industry Implications

OpenAI has taken swift action by banning all accounts associated with these malicious activities and sharing indicators of compromise with cybersecurity partners 1. The company emphasizes that while these incidents don't represent new capabilities in malware development, they demonstrate how AI tools can make offensive operations more efficient for low-skilled actors 12.

OpenAI maintains that the majority of AI-generated social media posts received little to no engagement, and many operations ceased entirely after access to the AI models was blocked 3. However, the company acknowledges the need for continued vigilance and improvement of AI safeguards 2.

Future Concerns and Mitigation Efforts

As 2024 is a significant election year globally, there are growing concerns about the potential misuse of AI to influence public opinion. OpenAI has committed to continued work by its internal safety and security teams, and to sharing its findings with industry peers and the research community to prevent such abuses 23.

The incidents highlight the double-edged nature of AI technology, showcasing both its potential for misuse and its ability to help detect and prevent cyber threats. As AI continues to evolve, it will be crucial for companies like OpenAI to stay ahead of bad actors and implement robust security measures to protect users and maintain the integrity of democratic processes worldwide.

Continue Reading

OpenAI Cracks Down on ChatGPT Misuse: Bans Accounts Linked to Surveillance and Influence Campaigns

OpenAI has banned multiple accounts for misusing ChatGPT in surveillance and influence campaigns, highlighting the ongoing challenge of preventing AI abuse while maintaining its benefits for legitimate users.

15 Sources

OpenAI Blocks Iranian Hackers' ChatGPT Accounts for US Election Interference Attempts

OpenAI has taken action against Iranian hacker groups using ChatGPT to influence the US presidential elections. The company has blocked several accounts and is working to prevent further misuse of its AI technology.

19 Sources

ChatGPT Crawler Vulnerability: Potential for DDoS Attacks and Prompt Injection

A security researcher has uncovered a vulnerability in ChatGPT's crawler that could potentially be exploited for DDoS attacks and prompt injection, raising concerns about AI security and OpenAI's response to the issue.

4 Sources

ChatGPT Blocks Over 250,000 AI-Generated Election Candidate Images to Combat Misinformation

OpenAI's ChatGPT rejected more than 250,000 requests to generate images of U.S. election candidates in the lead-up to Election Day, as part of efforts to prevent AI-driven misinformation and election interference.

9 Sources

OpenAI Thwarts China-Linked Phishing Attempt on Employees

OpenAI reveals a foiled phishing attack by a suspected China-based group, highlighting cybersecurity risks in the AI industry amid US-China tech rivalry.

5 Sources
