OpenAI Bans Chinese and Russian Accounts Misusing AI for Surveillance and Cyberattacks

Reviewed by Nidhi Govil


OpenAI's latest threat report reveals the banning of accounts linked to Chinese and Russian entities for misusing AI models. The report highlights attempts at surveillance, influence campaigns, and malware development using multiple AI tools.

OpenAI Uncovers and Bans Accounts Linked to Chinese and Russian Entities Misusing AI Models

In its latest threat report, OpenAI revealed that it has banned multiple ChatGPT accounts suspected of being associated with Chinese government entities and Russian-speaking criminal groups. These accounts were found to be misusing AI models for nefarious purposes, including surveillance, influence campaigns, and malware development [1].

Chinese Entities Attempt to Leverage AI for Surveillance

OpenAI reported banning several accounts linked to Chinese government entities that sought to use ChatGPT to develop large-scale monitoring systems. One user, suspected of using a VPN to access the service from China, asked ChatGPT to design promotional materials and project plans for a social media "listening" tool. This tool was described as a "probe" capable of scanning major social media platforms for what the user termed extremist speech, as well as ethnic, religious, and political content [1].

In other instances, banned users attempted to use ChatGPT to identify funding sources for an X account critical of the Chinese government and to pinpoint petition organizers in Mongolia [1].

Russian Actors Exploit AI for Influence Operations and Malware

OpenAI also disrupted accounts associated with Russian-speaking entities engaged in influence operations and malware development. A set of suspected Russian accounts used ChatGPT to generate video prompts for an influence operation called "Stop News," later attempting to use other AI tools to produce videos for distribution on YouTube and TikTok [1][2].

Additionally, the company banned accounts that asked ChatGPT to develop and refine malware, including remote-access trojans and credential stealers. These accounts were linked to Russian-speaking criminal groups, as evidenced by their activity in a specific Telegram channel [1].

Multi-Model Approach: A New Trend in AI Exploitation

A significant finding from the report is the emerging trend of threat actors using multiple AI models in their operations. OpenAI observed that adversaries are routinely hopping between different AI tools for small gains in speed or automation [2].

For instance, a cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation techniques they intended to run on DeepSeek, a China-based AI model [2][3].

Limited Effectiveness and Ongoing Monitoring

Despite these concerning attempts, OpenAI noted that the identified campaigns did not appear to be particularly effective. The company emphasized that nation-state entities are still in the early stages of AI experimentation [2].

Since February 2024, when OpenAI began publishing threat reports, the company has banned more than 40 networks that violated its usage policies [1]. OpenAI continues to monitor and disrupt such activity, stating that it found no evidence of new tactics or of its models providing threat actors with novel offensive capabilities [4].

TheOutpost.ai


© 2025 Triveous Technologies Private Limited