4 Sources
[1]
OpenAI bans some Chinese, Russian accounts using AI for evil
It also banned some suspected Russian accounts trying to create influence campaigns and malware

OpenAI has banned ChatGPT accounts believed to be linked to Chinese government entities attempting to use AI models to surveil individuals and social media accounts. In its most recent threat report [PDF] published today, the GenAI giant said that these users usually asked ChatGPT to help design tools for large-scale monitoring and analysis - but stopped short of asking the model to perform the surveillance activities.

"What we saw and banned in those cases was typically threat actors asking ChatGPT to help put together plans or documentation for AI-powered tools, but not then to implement them," Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, told reporters.

One now-banned user, suspected of using a VPN to access the AI service from China, asked ChatGPT to design promotional materials and project plans for a social media listening tool, described as a "probe," that could scan X, Facebook, Instagram, Reddit, TikTok, and YouTube for what the user described as extremist speech and ethnic, religious, and political content. This user claimed a government client wanted this scanning tool, but stopped short of using the model to monitor social media. OpenAI said it's unable to verify whether the Chinese government ended up using any such tool.

In two other cases, the company banned one user who asked ChatGPT to identify funding sources for an X account that criticized the Chinese government, and another who asked ChatGPT to identify petition organizers in Mongolia. In both, we're told, OpenAI's models only provided publicly available information - not identities, funding sources, or other sensitive details.

"Cases like these are limited snapshots, but they do give us important insights into how authoritarian regimes might abuse future AI capabilities," Nimmo said. "They point to something about the direction of travel, even if they also suggest that the destination is still some way away."

Since the company started producing threat reports in February 2024, OpenAI said it has banned more than 40 networks that violated its usage policies. Also since that time, the threat groups and individuals attempting to use AI for evil have been employing the models to improve their existing tradecraft, not to develop entirely new cyberattacks or workflows. That still seems to be the case, according to OpenAI execs.

More recently, however, some of the disrupted accounts appear to be using multiple AI models to achieve their nefarious goals. "One China-linked cluster that we investigated, for example, used ChatGPT to draft phishing lures and then explored another model, DeepSeek, to automate mass targeting," said Michael Flossman, who leads OpenAI's threat intelligence team.

Similarly, a set of suspected and now-banned Russian accounts used ChatGPT to generate video prompts for an influence operation dubbed Stop News, but then attempted to use other companies' AI tools to produce the videos that were later posted on YouTube and TikTok. OpenAI could not independently confirm which other models this group used. "We're seeing adversaries routinely use multiple AI tools, hopping between models for small gains in speed or automation," Flossman said.
In another example of attempted model abuse originating from Russia, the company banned accounts asking ChatGPT to develop and refine malware, including a remote-access trojan, credential stealers, and features to help malware evade detection.

These accounts appear to be linked to Russian-speaking criminal groups, the company wrote, with the threat intel team observing them posting about their activities in a Telegram channel connected to a specific criminal gang. OpenAI execs declined to attribute the malware-making endeavors to a particular cybercrime crew, but said they have "medium to high confidence on who is behind it." ®
[2]
Foreign adversaries are using multiple AI tools, OpenAI warns
Why it matters: In the cases OpenAI discovered, the adversaries typically turned to ChatGPT to help plan their schemes, then used other models to carry them out, reflecting the range of applications for AI tools in such operations.

Zoom in: OpenAI banned several accounts tied to nation-state campaigns that seemed to be using multiple AI models to improve their operations.

* A Russia-based actor that was generating content for a covert influence operation used ChatGPT to write prompts seemingly for another AI video model.
* A cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation they wanted to run on China-based model DeepSeek.
* OpenAI also confirmed that an actor the company previously disrupted was the same one Anthropic recently flagged in a threat report, suggesting the actor was using both tools.

Between the lines: OpenAI mostly observed threat actors using ChatGPT to improve their existing tactics, rather than creating new ones, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters in a call ahead of the report's release.

* However, the multi-model approach means that investigators have "just a glimpse" at how threat actors are using any specific model, Nimmo said.

The intrigue: Nation-state hackers and scammers are also learning to hide the telltale signs of AI usage, OpenAI's research team found. One scam network asked ChatGPT to remove em dashes from its writing, for example.

The big picture: Much like the U.S. government, foreign adversaries have been exploring ways to use ChatGPT and similar tools for years.

* In the latest report, OpenAI said it had banned accounts that appeared to be tied to both China-based entities and Russian-speaking criminal groups for using the model to help develop malware and write phishing emails.
* The company also banned accounts linked to Chinese government entities, including some that were asking OpenAI's models to "generate work proposals for large-scale systems designed to monitor social media conversations," according to the report.

What to watch: The campaigns OpenAI identified didn't seem to be very effective, per the report. But nation-state entities are still early in their AI experimentation.
[3]
US foreign adversaries use ChatGPT with other AI models in cyber operations: Report
Malicious actors from U.S. foreign adversaries used ChatGPT jointly with other AI models to conduct various cyber operations, according to a new OpenAI report.

Users linked to China and Russia relied on OpenAI's technology in conjunction with other models, such as China's DeepSeek, to conduct phishing campaigns and covert influence operations, the report found. "Increasingly, we have disrupted threat actors who appeared to be using multiple AI models to achieve their aims," OpenAI noted.

A cluster of ChatGPT accounts that showed signs consistent with Chinese government intelligence efforts used the AI model to generate content for phishing campaigns in multiple languages, in addition to developing tools and malware. This group also looked at using DeepSeek to automate this process, such as analyzing online content to generate a list of email targets and produce content that would likely appeal to them. OpenAI banned the accounts but noted it could not confirm whether they ultimately used automation with other AI models.

Another cluster of accounts based in Russia used ChatGPT to develop scripts, SEO-optimized descriptions and hashtags, translations, and prompts for generating news-style videos with other AI models. The activity appears to be part of a Russian influence operation that OpenAI previously identified, which posted AI-generated content across websites and social media platforms, the report noted. Its latest content criticized France and the U.S. for their role in Africa while praising Russia. The accounts, now banned by OpenAI, also produced content critical of Ukraine and its supporters. However, the ChatGPT maker found that these efforts gained little traction.

OpenAI separately noted in the report that it banned several accounts seemingly linked to the Chinese government that sought to use ChatGPT to develop proposals for large-scale monitoring, such as tracking social media or movements. "While these uses appear to have been individual rather than institutional, they provide a rare snapshot into the broader world of authoritarian abuses of AI," the company wrote.
[4]
OpenAI bans suspected China-linked accounts for seeking surveillance proposals
In its latest public threat report, OpenAI said some individuals had asked its chatbot to outline social media "listening" tools and other monitoring concepts, violating the startup's national security policy.

OpenAI said on Tuesday it has banned several ChatGPT accounts with suspected links to Chinese government entities after the users asked for proposals to monitor social media conversations.

The San Francisco-based firm's report raises safety concerns over potential misuse of generative AI amid growing competition between the U.S. and China to shape the technology's development and rules.

OpenAI said it also banned several Chinese-language accounts that used ChatGPT to assist phishing and malware campaigns and asked the model to research additional automation that could be achieved through China's DeepSeek. The Chinese embassy in the U.S. did not immediately respond to a request for comment on the report.

It also banned accounts tied to suspected Russian-speaking criminal groups that used the chatbot to help develop certain malware, OpenAI said.

The Microsoft-backed startup has disrupted and reported more than 40 networks since it began public threat reporting in February last year, and its models refused overtly malicious prompts, the AI company added. "We found no evidence of new tactics or that our models provided threat actors with novel offensive capabilities," the company said in the report.

OpenAI, which now has more than 800 million weekly ChatGPT users, became the world's most valuable startup at a $500 billion valuation after completing a secondary share sale last week.
OpenAI's latest threat report details bans on accounts linked to Chinese and Russian entities for misusing its AI models. The report highlights attempts at surveillance, influence campaigns, and malware development, often spanning multiple AI tools.
In its latest threat report, OpenAI revealed that it has banned multiple ChatGPT accounts suspected of being associated with Chinese government entities and Russian-speaking criminal groups. These accounts were found to be misusing AI models for various nefarious purposes, including surveillance, influence campaigns, and malware development [1].

OpenAI reported banning several accounts linked to Chinese government entities that sought to use ChatGPT for developing large-scale monitoring systems. One user, suspected of using a VPN to access the service from China, asked ChatGPT to design promotional materials and project plans for a social media "listening" tool. This tool was described as a "probe" capable of scanning major social media platforms for what the user termed extremist speech and ethnic, religious, and political content [1]. In other instances, banned users attempted to use ChatGPT to identify funding sources for an X account critical of the Chinese government and to pinpoint petition organizers in Mongolia [1].

OpenAI also disrupted accounts associated with Russian-speaking entities engaged in influence operations and malware development. A set of suspected Russian accounts used ChatGPT to generate video prompts for an influence operation called "Stop News," later attempting to use other AI tools to produce videos for distribution on YouTube and TikTok [1][2]. Additionally, the company banned accounts asking ChatGPT to develop and refine malware, including remote-access trojans and credential stealers. These accounts were linked to Russian-speaking criminal groups, as evidenced by their activities in a specific Telegram channel [1].

A significant finding from the report is the emerging trend of threat actors using multiple AI models in their operations. OpenAI observed that adversaries are routinely hopping between different AI tools for small gains in speed or automation [2]. For instance, a cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation techniques they intended to run on DeepSeek, a China-based AI model [2][3].

Despite these concerning attempts, OpenAI noted that the identified campaigns did not seem to be very effective. The company emphasized that nation-state entities are still in the early stages of their AI experimentation [2].

Since February 2024, when OpenAI began producing threat reports, the company has banned more than 40 networks that violated its usage policies [1]. OpenAI continues to monitor and disrupt such activities, stating that it found no evidence of new tactics or that its models provided threat actors with novel offensive capabilities [4].
Summarized by Navi