2 Sources
[1]
OpenAI Says ChatGPT Refused to Help Chinese Influence Operations
OpenAI said its ChatGPT AI service refused to assist an individual associated with Chinese law enforcement in planning an online campaign to discredit the Japanese prime minister. In its latest update on disrupting malicious uses of artificial intelligence, the San Francisco-based startup detailed requests by the user that included editing status reports on a wider net of covert influence operations against domestic and foreign adversaries. OpenAI interpreted the evidence it gathered as indicative of a "large-scale, resource-intensive and sustained" effort by Chinese law enforcement to suppress dissent. "I'm not familiar with what you mentioned and do not see any basis for this accusation," Chinese foreign ministry spokeswoman Mao Ning said at a regular news conference on Thursday. The announcement comes on the heels of archrival Anthropic PBC's decision to loosen its hallmark commitment to AI guardrails, even as it resists pressure from the US Department of Defense about dropping such safeguards. OpenAI on Wednesday said it identified a series of misdeeds that included romance scams targeting Indonesians, a social media content farm linked to Russia and more accounts deemed likely to have originated in China seeking information from US officials. The plan targeting Sanae Takaichi, Japan's first female prime minister, surfaced in mid-October, in the days leading up to her election when she criticized the state of human rights in Inner Mongolia, according to OpenAI. The user sought help in crafting a plan that would amplify negative comments about Takaichi, accuse her of far-right leanings and increase online pressure, the company said. The Japanese prime minister later elicited a rebuke from China over remarks about Taiwan, a self-governing island that China considers part of its territory. The ensuing diplomatic spat continues to weigh on tourism and trade between the two countries.
[2]
Chinese law enforcement tried using ChatGPT to discredit Japan's PM, OpenAI says
Why it matters: The operation was unusual and "revealed a lot about China's strategy for covert influence operations and transnational repression," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters.
* "These cyber special operations are large scale, resource intensive and sustained," Nimmo added.
Driving the news: An individual tied to Chinese law enforcement used ChatGPT to continuously edit and polish updates to reports about their so-called "cyber special operations."
* The updates suggest that Chinese law enforcement has built and is expanding a strategy to "suppress dissent and silence critics both online and offline" around the world using hundreds of people, thousands of fake accounts and locally deployed AI models, according to the report.
* The updates also referenced plans for a large-scale influence operation partially powered by Chinese open-weight AI models.
What they're saying: "It's not just digital, it's not just about trolling, it's industrialized," Nimmo told reporters. "It's about trying to hit critics of the (Chinese Communist Party) with everything, everywhere, all at once."
Zoom in: In mid-October, the user attempted to use ChatGPT to design and refine a campaign aimed at discrediting Sanae Takaichi -- who won a landslide election victory last month -- after she publicly criticized the state of human rights in Inner Mongolia.
* Takaichi also infuriated Beijing last year when she suggested that Japan might defend Taiwan in the event of a Chinese invasion.
* The plan hinged on six elements, including posting and amplifying negative comments about Takaichi on social media; sending complaints to Japanese politicians using fake email accounts posing as foreign residents; and accusing Takaichi of far-right leanings.
Yes, but: ChatGPT refused to help the individual refine the campaign.
* Instead, the user returned a few weeks later to update a report indicating that the campaign went ahead, likely using locally hosted Chinese AI models, according to OpenAI.
* That update also suggested the user included a set of hashtags in the social media operations. OpenAI researchers traced those to posts on X, Blogspot and Pixiv, a popular online Japanese community for artists.
The big picture: Many of the other influence operations outlined in OpenAI's report reflect the same old tools and tactics that influence operators typically use in online campaigns -- just supercharged with AI.
* ChatGPT helped Cambodia-based scammers create marketing materials for a fake online dating service used in romance scams.
* The chatbot also helped Russia-based actors translate social media comments into Spanish for an operation targeting Argentina.
What to watch: Whether scammers and nation-state operators change their tactics after OpenAI made their techniques and tells publicly known.
OpenAI disclosed that ChatGPT rejected requests from an individual tied to Chinese law enforcement seeking to design a campaign targeting Japan's Prime Minister Sanae Takaichi. According to OpenAI, the incident points to large-scale "cyber special operations" that use hundreds of people, thousands of fake accounts, and locally deployed AI models to suppress dissent globally.
OpenAI revealed that ChatGPT refused to assist an individual associated with Chinese law enforcement in planning an online campaign to discredit Sanae Takaichi, Japan's first female prime minister [1]. The disclosure, part of OpenAI's latest update on disrupting malicious uses of artificial intelligence, exposes what the company describes as a "large-scale, resource-intensive and sustained" effort by Chinese authorities to suppress dissent both domestically and internationally [1].
Source: Axios
The attempted operation surfaced in mid-October, days before Takaichi's landslide election victory, after she publicly criticized the state of human rights in Inner Mongolia [2]. The user sought ChatGPT's help in crafting a plan that would amplify negative comments about Takaichi, accuse her of far-right leanings, and increase online pressure [1]. The plan hinged on six elements, including posting and amplifying negative comments on social media, sending complaints to Japanese politicians from fake email accounts posing as foreign residents, and accusing Takaichi of extremist political views [2].
Source: Bloomberg
Despite ChatGPT's refusal to participate, the user returned weeks later to update a report indicating the campaign had proceeded anyway, likely using locally hosted Chinese AI models [2]. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters the operation "revealed a lot about China's strategy for covert influence operations and transnational repression" [2]. The updates referenced in the reports suggest Chinese law enforcement has built, and is expanding, a strategy to suppress dissent around the world using hundreds of people, thousands of fake accounts, and locally deployed AI models [2].
"It's not just digital, it's not just about trolling, it's industrialized," Nimmo explained. "It's about trying to hit critics of the Chinese Communist Party with everything, everywhere, all at once" [2]. The user continuously edited and polished updates to reports about their so-called "cyber special operations," giving OpenAI researchers unusual insight into the scale and methodology of these state-backed groups [2].
OpenAI identified the campaign targeting Takaichi as part of a broader pattern of covert activity. The company said it discovered a series of misdeeds that included romance scams targeting Indonesians, a social media content farm linked to Russia, and additional accounts likely originating in China seeking information from US officials. ChatGPT helped Cambodia-based scammers create marketing materials for a fake online dating service used in romance scams, and it also assisted Russia-based actors in translating social media comments into Spanish for an operation targeting Argentina [2].
The update included a set of hashtags used in the social media operations, which OpenAI researchers traced to posts on X, Blogspot, and Pixiv, a popular online Japanese community for artists [2]. The campaign demonstrates how nation-state operators are attempting to leverage AI models to supercharge traditional disinformation tactics. The disclosure comes as OpenAI's archrival Anthropic PBC loosens its commitment to AI guardrails, even while resisting pressure from the US Department of Defense to drop such safeguards. China's foreign ministry spokeswoman Mao Ning denied the accusations at a regular news conference, stating, "I'm not familiar with what you mentioned and do not see any basis for this accusation." The diplomatic tension follows Takaichi's remarks about Taiwan, which elicited a rebuke from China and continues to affect tourism and trade between the two countries. Observers will be watching whether scammers and nation-state operators change their tactics now that OpenAI has made their techniques publicly known [2].
Summarized by Navi