Curated by THEOUTPOST
On Sat, 22 Feb, 12:11 AM UTC
15 Sources
[1]
OpenAI cracks down on users developing social media surveillance tool using ChatGPT
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust.

Doh! It's proven that anytime you release something to the World Wide Web, some people - usually a lot - will abuse it. So it's probably not surprising that people are abusing ChatGPT in ways that violate OpenAI's policies and privacy laws. OpenAI can't catch everything, but it brings out the ban hammer when it does.

OpenAI recently published a report highlighting some attempted misuses of its ChatGPT service. The company caught users in China exploiting ChatGPT's "reasoning" capabilities to develop a tool to surveil social media platforms. They asked the chatbot to advise them on creating a business strategy and to check the tool's code.

OpenAI noted that its mission is to build "democratic" AI models - technology that should benefit everyone - which it protects by enforcing some common-sense rules. The company has actively looked for potential misuses or disruptions by various actors and described a couple of operations coming out of China.

The most interesting case involves a set of ChatGPT accounts focused on developing a surveillance tool. The accounts used ChatGPT's AI model to generate detailed descriptions and sales pitches for a social media listening tool. The software, powered by non-OpenAI models, would generate real-time reports on Western protests and send them to Chinese security services. The users also had ChatGPT debug the tool's code. OpenAI policy explicitly prohibits using its AI tech for surveillance tasks, including unauthorized monitoring on behalf of governments and authoritarian regimes, so OpenAI banned those accounts for disregarding the platform's rules.

The Chinese actors attempted to conceal their location by using a VPN. They also used remote access tools such as AnyDesk, along with VoIP services, to appear to be working from the US. However, the accounts followed a time pattern consistent with Chinese business hours, and the users prompted ChatGPT in Chinese. The surveillance tool they were developing used Meta's Llama AI models to generate documents based on the collected surveillance data.

Another instance of ChatGPT abuse involved Chinese users generating end-of-year performance reports describing phishing email campaigns run on behalf of clients. OpenAI also banned an account that leveraged the LLM in a disinformation campaign against Cai Xia, a Chinese dissident currently living in the US.

OpenAI threat intelligence investigator Ben Nimmo told The New York Times that this was the first time the company caught people trying to exploit ChatGPT to make an AI-based surveillance tool. However, with millions of users mainly relying on it for legitimate purposes, cybercriminal activity is the exception, not the norm.
[2]
OpenAI bans multiple accounts found to be misusing ChatGPT
Misinformation and surveillance campaigns were uncovered

OpenAI has confirmed it recently identified a set of accounts involved in malicious campaigns, and banned the users responsible. The banned accounts involved in the 'Peer Review' and 'Sponsored Discontent' campaigns likely originate from China, OpenAI said, and "appear to have used, or attempted to use, models built by OpenAI and another U.S. AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles".

AI has facilitated a rise in disinformation, and is a useful tool for threat actors looking to disrupt elections and undermine democracy in unstable or politically divided nations - and state-sponsored campaigns have used the technology to their advantage.

The 'Peer Review' campaign used ChatGPT to generate "detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services", OpenAI confirmed. As part of this surveillance campaign, the threat actors used the model to "edit and debug code and generate promotional materials" for suspected AI-powered social media listening tools - although OpenAI was unable to identify posts on social media resulting from the campaign.

ChatGPT accounts participating in the 'Sponsored Discontent' campaign were used to generate comments in English and news articles in Spanish, consistent with 'spamouflage' behavior, primarily using anti-American rhetoric, probably to spark discontent in Latin America, namely Peru, Mexico, and Ecuador. This isn't the first time Chinese state-sponsored actors have been identified using 'spamouflage' tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low-quality and containing false information.
[3]
OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns
OpenAI on Friday revealed that it banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool. The social media listening tool is said to likely originate from China and is powered by one of Meta's Llama models, with the accounts in question using the AI company's models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West and sharing the insights with Chinese authorities.

The campaign has been codenamed Peer Review owing to the "network's behavior in promoting and reviewing surveillance tooling," researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted, adding the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

In one instance flagged by the company, the actors used ChatGPT to debug and modify source code that's believed to run the monitoring software, referred to as "Qianyue Overseas Public Opinion AI Assistant." Besides using its model as a research tool to surface publicly available information about think tanks in the United States, and government officials and politicians in countries like Australia, Cambodia and the United States, the cluster has also been found to leverage ChatGPT access to read, translate and analyze screenshots of English-language documents. Some of the images were announcements of Uyghur rights protests in various Western cities, and were likely copied from social media. It's currently not known if these images were authentic.

OpenAI also said it disrupted several other clusters that were found abusing ChatGPT for various malicious activities, including a North Korea-linked deceptive employment scheme, a Cambodia-based romance-baiting scam network, and an influence operation that generated anti-American, Spanish-language articles.

The development comes as AI tools are being increasingly used by bad actors to facilitate cyber-enabled disinformation campaigns and other malicious operations. Last month, Google Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia used its Gemini AI chatbot to improve multiple phases of the attack cycle and conduct research into topical events, or perform content creation, translation, and localization.

"The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers," OpenAI said. "Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies."
[4]
OpenAI cracks down on ChatGPT scammers
OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use. The company has released a report detailing how it has observed the tactics of bad actors on its platform as it becomes more popular. OpenAI indicated it has removed dozens of accounts suspected of using ChatGPT in unauthorized ways, for tasks ranging "from debugging code to generating content for publication on various distribution platforms."

The company also recently announced reaching a 400 million weekly active user milestone, detailing that its usership has increased by more than 100 million in less than three months as more enterprises and developers utilize its tools. However, ChatGPT is also a free service that can be accessed globally. As the moral and ethical aspects of its use have long been in question, OpenAI has had to come to terms with the fact that some entities have ulterior motives for the platform.

"OpenAI's policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts," the company said in its report.

In its report, OpenAI discussed having to confront nefarious activity taking place on ChatGPT. The company highlighted several case studies where it uncovered abuse and responded by banning the accounts found to be using the tool with malicious intent. In one instance, OpenAI detailed an account that wrote disparaging news articles about the US, which were published in Latin American outlets under a Chinese company's byline. Another case, traced to North Korea, was found to be generating resumes and job profiles for fictitious job applicants. According to OpenAI, the account may have been used to apply for jobs at Western companies. Yet another case uncovered accounts believed to have originated in Cambodia that used ChatGPT for translation and to generate comments for networks of "romance scammers" that operate across several social media platforms, including X, Facebook, and Instagram.

OpenAI has confirmed that it has shared its findings with industry contemporaries, such as Meta, whose platforms might be affected by the activity uncovered on ChatGPT.
[5]
OpenAI has been actively banning users if they're suspected of malicious activities
Restricted accounts used AI to create scam networks and fictitious resumes

OpenAI has removed numerous user accounts globally after suspecting its artificial intelligence tool, ChatGPT, was being used for malicious purposes, according to a new report. Scammers have been using AI to enhance their attacks, OpenAI notes in the report, which outlines the AI trends and techniques malicious actors are employing, including case studies of attacks the company has thwarted.

Now past 400 million weekly active users, ChatGPT is freely accessible globally. In its report, OpenAI says it repeatedly "saw threat actors using AI for multiple tasks at once, from debugging code to generating content for publication on various distribution platforms." "While no one entity has a monopoly on detection, connecting accounts and patterns of behavior has in some cases allowed us to identify previously unreported connections between apparently unrelated sets of activity across platforms," it wrote.

Among the cases OpenAI has disrupted, the company recently banned a ChatGPT account that generated news articles denigrating the US, which were published in mainstream news outlets in Latin America under a Chinese company's byline. The company also banned accounts, believed to originate from North Korea, that used AI to generate resumes and online profiles for fictitious job applicants. The company speculated that these profiles were created in hopes of getting jobs at Western companies. In another instance, OpenAI identified a group of accounts potentially linked to Cambodia that used the chatbot to translate and generate comments for a "romance baiting" scam network across social media and communication platforms, including X, Facebook and Instagram.

The report outlines several other instances blocked by the company; however, it does not specify how many "dozens" of accounts in total were removed, or the time frame in which the removals occurred.

OpenAI has been on the front foot in stopping these malicious uses of ChatGPT, and has reiterated that it won't tolerate misuse of its technology. "OpenAI's policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts," it wrote. By sharing insights with industry peers such as Meta, the company hopes to enhance "our collective ability to detect, prevent, and respond to such threats while advancing our shared safety".
[6]
OpenAI bans accounts appearing to work on a Chinese surveillance tool
OpenAI recently banned several accounts that had been using ChatGPT to write sales pitches and debug code for a suspected social media surveillance tool that likely originated in China, the company said -- part of a broader effort by the AI startup to police malicious uses of its powerful AI models.

According to a report the San Francisco startup released on Friday, the accounts were using ChatGPT to advertise and augment what they claimed was an AI assistant capable of collecting real-time data and reports about anti-China protests in the US, UK and other Western countries. That information would then be relayed to Chinese authorities, the report said. The findings come at a time of growing concern in the US around Chinese use of American technology to advance its own interests.

"This is a pretty troubling glimpse into the way one nondemocratic actor tried to use democratic or US-based AI for nondemocratic purposes, according to the materials they were generating themselves," said Ben Nimmo, OpenAI's principal investigator on the company's intelligence and investigations team, during a press call Thursday. By publishing such cases, Nimmo said OpenAI aims to shed light on how "authoritarian regimes may try to leverage US-built AI, democratic AI, against the US and allied countries, as well as their own people."

OpenAI said that the accounts in the network referenced using other AI tools to develop their code, including a version of Llama, the open source model developed by Meta Platforms Inc. In a statement, Meta said that if its service was involved, it was likely one of many such tools available to the users, including AI models made in China. OpenAI noted it does not have visibility into whether this code was deployed.

The software, called "Qianyue Overseas Public Opinion AI Assistant," couldn't be independently verified by OpenAI, though the startup had access to the text of apparent marketing materials. The marketing copy detailed how the purpose of the "social listening" software was to send surveillance reports to Chinese authorities, intelligence agents and staff at Chinese embassies. The software appeared to be specifically focused on identifying online conversations in Western countries about demonstrations related to human rights in China. Descriptions of the software said it pulled from social media conversations on platforms such as X, Facebook and Instagram.

It is against OpenAI's policies to use its AI for communications surveillance or unauthorized monitoring of individuals, including "on behalf of governments and authoritarian regimes that seek to suppress personal freedoms and rights," according to the company's threat report.

In recent months, OpenAI has been warning politicians in the US about what it sees as a growing economic and national security threat from Chinese-built AI, particularly in the wake of the surprisingly competitive AI models from Chinese startup DeepSeek. Some China hawks in the US have criticized Meta for open sourcing its AI tools, saying that it is empowering Chinese AI companies to make advancements. While OpenAI's models are currently kept proprietary, the company has recently been considering open sourcing models in line with growing competition from DeepSeek and others.

In a statement, Meta pointed to the growing availability of AI models globally, saying that the limited availability of some Western technology may not matter much when it comes to bad actors. "China is already investing more than a trillion dollars to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast as companies in the US," a representative for the company said.

In its report, OpenAI also shared several other examples of accounts that it banned for misusing its tools -- including ones linked to Iranian influence operations using ChatGPT to generate social media posts and articles; another appearing to represent a deceptive employment scheme that mimicked scams linked to North Korea; and another set of accounts likely linked to China that were generating Spanish-language articles critical of the US government.
[7]
OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance
This is the first time the company has caught an effort like this.

OpenAI has banned the accounts of a group of Chinese users who had attempted to use ChatGPT to debug and edit code for an AI social media surveillance tool, the company said on Friday. The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program that those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times.

Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review in which it claims to have written phishing emails on behalf of clients in China. "Assessing the impact of this activity would require inputs from multiple stakeholders, including operators of any open-source models who can shed a light on this activity," OpenAI said of the operation's efforts to use ChatGPT to edit code for the AI social media surveillance tool.

Separately, OpenAI said it recently banned an account that used ChatGPT to generate social media posts critical of Cai Xia, a Chinese political scientist and dissident who lives in the US in exile. The same group also used the chatbot to generate articles in Spanish critical of the US. These articles were published by "mainstream" news organizations in Latin America and often attributed to either an individual or a Chinese company.
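The report does not say how OpenAI weighs these attribution signals, but the simplest of them - activity clustering in a particular time zone's business hours - is easy to illustrate. Below is a minimal, hypothetical Python sketch of that one signal; the function name, the 09:00-18:00 window, and the 0.8 threshold are illustrative assumptions, not details from OpenAI's report.

```python
# Hypothetical sketch of a time-of-day attribution heuristic.
# This is NOT OpenAI's actual detection code; it only illustrates the
# "time pattern consistent with mainland Chinese business hours" signal.
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))  # China Standard Time (UTC+8)

def fraction_in_window(timestamps_utc, start_hour=9, end_hour=18):
    """Share of events whose CST hour falls in [start_hour, end_hour)."""
    if not timestamps_utc:
        return 0.0
    hits = sum(start_hour <= ts.astimezone(CST).hour < end_hour
               for ts in timestamps_utc)
    return hits / len(timestamps_utc)

# Assumed example data: prompt timestamps for one account, in UTC.
events = [
    datetime(2025, 2, 17, 2, 30, tzinfo=timezone.utc),  # 10:30 CST
    datetime(2025, 2, 17, 6, 5, tzinfo=timezone.utc),   # 14:05 CST
    datetime(2025, 2, 18, 9, 45, tzinfo=timezone.utc),  # 17:45 CST
]

if fraction_in_window(events) > 0.8:  # threshold chosen arbitrarily
    print("Activity clusters in mainland Chinese business hours")
```

In practice, a single signal like this proves little on its own; the report pairs it with the language of the prompts and a volume and variety of use consistent with manual prompting rather than automation.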
[8]
OpenAI Bans Accounts in China, North Korea Over AI Misuse
OpenAI has announced that it banned some users' accounts in China and North Korea for malicious use of its models, including social media surveillance and public opinion manipulation, according to its February 21, 2025, report. OpenAI highlighted two incidents carried out by threat actors it believes originated from China, including attempts to use models developed by OpenAI as well as another U.S.-based lab to conduct surveillance and generate "anti-American, Spanish-language articles". It noted that this marks the first time it has observed an influence operation (IO) of this kind reaching mainstream media. "This is the first time we've observed a China-origin influence operation successfully planting long-form articles in Latin American media to criticize the U.S.," OpenAI observed in the report. According to the Breakout Scale, which rates IO impact from Category 1 (lowest) to Category 6 (highest), this incident qualifies as Category 4 (breakout to mainstream media) due to the wide readership of the Latin American news websites involved.

The other incident, which the company also attributes to China-linked actors, revolved around the development of a 'social media listening tool' - something it said was used by the threat actors to "feed real-time reports about protests in the West to the Chinese security services". Purportedly named "Qianyue Overseas Public Opinion AI Assistant," the tool was developed to "ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit". The threat actors also used it to identify conversations on social media related to China-based political and social topics, with an emphasis on any calls for social demonstrations inside the country, and fed the insights they found to Chinese authorities, the report further highlighted.

The company also said it had identified behavior consistent with an IT worker scheme based in North Korea, referring to attributions from Microsoft and Google. The activity in question was similar in nature to "North Korean state efforts to funnel income through deceptive hiring schemes, where individuals fraudulently obtain positions at Western companies to support the regime's financial network," it said.

This report closely follows recent attempts by the United States to restrict and even control the flow of AI chips around the world, enforcing a licensing requirement for companies to ensure the chips do not end up with adversaries such as China, Russia, and Iran. Chipmakers, including the likes of Nvidia, appear to have been affected the most by this turbulence, facing operational hazards due to export bans, antitrust probes, and more.

Such restrictions sit in curious contrast to U.S.-based companies such as OpenAI catering to U.S. government operations - the company went so far as to launch a tailored version of ChatGPT for government use in January this year. While the report's apprehensions about AI misuse are not misplaced, it raises a broader question about whom popular models are geared towards and what constitutes ideal usage. The AI startup, for its part, lists within its Usage Policies provisions that users must abide by whenever they use the company's products, including ChatGPT, labs.openai.com, and the OpenAI API.
A few of the restrictions that the bad actors' conduct would contravene are:
- Using OpenAI's services for communications surveillance, or for unauthorized monitoring of individuals on behalf of governments and authoritarian regimes that seek to suppress personal freedoms and rights
- Using output from OpenAI's tools for fraud or scams
- Using OpenAI's services for covert influence operations or other deceptive activity
[9]
OpenAI Bans Accounts Appearing to Work on a Surveillance Tool
OpenAI recently banned several accounts that had been using ChatGPT to write sales pitches and debug code for a suspected social media surveillance tool that likely originated in China, the company said -- part of a broader effort by the AI startup to police malicious uses of its powerful AI models. According to a report the San Francisco startup released on Friday, the accounts were using ChatGPT to advertise and augment what they claimed was an AI assistant capable of collecting real-time data and reports about anti-China protests in the US, UK and other Western countries. That information would then be relayed to Chinese authorities, the report said.
[10]
OpenAI removes users in China, North Korea suspected of malicious activities
(Reuters) - OpenAI has removed accounts of users from China and North Korea who the artificial intelligence company believes were using its technology for malicious purposes including surveillance and opinion-influence operations, the ChatGPT maker said on Friday. The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations. The company gave no indication how many accounts were banned or over what time period the action occurred. In one instance, users had ChatGPT generate news articles in Spanish that denigrated the United States and were published by mainstream news outlets in Latin America under a Chinese company's byline. In a second instance, malicious actors potentially connected to North Korea used AI to generate resumes and online profiles for fictitious job applicants, with the goal of fraudulently getting jobs at Western companies. Another set of ChatGPT accounts that appeared to be connected to a financial fraud operation based in Cambodia used OpenAI's technology to translate and generate comments across social media and communication platforms including X and Facebook. The U.S. government has expressed concerns about China's alleged use of artificial intelligence to repress its population, spread misinformation and undermine the security of the United States and its allies. OpenAI's ChatGPT is the most popular AI chatbot, and the company's weekly active users have surpassed 400 million. It is in talks to raise up to $40 billion at a $300 billion valuation, in what could be a record single funding round for a private company. (Reporting by Anna Tong in San Francisco; Editing by Cynthia Osterman)
[11]
OpenAI removes users in China, North Korea suspected of malicious activities
Feb 21 (Reuters) - OpenAI has removed accounts of users from China and North Korea who the artificial intelligence company believes were using its technology for malicious purposes including surveillance and opinion-influence operations, the ChatGPT maker said on Friday. The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations. The company gave no indication how many accounts were banned or over what time period the action occurred. In one instance, users had ChatGPT generate news articles in Spanish that denigrated the United States and were published by mainstream news outlets in Latin America under a Chinese company's byline. In a second instance, malicious actors potentially connected to North Korea used AI to generate resumes and online profiles for fictitious job applicants, with the goal of fraudulently getting jobs at Western companies. Another set of ChatGPT accounts that appeared to be connected to a financial fraud operation based in Cambodia used OpenAI's technology to translate and generate comments across social media and communication platforms including X and Facebook. The U.S. government has expressed concerns about China's alleged use of artificial intelligence to repress its population, spread misinformation and undermine the security of the United States and its allies. OpenAI's ChatGPT is the most popular AI chatbot, and the company's weekly active users have surpassed 400 million. It is in talks to raise up to $40 billion at a $300 billion valuation, in what could be a record single funding round for a private company. Reporting by Anna Tong in San Francisco; Editing by Cynthia Osterman
[12]
OpenAI Removes Chinese Accounts Which Published Propaganda in Latin American Newspapers
OpenAI says it has removed the accounts of several users linked to China, which were used to generate propaganda material published in mainstream newspapers in Latin America. In an updated report spotted by Reuters, OpenAI pointed to a number of incidents where it believes that ChatGPT was used to generate Spanish-language newspaper articles criticizing the US, which were then published in well-known newspapers in Mexico, Peru and Ecuador. The articles centered on political divisions in the United States and current affairs, in particular the topics of drug use and homelessness. The users reportedly prompted ChatGPT to generate the Spanish-language articles in Chinese, during mainland Chinese working hours. OpenAI noted that they used ChatGPT to translate receipts from Latin American newspapers, indicating the articles may well have been paid placements. ChatGPT was also allegedly used by the accounts to generate short-form material, including comments critical of Cai Xia, a well-known Chinese political dissident, which were then posted on X by users claiming to be from the US or India. OpenAI believes some of the activity was consistent with the covert influence operation known as "Spamouflage." This was a major Chinese disinformation operation that was spotted on over 50 social media platforms, including Facebook, Instagram, TikTok, Twitter, and Reddit. The campaign, identified by Meta in 2023, targeted users in the US, Taiwan, UK, Australia, and Japan with positive information about China. This isn't the first time OpenAI has come clean about how its tools have been used for the propaganda efforts of foreign powers. In May 2024 OpenAI reported on how groups based in Russia, China, Iran and Israel used the company's AI models to generate short comments on social media, as well as translate and proofread text in various languages. For example, a Russian propaganda group known as Bad Grammar used OpenAI's technology to generate fake replies about Ukraine to specific posts on Telegram in English and Russian. But though we've seen international propaganda groups leverage OpenAI's tool before, OpenAI thinks the recent incident is unique due to its targeting of mainstream media, calling this "a previously unreported line of effort, which ran in parallel to more typical social media activity, and may have reached a significantly wider audience."
[13]
OpenAI Removes Users in China, North Korea Suspected of Malicious Activities
(Reuters) - OpenAI has removed accounts of users from China and North Korea who the artificial intelligence company believes were using its technology for malicious purposes including surveillance and opinion-influence operations, the ChatGPT maker said on Friday. The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations. The company gave no indication how many accounts were banned or over what time period the action occurred. In one instance, users had ChatGPT generate news articles in Spanish that denigrated the United States and were published by mainstream news outlets in Latin America under a Chinese company's byline. In a second instance, malicious actors potentially connected to North Korea used AI to generate resumes and online profiles for fictitious job applicants, with the goal of fraudulently getting jobs at Western companies. Another set of ChatGPT accounts that appeared to be connected to a financial fraud operation based in Cambodia used OpenAI's technology to translate and generate comments across social media and communication platforms including X and Facebook. The U.S. government has expressed concerns about China's alleged use of artificial intelligence to repress its population, spread misinformation and undermine the security of the United States and its allies. OpenAI's ChatGPT is the most popular AI chatbot, and the company's weekly active users have surpassed 400 million. It is in talks to raise up to $40 billion at a $300 billion valuation, in what could be a record single funding round for a private company. (Reporting by Anna Tong in San Francisco; Editing by Cynthia Osterman)
[14]
Conspiracy against US? OpenAI-ChatGPT removes China, North Korea users for malicious surveillance
OpenAI-ChatGPT has reportedly removed suspected malicious China and North Korea users. OpenAI has removed accounts of users from China and North Korea who the artificial intelligence company believes were using its technology for malicious purposes including surveillance and opinion-influence operations, the ChatGPT maker said on Friday, as per a report. The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations. The company gave no indication how many accounts were banned or over what time period the action occurred, Reuters reported. In one instance, users had ChatGPT generate news articles in Spanish that denigrated the United States and were published by mainstream news outlets in Latin America under a Chinese company's byline. In a second instance, malicious actors potentially connected to North Korea used AI to generate resumes and online profiles for fictitious job applicants, with the goal of fraudulently getting jobs at Western companies, as per the Reuters report. Another set of ChatGPT accounts that appeared to be connected to a financial fraud operation based in Cambodia used OpenAI's technology to translate and generate comments across social media and communication platforms including X and Facebook. The U.S. government has expressed concerns about China's alleged use of artificial intelligence to repress its population, spread misinformation and undermine the security of the United States and its allies, Reuters reported. OpenAI's ChatGPT is the most popular AI chatbot, and the company's weekly active users have surpassed 400 million. It is in talks to raise up to $40 billion at a $300 billion valuation, in what could be a record single funding round for a private company.
[15]
OpenAI finds new Chinese influence campaigns using its tools
Why it matters: AI's potential to supercharge disinformation and speed the work of nation-state-backed cyberattacks is steadily moving from scary theory to complex reality.

Driving the news: OpenAI published its latest threat report on Friday, identifying several examples of efforts to misuse ChatGPT and its other tools.

What they're saying: "As far as we know this is the first time a Chinese influence operation has been found translating long-form articles into Spanish and publishing them in Latin America," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said in a briefing with reporters.

Another campaign, which OpenAI dubbed "peer review," consisted of accounts using ChatGPT to generate marketing materials for a social media listening tool that its creators claimed had been used to send reports of protests to the Chinese security services.

Between the lines: OpenAI, which started publishing threat reports last year, says that it's doing so "to inform efforts to understand and prepare for how the P.R.C. or other authoritarian regimes may try to leverage AI against the U.S. and allied countries, as well as their own people."

Yes, but: As open source tools become more powerful -- and are able to be run locally -- threat actors may use them for more of their tasks, making it harder for such efforts to be detected. "This was a really interesting case where it looks like a threat actor at least mentions the use of a bunch of different models," Nimmo said, noting it's not clear what motivated the use of so many tools.

The bottom line: As AI continues to ratchet up attackers' capabilities, AI providers are having to put more effort into tracking and foiling them -- often with the help of their own tools.
OpenAI has banned multiple accounts for misusing ChatGPT in surveillance and influence campaigns, highlighting the ongoing challenge of preventing AI abuse while maintaining its benefits for legitimate users.
OpenAI, the company behind the popular AI chatbot ChatGPT, has recently taken action against users who were exploiting the platform for nefarious purposes. In a detailed report, the company revealed that it had identified and banned multiple accounts involved in surveillance and influence campaigns, primarily originating from China [1][2].
One of the most concerning discoveries was a campaign codenamed 'Peer Review'. Users, believed to be from China, were utilizing ChatGPT to develop an AI-powered social media surveillance tool. This tool was designed to monitor and report on Western protests in real time, potentially feeding information to Chinese security services [1][3].
The developers of this tool used ChatGPT to:
- Generate detailed descriptions and sales pitches for the software
- Edit and debug the tool's source code
- Research publicly available information on Western think tanks, government officials, and politicians
- Read, translate, and analyze screenshots of English-language documents, including protest announcements [3]
The surveillance tool itself was reportedly powered by Meta's Llama AI models, demonstrating how different AI technologies can be combined for potentially harmful purposes [3].
Another campaign, dubbed 'Sponsored Discontent', involved the use of ChatGPT to generate anti-American content. This included English-language comments and Spanish-language news articles, consistent with 'spamouflage' behavior. The campaign appeared to target Latin American countries, including Peru, Mexico, and Ecuador, likely aiming to spark discontent in these regions [2][4].
OpenAI's investigation revealed several other instances of ChatGPT misuse:
- Accounts likely linked to North Korea generating resumes and online profiles for fictitious job applicants, apparently aimed at fraudulently obtaining jobs at Western companies [4]
- Accounts potentially linked to Cambodia translating and generating comments for a "romance baiting" scam network operating across X, Facebook, and Instagram [5]
- An account generating social media posts and articles disparaging Cai Xia, a Chinese dissident living in the US [1]
In response to these discoveries, OpenAI has taken swift action:
- Banning all of the accounts involved in the surveillance and influence campaigns
- Publishing a threat report detailing the disrupted operations
- Sharing its findings with industry peers, such as Meta, to strengthen collective detection and enforcement [4]
This incident highlights the ongoing challenges in preventing the misuse of AI technologies. As ChatGPT's user base grows, reaching over 400 million weekly active users [4], the potential for abuse increases proportionally.
Ben Nimmo, a principal investigator on OpenAI's intelligence and investigations team, noted that this was the first time the company had caught users attempting to create an AI-based surveillance tool using ChatGPT [1]. However, such criminal activity remains the exception rather than the norm among the millions of legitimate users.
As AI tools become more sophisticated and accessible, the need for robust security measures and ethical guidelines becomes increasingly critical. OpenAI's proactive approach in identifying and addressing these issues demonstrates the company's commitment to responsible AI development and usage. However, it also underscores the ongoing cat-and-mouse game between AI developers and those seeking to exploit these powerful technologies for malicious purposes.