12 Sources
[1]
How global threat actors are weaponizing AI now, according to OpenAI
The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.

As generative AI has spread in recent years, so too have fears over the technology's misuse and abuse. Tools like ChatGPT can produce realistic text, images, video, and speech. The developers behind these systems promise productivity gains for businesses and enhanced human creativity, while many safety experts and policymakers worry about the surge of misinformation, among other dangers, that these systems enable.

OpenAI, arguably the leader in this ongoing AI race, regularly publishes a report highlighting the myriad ways in which its AI systems are being used by bad actors. "AI investigations are an evolving discipline," the company wrote in the latest version of its report, released Thursday. "Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses." (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The new report details 10 examples of abuse from the past year, four of which appear to have originated in China. For each case, OpenAI describes how it detected and addressed the problem. One of the cases with probable Chinese origins, for example, found ChatGPT accounts generating social media posts in English, Chinese, and Urdu. A "main account" would publish a post, then others would follow with comments, all designed to create an illusion of authentic human engagement and attract attention around politically charged topics. According to the report, those topics, including Taiwan and the dismantling of USAID, are "all closely aligned with China's geostrategic interests."

Another example of abuse, which according to OpenAI had direct links to China, involved using ChatGPT for nefarious cyber activities, like password "bruteforcing" (trying a huge number of AI-generated passwords in an attempt to break into online accounts) and researching publicly available records on the US military and defense industry. China's foreign ministry has denied any involvement with the activities outlined in OpenAI's report, according to Reuters. Other threatening uses of AI outlined in the new report were allegedly linked to actors in Russia, Iran, Cambodia, and elsewhere.

Text-generating models like ChatGPT are likely to be just the beginning of AI-driven misinformation. Text-to-video models, like Google's Veo 3, can increasingly generate realistic video from natural language prompts, and text-to-speech models, like ElevenLabs' new v3, can generate humanlike voices with similar ease. Though developers generally implement some kind of guardrails before deploying their models, bad actors, as OpenAI's new report makes clear, are becoming ever more creative in their misuse and abuse. The two parties are locked in a game of cat and mouse, especially as there are currently no robust federal oversight policies in place in the US.
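To make the report's "bruteforcing" example concrete: the technique only pays off against services that accept unlimited guesses. The Go sketch below is purely illustrative, not code from OpenAI's report, and the attempt cap and time window are placeholder policy values; it shows the standard countermeasure, an account-lockout throttle.

package main

import (
    "fmt"
    "time"
)

// Placeholder policy values for illustration, not real product defaults.
const (
    failWindow = 15 * time.Minute // how long a failed attempt counts
    maxFails   = 5                // failures allowed inside the window
)

// attemptLog tracks recent failed logins for one account.
type attemptLog struct {
    failures []time.Time
}

// allow reports whether another login attempt is permitted right now,
// first discarding failures that have aged out of the window.
func (a *attemptLog) allow(now time.Time) bool {
    recent := a.failures[:0]
    for _, t := range a.failures {
        if now.Sub(t) < failWindow {
            recent = append(recent, t)
        }
    }
    a.failures = recent
    return len(a.failures) < maxFails
}

// recordFailure notes one failed login attempt.
func (a *attemptLog) recordFailure(now time.Time) {
    a.failures = append(a.failures, now)
}

func main() {
    attempts := &attemptLog{}
    now := time.Now()
    for i := 1; i <= 7; i++ {
        if attempts.allow(now) {
            fmt.Println("attempt", i, "allowed")
            attempts.recordFailure(now) // pretend every guess fails
        } else {
            fmt.Println("attempt", i, "locked out")
        }
    }
}

With a cap like this in place, an AI-generated password list buys an attacker very little, since the guess budget per account stays tiny no matter how many candidate passwords a model can produce.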
[2]
ChatGPT for evil: Fake IT resumes, misinfo, and more
Fake IT workers possibly linked to North Korea, Beijing-backed cyber operatives, and Russian malware slingers are among the baddies using ChatGPT for evil, according to OpenAI's latest threat report. The AI giant said it quashed 10 operations using its chatbot to conduct social engineering and cyber snooping campaigns, generate spammy social media content, and even develop a multi-stage malware campaign targeting people and organizations around the globe. Four of the 10 campaigns were likely of Chinese origin, and OpenAI banned all of the ChatGPT accounts associated with the malicious activities.

These included accounts linked to "multiple" fake IT worker campaigns, which used the language models to craft application materials for software engineering and other remote jobs. "While we cannot determine the locations or nationalities of the threat actors, their behaviors were consistent with activity publicly attributed to IT worker schemes connected to North Korea (DPRK)," the report said [PDF]. "Some of the actors linked to these recent campaigns may have been employed as contractors by the core group of potential DPRK-linked threat actors to perform application tasks and operate hardware, including within the US."

In addition to using AI to create fake, US-based personas with fabricated employment histories (as has been previously documented by OpenAI and other researchers), some of the newer campaigns attempted to auto-generate resumes. Plus, OpenAI detected indicators of operators in Africa posing as job applicants, along with recruiting people in North America to run laptop farms, along the lines of an Arizona woman who was busted for her role in raking in millions for North Korea while allegedly scamming more than 300 US companies.

Other banned accounts originated from Russia, and the AI company's threat hunters caught them doing the usual election trolling, in this case using the chatbot to generate German-language content about the country's 2025 election. The spammers used a Telegram channel with 1,755 subscribers and an X account with more than 27,000 followers to distribute their content, in one instance xeeting: "We urgently need a 'DOGE ministry' when the AfD finally takes office," referring to the Alternative für Deutschland (AfD) party. The Telegram channel regularly reported fake news stories and commentary lifted straight from a website that the French government linked to a Russian propaganda network called "Portal Kombat."

In one of the more interesting operations, OpenAI banned a cluster of accounts operated by a Russian-speaking individual using ChatGPT to develop Windows malware dubbed ScopeCreep and to set up command-and-control infrastructure. The criminal then distributed the ScopeCreep malware via a publicly available code repository that spoofed a legitimate crosshair overlay tool (Crosshair X) for video games; checking a download's checksum against the publisher's, as sketched at the end of this piece, is the classic defense against that trick. The malware itself, developed by continually prompting ChatGPT to implement specific features, included a number of notable capabilities. It's written in Go, and uses a number of tricks to avoid being detected by anti-virus and other malware-stopping tools. After the unsuspecting gamer runs the malware, it's designed to escalate privileges; harvest browser-stored credentials, tokens, and cookies; and exfiltrate them to attacker-controlled infrastructure. Despite their successful efforts in using the LLM to help develop malware, the info-stealing campaign itself didn't get very far, we're told.
"Although this malware was likely active in the wild, with some samples appearing on VirusTotal, we did not see evidence of any widespread interest or distribution," OpenAI wrote. Perhaps unsurprisingly, nearly half of the malicious operations likely originated in China. The bulk of these used the AI models to generate a ton of social media posts and profile images across TikTok, X, Bluesky, Reddit, Facebook, and other websites. The content, written primarily in English and Chinese, with a focus on Taiwan, American tariffs and politics, and pro-Chinese Communist Party narratives, according to the report. This time around, however, Chinese government-backed operators used ChatGPT to support open-source research, script tweaking, system troubleshooting, and software development. OpenAI noted that while this activity aligned with known APT infrastructure, the models didn't provide capabilities beyond what's available through public resources. All of these now-banned accounts were associated with "multiple" unnamed PRC-backed hackers, and used infrastructure operated by Keyhole Panda (aka APT5) and Vixen Panda (aka APT15). In some of the more technical queries, the prompts "included mention of reNgine, an automated reconnaissance framework for web applications, and Selenium automation, designed to bypass login mechanisms and capture authorization tokens," the research noted. The ChatGPT interactions related to software development "included web and Android app development, and both C-language and Golang software. Infrastructure setup included configuring VPNs, software installation, Docker container deployments, and local LLM deployments such as DeepSeek." ®
[3]
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
SAN FRANCISCO, June 5 (Reuters) - OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticized U.S. President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation. A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within U.S. political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion. (Reporting by Anna Tong in San Francisco; Editing by Louise Heavens)
[4]
OpenAI Bans ChatGPT Accounts Used by Russian, Iranian and Chinese Hacker Groups
OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.

"The [Russian-speaking] actor used our models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up their command-and-control infrastructure," OpenAI said in its threat intelligence report. "The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors." The Go-based malware campaign has been codenamed ScopeCreep by the artificial intelligence (AI) company. There is no evidence that the activity was widespread in nature.

The threat actor, per OpenAI, used temporary email accounts to sign up for ChatGPT, using each of the created accounts to have one conversation to make a single incremental improvement to their malicious software. They subsequently abandoned the account and moved on to the next. This practice of using a network of accounts to fine-tune their code highlights the adversary's focus on operational security (OPSEC), OpenAI added.

The attackers then distributed the AI-assisted malware through a publicly available code repository that impersonated a legitimate video game crosshair overlay tool called Crosshair X. Users who downloaded the trojanized version of the software had their systems infected by a malware loader that would then retrieve additional payloads from an external server and execute them.

"From there, the malware was designed to initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection," OpenAI said. "The malware is designed to escalate privileges by relaunching with ShellExecuteW and attempts to evade detection by using PowerShell to programmatically exclude itself from Windows Defender, suppressing console windows, and inserting timing delays." Other tactics incorporated by ScopeCreep include Base64 encoding to obfuscate payloads, DLL side-loading techniques, and SOCKS5 proxies to conceal the operators' source IP addresses. The end goal of the malware is to harvest credentials, tokens, and cookies stored in web browsers, and exfiltrate them to the attacker. It's also capable of sending alerts to a Telegram channel operated by the threat actors when new victims are compromised.

OpenAI noted that the threat actor asked its models to debug a Go code snippet related to an HTTPS request (a benign example of such a snippet appears at the end of this piece), and also sought help with integrating the Telegram API and with using PowerShell commands via Go to modify Windows Defender settings, specifically when it comes to adding antivirus exclusions.

The second group of ChatGPT accounts disabled by OpenAI is said to be associated with two hacking groups attributed to China: APT5 (aka Bronze Fleetwood, Keyhole Panda, Manganese, and UNC2630) and APT15 (aka Flea, Nylon Typhoon, Playful Taurus, Royal APT, and Vixen Panda). One subset engaged the AI chatbot on matters related to open-source research into various entities of interest and technical topics, as well as to modify scripts or troubleshoot system configurations.
"Another subset of the threat actors appeared to be attempting to engage in development of support activities including Linux system administration, software development, and infrastructure setup," OpenAI said. "For these activities, the threat actors used our models to troubleshoot configurations, modify software, and perform research on implementation details." This consisted of asking for assistance building software packages for offline deployment and advice pertaining to configured firewalls and name servers. The threat actors engaged in both web and Android app development activities. In addition, the China-linked clusters weaponized ChatGPT to work on a brute-force script that can break into FTP servers, research about using large-language models (LLMs) to automate penetration testing, and develop code to manage a fleet of Android devices to programmatically post or like content on social media platforms like Facebook, Instagram, TikTok, and X. Some of the other observed malicious activity clusters that harnessed ChatGPT in nefarious ways are listed below - "Some of these companies operated by charging new recruits substantial joining fees, then using a portion of those funds to pay existing 'employees' just enough to maintain their engagement," OpenAI's Ben Nimmo, Albert Zhang, Sophia Farquhar, Max Murphy, and Kimo Bumanglag said. "This structure is characteristic of task scams."
[5]
Foreign propagandists continue using ChatGPT in influence campaigns
OpenAI claims it has disrupted operations across China, Russia and Iran, including Chinese propaganda and social engineering operations that created posts and comments and drove engagement at home and abroad. OpenAI said it has recently disrupted four Chinese covert influence operations that were using its tool to generate social media posts and replies on platforms including TikTok, Facebook, Reddit and X. The comments generated revolved around several topics, from US politics to a Taiwanese video game where players fight the Chinese Communist Party. ChatGPT was used to create social media posts that both supported and decried different hot-button issues to stir up misleading political discourse.

Ben Nimmo, principal investigator at OpenAI, said, "what we're seeing from China is a growing range of covert operations using a growing range of tactics." While OpenAI claimed it also disrupted a handful of operations it believes originated in Russia, Iran and North Korea, Nimmo elaborated on the Chinese operations, saying they "targeted many different countries and topics [...] some of them combined elements of influence operations, social engineering, surveillance."

This is far from the first time this has occurred. In 2023, researchers found that AI-generated content had been used in politically motivated online influence campaigns in numerous instances since 2019. In 2024, OpenAI released a report outlining its efforts to disrupt five state-affiliated operations across China, Iran and North Korea that were using OpenAI models for malicious intent. These applications included debugging code, generating scripts and creating content for use in phishing campaigns. That same year, OpenAI said it disrupted an Iranian operation that was using ChatGPT to create longform political articles about US elections that were then posted on fake news sites posing as both conservative and progressive outlets. The operation was also creating comments to post on X and Instagram through fake accounts, again espousing opposing points of view.

"We didn't generally see these operations getting more engagement because of their use of AI," Nimmo said. "For these operations, better tools don't necessarily mean better outcomes." This offers little comfort. As generative AI gets cheaper and more accessible, it stands to reason that its ability to generate content en masse will make influence campaigns like these easier and more affordable to build, even if their efficacy remains unchanged.
[6]
China covertly using ChatGPT for propaganda posts on social media
OpenAI has found evidence of China covertly using ChatGPT to create propaganda posts on social media, and to conduct digital surveillance. Reddit, TikTok, Facebook, and X were among the platforms targeted ... The AI company says it has blocked ten of these operations, four of which "likely originated in China." Other countries believed to be abusing the service in this way are Russia, Iran, and North Korea. NPR reports that the chatbot was used both to generate original posts, and to create engagement with them. In one amusing twist, the perpetrators also used ChatGPT to create their own performance reviews for the abuse of the tool. Chinese propagandists are using ChatGPT to write posts and comments on social media sites -- and also to create performance reviews detailing that work for their bosses, according to OpenAI researchers [...] "What we're seeing from China is a growing range of covert operations using a growing range of tactics," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said on a call with reporters about the company's latest threat report [...] One Chinese operation, which OpenAI dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook, and other websites, in English, Chinese, and Urdu. Subjects included the Trump administration's dismantling of the U.S. Agency for International Development -- with posts both praising and criticizing the move -- as well as criticism of a Taiwanese game in which players work to defeat the Chinese Communist Party. In many cases, the operation generated a post as well as comments replying to it. Ironically, the AI-generated performance reviews provided insights into the operations. The actors behind Sneer Review also used OpenAI's tools to do internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation," OpenAI said. "The social media behaviors we observed across the network closely mirrored the procedures described in this review." Another element involved using ChatGPT to generate emails to journalists, analysts, and politicians in an apparent intelligence-gathering operation.
[7]
OpenAI takes down covert operations tied to China and other countries
[Photo: OpenAI CEO Sam Altman speaks during a conference in San Francisco this week. The company said it has recently taken down 10 influence operations that were using its generative AI tools; four were likely run by the Chinese government. Credit: Justin Sullivan/Getty Images]

Chinese propagandists are using ChatGPT to write posts and comments on social media sites -- and also to create performance reviews detailing that work for their bosses, according to OpenAI researchers. The use of the company's artificial intelligence chatbot to create internal documents, as well as by another Chinese operation to create marketing materials promoting its work, comes as China is ramping up its efforts to influence opinion and conduct surveillance online.

"What we're seeing from China is a growing range of covert operations using a growing range of tactics," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said on a call with reporters about the company's latest threat report. In the last three months, OpenAI says it disrupted 10 operations using its AI tools in malicious ways, and banned accounts connected to them. Four of the operations likely originated in China, the company said. The China-linked operations "targeted many different countries and topics, even including a strategy game. Some of them combined elements of influence operations, social engineering, surveillance. And they did work across multiple different platforms and websites," Nimmo said.

One Chinese operation, which OpenAI dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook, and other websites, in English, Chinese, and Urdu. Subjects included the Trump administration's dismantling of the U.S. Agency for International Development -- with posts both praising and criticizing the move -- as well as criticism of a Taiwanese game in which players work to defeat the Chinese Communist Party. In many cases, the operation generated a post as well as comments replying to it, behavior OpenAI's report said "appeared designed to create a false impression of organic engagement." The operation used ChatGPT to generate critical comments about the game, and then to write a long-form article claiming the game received widespread backlash. The actors behind Sneer Review also used OpenAI's tools to do internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation," OpenAI said. "The social media behaviors we observed across the network closely mirrored the procedures described in this review."

Another operation OpenAI tied to China focused on collecting intelligence by posing as journalists and geopolitical analysts. It used ChatGPT to write posts and biographies for accounts on X, to translate emails and messages from Chinese to English, and to analyze data. That included "correspondence addressed to a US Senator regarding the nomination of an Administration official," OpenAI said, but added that it was not able to independently confirm whether the correspondence was sent. "They also used our models to generate what looked like marketing materials," Nimmo said. In those, the operation claimed it conducted "fake social media campaigns and social engineering designed to recruit intelligence sources," which lined up with its online activity, OpenAI said in its report.
In its previous threat report in February, OpenAI identified a surveillance operation linked to China that claimed to monitor social media "to feed real-time reports about protests in the West to the Chinese security services." The operation used OpenAI's tools to debug code and write descriptions that could be used in sales pitches for the social media monitoring tool.

In its new report published on Thursday, OpenAI said it had also disrupted covert influence operations likely originating in Russia and Iran, a spam operation attributed to a commercial marketing company in the Philippines, a recruitment scam linked to Cambodia, and a deceptive employment campaign bearing the hallmarks of operations connected to North Korea. "It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together," Nimmo said. However, he said the operations were largely disrupted in their early stages and didn't reach large audiences of real people. "We didn't generally see these operations getting more engagement because of their use of AI," Nimmo said. "For these operations, better tools don't necessarily mean better outcomes."
[8]
OpenAI says it disrupted at least 10 malicious AI campaigns already this year
Russia, China, and Iran are using ChatGPT to translate and generate content.

OpenAI has revealed it has taken down a number of malicious campaigns using its AI offerings, including ChatGPT. In a report titled "Disrupting malicious uses of AI: June 2025," OpenAI lays out how it dismantled or disrupted 10 employment scams, influence operations, and spam campaigns using ChatGPT in the first few months of 2025 alone. Many of the campaigns were conducted by state-sponsored actors with links to China, Russia and Iran.

Four of the campaigns disrupted by OpenAI appear to have originated in China, with a focus on social engineering, covert influence operations, and cyber threats. One campaign, dubbed "Sneer Review" by OpenAI, saw the Taiwanese game "Reversed Front," in which players resist the Chinese Communist Party, spammed with highly critical Chinese-language comments. The network behind the campaign then generated an article and posted it on a forum, claiming that the game had received widespread backlash based on the critical comments, in an effort to discredit both the game and Taiwanese independence.

Another campaign, named "Helgoland Bite," saw Russian actors using ChatGPT to generate German-language text criticizing the US and NATO, and to generate content about Germany's 2025 election. Most notably, the group also used ChatGPT to seek out opposition activists and bloggers, as well as to generate messages that referenced coordinated social media posts and payments.

OpenAI has also banned numerous ChatGPT accounts linked to a US-focused influence operation known as "Uncle Spam." In many cases, Chinese actors generated highly divisive content aimed at widening the political divide in the US, including social media accounts that posted arguments both for and against tariffs, as well as accounts that mimicked US veteran support pages.

OpenAI's report is a key reminder that not everything you see online is posted by an actual human being, and that the person you've picked an online fight with could be getting exactly what they want: engagement, outrage, and division.
[9]
OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes
[10]
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
The release of ChatGPT in 2022 raised concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. In a report released Thursday, OpenAI said that an increasing number of Chinese groups are using its technology for covert, malicious operations.
[11]
OpenAI Reveals China Covertly Used ChatGPT To Spread Propaganda, Manipulate Social Media Engagement, And Target Journalists And Politicians In A Coordinated AI-Powered Influence Campaign
OpenAI transformed how AI is used by introducing ChatGPT, and its services are now integrated into domains from healthcare to education. But many are misusing the tool, or so the company claims. OpenAI says it has identified and shut down several covert campaigns, believed to be linked to China, that aimed to influence public opinion. As AI grows more capable and more widely deployed, concern in the tech community about its responsible use has grown with it, putting added responsibility on the company to support ethical AI development. Even with vigorous efforts, the technology is often abused and turned to wrongful purposes.

OpenAI recently shared that it has traced and shut down several social media campaigns that misused ChatGPT and were linked to propaganda believed to originate from China. Ten operations were shut down in total, four of which were linked to China. The operations used ChatGPT to create politically charged posts and to engage across several online platforms under fake identities in order to steer public opinion. A campaign called Uncle Spam generated controversial content around sensitive topics in the U.S. The company suggests this is not the first time its technology has been wrongfully used; other countries, including Russia, Iran, and North Korea, are also claimed to be involved in such practices.

As per an NPR report, the misuse went a step further: not only were propaganda posts created, they were also boosted to appear more genuine and more widely seen, raising concern about the influence they might have had on the community. The tool was even used to create performance reviews documenting how the chatbot was used for these activities. The report states:

Chinese propagandists are using ChatGPT to write posts and comments on social media sites -- and also to create performance reviews detailing that work for their bosses.

Ben Nimmo, the principal investigator on OpenAI's intelligence and investigations team, also commented on the threat the company is facing:

What we're seeing from China is a growing range of covert operations using a growing range of tactics.

Another layer of the operation involved using ChatGPT to craft emails directed at analysts, journalists, and politicians in an attempt to build relationships under false pretenses and extract information. The use of AI tools for strategic manipulation in violation of ethical boundaries is concerning, and companies should stay vigilant and take immediate action against it.
[12]
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
OpenAI's latest threat report reveals how state-backed actors and cybercriminals from China, Russia, and other countries are exploiting AI tools like ChatGPT for malicious purposes, including cyber espionage, malware development, and disinformation campaigns.
OpenAI, a leading artificial intelligence company, has released its latest threat intelligence report, revealing an alarming trend of AI misuse by state-backed actors and cybercriminals worldwide. The report details how tools like ChatGPT are being exploited for various malicious purposes, including cyber espionage, malware development, and disinformation campaigns [1].
The report identified ten significant abuse cases over the past year, four of which likely originated in China. These operations ranged from generating social media posts in multiple languages to create an illusion of authentic engagement on politically charged topics, to more sophisticated cyber activities [1].

One notable Chinese operation involved using ChatGPT for nefarious cyber activities, such as password "bruteforcing" and researching publicly available records on the US military and defense industry [1]. Another operation generated polarized social media content supporting both sides of divisive topics within US political discourse, complete with AI-generated profile images [3].

The report also highlighted AI misuse by actors from Russia, Iran, and other countries. A Russian-speaking individual was found using ChatGPT to develop Windows malware dubbed "ScopeCreep" and set up command-and-control infrastructure [2]. This malware, distributed via a spoofed video game tool repository, was designed to escalate privileges, harvest browser-stored credentials, and exfiltrate sensitive data [2].
The threat actors demonstrated sophisticated tactics and a focus on operational security. For instance, the Russian-speaking actor used temporary email accounts to create multiple ChatGPT accounts, each used for a single conversation to make one incremental improvement to their malicious software [4].

The ScopeCreep malware, developed with ChatGPT's assistance, incorporated various evasion techniques, including Base64 encoding for payload obfuscation, DLL side-loading, and the use of SOCKS5 proxies to conceal source IP addresses [4].

Chinese groups were found using ChatGPT to generate social media posts and replies on platforms including TikTok, Facebook, Reddit, and X. These posts covered a wide range of topics, from US politics to criticism of a Taiwan-centric video game, often supporting opposing viewpoints to stir up misleading political discourse [5].
OpenAI has taken steps to disrupt these operations by banning associated ChatGPT accounts. However, the company acknowledges that AI investigations are an evolving discipline, and each disrupted operation provides insight into how threat actors are attempting to abuse AI models [1].

As generative AI becomes more accessible and affordable, there are concerns that influence campaigns and cyber operations could become easier and cheaper to execute, even if their efficacy remains unchanged [5]. This underscores the ongoing challenge of balancing AI's potential benefits with the need for robust security measures and ethical guidelines in its development and deployment.