Curated by THEOUTPOST
On Sat, 17 Aug, 12:02 AM UTC
19 Sources
[1]
OpenAI says Iranian hackers used ChatGPT AI chatbot to influence US Presidential elections, here's how - Times of India
[2]
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election
OpenAI said on Friday it had taken down accounts of an Iranian group for using its ChatGPT chatbot to generate content meant to influence the U.S. presidential election and other issues. The operation, identified as Storm-2035, used ChatGPT to generate content on topics such as commentary on the candidates on both sides in the U.S. election, the conflict in Gaza and Israel's presence at the Olympic Games, and then shared it via social media accounts and websites. An investigation by the Microsoft-backed AI company showed that ChatGPT was used to generate long-form articles and shorter social media comments. OpenAI said the operation did not appear to have achieved meaningful audience engagement. A majority of the identified social media posts received few or no likes, shares or comments, and the company saw no indications of the web articles being shared across social media. The accounts have been banned from using OpenAI's services, and the company continues to monitor for any further attempts to violate its policies, it said. Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging U.S. voter groups on opposing ends of the political spectrum. The engagement was being built with "polarizing messaging on issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict," the report stated. Democratic candidate Kamala Harris and Republican rival Donald Trump are locked in a tight race ahead of the Nov. 5 presidential election. The AI firm said in May it had disrupted five covert influence operations that sought to use its models for "deceptive activity" across the internet.
[3]
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election
[4]
Iranian group used ChatGPT to try to influence US election, OpenAI says
[5]
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election
[6]
How an Iranian group used ChatGPT to attempt to influence U.S. presidential election
The story so far: OpenAI on Friday (August 16, 2024) said it banned ChatGPT accounts linked to an Iranian influence operation that used the chatbot to generate content to influence the U.S. presidential election. The Microsoft-backed company said it identified and took down a "cluster of ChatGPT accounts" and that it was monitoring the situation.
What is Storm-2035?
The operation, tracked under the moniker Storm-2035, was made up of four websites that acted as news organisations. These news sites exploited issues like LGBTQ rights and the Israel-Hamas conflict to target U.S. voters. The sites also used AI tools to plagiarise stories and capture web traffic, per a Microsoft Threat Analysis Center (MTAC) report issued on August 9. Sites named in reporting included EvenPolitics, Nio Thinker, Westland Sun, Teorator, and Savannah Time. The operation allegedly targeted both liberal and conservative voters in the U.S.
How did the group use ChatGPT?
According to OpenAI, the operatives used ChatGPT to create long-form articles and social media comments that were then posted by several X and Instagram accounts. AI chatbots such as ChatGPT can potentially help foreign operatives fool gullible internet users by mimicking American users' language patterns, rehashing existing comments or propaganda, and cutting down the time it takes to create and circulate plagiarised content meant to sway voters. Apart from the upcoming U.S. presidential election, the Storm-2035 operation covered world issues such as Venezuelan politics, the rights of Latinx communities in the U.S., the destruction in Palestine, Scottish independence, and Israel taking part in the Olympic Games. The network also exploited popular topics like fashion and beauty.
OpenAI shared screenshots of some of the news stories and social media posts it attributed to the operation; one article claimed that X was censoring former president Donald Trump's tweets, while separate social media posts asked users to "dump" Trump or Vice President Kamala Harris.
How severe is the impact of Storm-2035?
OpenAI has downplayed the severity of the incident, claiming that audiences did not engage much with the uploaded content on social media. Using the Brookings Breakout Scale, which measures the impact of covert operations on a scale from 1 (lowest) to 6 (highest), the report placed this operation at the low end of Category 2, meaning it was posted on multiple platforms but there was no evidence that real people picked up or widely shared its content. However, OpenAI stressed it had shared the threat information with "government, campaign, and industry stakeholders." While OpenAI presented the discovery and disruption of the Iran-linked influence operation as a positive development, the use of generative AI tools by foreign operatives against U.S. voters is an urgent issue that highlights multiple points of failure across OpenAI, X, Instagram, and the search engines ranking the sites.
Were there other similar issues OpenAI faced in the past?
In May, the AI firm posted a report revealing it had been working for over three months to dismantle covert influence operations that used its tools for generating comments on social media, articles in multiple languages, and fake names and bios for social media accounts, and for translating or proofreading text. A Russian outfit that OpenAI called 'Bad Grammar' used Telegram to target Ukraine, Moldova, the Baltic States and the U.S.
Separately, another Russia-based operation titled 'Doppelganger,' an Israeli operation that OpenAI nicknamed 'Zero Zeno,' a Chinese network called 'Spamouflage,' and an Iranian group called the 'International Union of Virtual Media,' or IUVM, used ChatGPT to write comments on social media platforms like X and 9GAG, and to post articles and news stories. The investigation found that the content covered issues like Russia's invasion of Ukraine, the Gaza conflict, Indian and European elections, and criticism of the Chinese government by Chinese dissidents or foreign governments. Besides hunting down influence networks, OpenAI also found incidents of state-backed threat actors abusing AI to attack their adversaries. Other serious cases exposing OpenAI's vulnerabilities followed. In July, the Microsoft-backed firm revealed that early last year a hacker had gained access to OpenAI's internal messaging systems and stolen information related to the company's AI technologies. While the hacker was found to be an individual, the incident raised alarms that Chinese adversaries could do the same.
What is OpenAI doing to safeguard its tech?
While studying these cases, OpenAI found that its AI tools refused to generate text or images for some prompts thanks to the safeguards already built into them. The firm also developed AI-powered security tools to detect threat actors within days instead of weeks. While not explicitly discussed by OpenAI, the AI company has become enmeshed with prominent figures from U.S. federal agencies and government bodies. In June, OpenAI picked cybersecurity expert and retired U.S. Army General Paul M. Nakasone to join its Board of Directors. Nakasone led the U.S. National Security Agency and has served with cyber units in the U.S., Korea, Iraq, and Afghanistan. A couple of weeks ago, the firm also announced it will be teaming up with the U.S. AI Safety Institute, so that its next big foundation model, GPT-5, can be previewed and tested by the institute.
[7]
OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election. "This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035," OpenAI said. "The operation used ChatGPT to generate content focused on a number of topics -- including commentary on candidates on both sides in the U.S. presidential election -- which it then shared via social media accounts and websites." The artificial intelligence (AI) company said the content did not achieve any meaningful engagement, with a majority of the social media posts receiving negligible to no likes, shares, and comments. It further noted it had found little evidence that the long-form articles created using ChatGPT were shared on social media platforms. The articles catered to U.S. politics and global events, and were published on five different websites that posed as progressive and conservative news outlets, indicating an attempt to target people on opposite sides of the political spectrum. OpenAI said its ChatGPT tool was used to create comments in English and Spanish, which were then posted on a dozen accounts on X and one on Instagram. Some of these comments were generated by asking its AI models to rewrite comments posted by other social media users. "The operation generated content about several topics: mainly, the conflict in Gaza, Israel's presence at the Olympic Games, and the U.S. presidential election -- and to a lesser extent politics in Venezuela, the rights of Latinx communities in the U.S. (both in Spanish and English), and Scottish independence," OpenAI said. "They interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following."
Storm-2035 was also one of the threat activity clusters highlighted last week by Microsoft, which described it as an Iranian network "actively engaging U.S. voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict." Some of the phony news and commentary sites set up by the group include EvenPolitics, Nio Thinker, Savannah Time, Teorator, and Westland Sun. These sites have also been observed utilizing AI-enabled services to plagiarize a fraction of their content from U.S. publications. The group is said to have been operational since 2020. Microsoft has further warned of an uptick in foreign malign influence activity targeting the U.S. election over the past six months from both Iranian and Russian networks, the latter of which have been traced back to clusters tracked as Ruza Flood (aka Doppelganger), Storm-1516, and Storm-1841 (aka Rybar). "Doppelganger spreads and amplifies fabricated, fake or even legitimate information across social networks," French cybersecurity company HarfangLab said. "To do so, social networks accounts post links that initiate an obfuscated chain of redirections leading to final content websites." However, indications are that the propaganda network is shifting its tactics in response to aggressive enforcement, increasingly using non-political posts and ads and spoofing non-political and entertainment news outlets like Cosmopolitan, The New Yorker and Entertainment Weekly in an attempt to evade detection, per Meta. The posts contain links that, when tapped, redirect users to a Russia war- or geopolitics-related article on one of the counterfeit domains mimicking entertainment or health publications. The ads are created using compromised accounts.
The social media company, which has disrupted 39 influence operations from Russia, 30 from Iran, and 11 from China since 2017 across its platforms, said it uncovered six new networks from Russia (4), Vietnam (1), and the U.S. (1) in the second quarter of 2024. "Since May, Doppelganger resumed its attempts at sharing links to its domains, but at a much lower rate," Meta said. "We've also seen them experiment with multiple redirect hops including TinyURL's link-shortening service to hide the final destination behind the links and deceive both Meta and our users in an attempt to avoid detection and lead people to their off-platform websites." The development comes as Google's Threat Analysis Group (TAG) also said this week that it had detected and disrupted Iranian-backed spear-phishing efforts aimed at compromising the personal accounts of high-profile users in Israel and the U.S., including those associated with the U.S. presidential campaigns. The activity has been attributed to a threat actor codenamed APT42, a state-sponsored hacking crew affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC). It's known to share overlaps with another intrusion set known as Charming Kitten (aka Mint Sandstorm). "APT42 uses a variety of different tactics as part of their email phishing campaigns -- including hosting malware, phishing pages, and malicious redirects," the tech giant said. "They generally try to abuse services like Google (i.e. Sites, Drive, Gmail, and others), Dropbox, OneDrive and others for these purposes." The broad strategy is to gain the trust of their targets using sophisticated social engineering techniques with the goal of getting them off their email and into instant messaging channels like Signal, Telegram, or WhatsApp, before pushing bogus links that are designed to collect their login information. 
The phishing attacks are characterized by the use of tools like GCollection (aka LCollection or YCollection) and DWP to gather credentials from Google, Hotmail, and Yahoo users, Google noted, highlighting APT42's "strong understanding of the email providers they target." "Once APT42 gains access to an account, they often add additional mechanisms of access including changing recovery email addresses and making use of features that allow applications that do not support multi-factor authentication like application-specific passwords in Gmail and third-party app passwords in Yahoo," it added.
[8]
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election
[9]
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election
[10]
OpenAI Blocks Iranian Group's ChatGPT Accounts for Targeting US Election
[11]
OpenAI says Iranian group used ChatGPT to try to influence U.S. election
SAN FRANCISCO -- Artificial intelligence company OpenAI said Friday that an Iranian group had used its ChatGPT chatbot to generate content to be posted on websites and social media with the aim of stirring up polarization among American voters in the presidential election. The sites and social media accounts OpenAI discovered posted articles and opinions made with help from ChatGPT on topics ranging from the conflict in Gaza to the Olympic Games. They also posted material about the U.S. presidential election, spreading misinformation and writing critically about both candidates, a company report said. Some appeared on sites Microsoft said last week were used by Iran to post fake news articles intended to amp up political division in the United States, OpenAI said. The AI company banned the ChatGPT accounts associated with the Iranian efforts and said their posts had not gained widespread attention from social media users. OpenAI found "a dozen" accounts on X and one on Instagram that it linked to the Iranian operation and said all were taken down after it notified those social media companies. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target. "Even though it doesn't seem to have reached people, it's an important reminder, we all need to stay alert but stay calm," he said. The OpenAI report adds to recent evidence of tech-centric Iranian attempts to influence the U.S. election, detailed in reports from Microsoft and Google. One website flagged Friday by OpenAI, Teorator, bills itself as "your ultimate destination for uncovering the truths they don't want you to know," and posted articles critical of Democratic vice-presidential candidate Tim Walz.
Another site, called EvenPolitics, posted articles critical of Republican candidate Donald Trump and other conservative figures such as Elon Musk. In May, OpenAI first detailed attempts by government actors to use its AI to create propaganda, saying it detected groups from Iran, Russia, China and Israel using ChatGPT to create content in multiple languages. None of those influence operations got widespread traction with internet users, Nimmo said at the time. OpenAI has also acknowledged that it may have failed to detect stealthier operations using its technology. As billions of people vote in elections around the world this year, democracy advocates, politicians and AI researchers have raised concerns that AI could make it easier to generate large amounts of propaganda that appears to be written by real people. So far, authorities have not reported widespread evidence that foreign governments are succeeding in influencing Americans to vote a certain way.
[12]
OpenAI blocks Iranian group from ChatGPT, says it targeted US election
[13]
OpenAI says it disrupted Iranian influence operation using ChatGPT
OpenAI said Friday that it disrupted an Iranian influence operation that was using ChatGPT to generate content related to the U.S. presidential election and other topics. The network known as Storm-2035 used the company's chatbot, powered by artificial intelligence (AI), to create content, including "commentary on candidates on both sides in the U.S. presidential election," that was then shared on social media. OpenAI has banned the accounts from using its services. It emphasized that the operation "does not appear to have achieved meaningful audience engagement," with the identified social media posts receiving few or no likes, shares or comments. "Notwithstanding the lack of meaningful audience engagement resulting from this operation, we take seriously any efforts to use our services in foreign influence operations," OpenAI wrote in a blog post. "Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders," it added. The operation used ChatGPT to generate long-form articles that were published to five websites posing as progressive or conservative news outlets, as well as to write short social media comments in English and Spanish from accounts on X and Instagram posing as both progressives and conservatives. The content mainly focused on the conflict in Gaza, Israel's presence at the Olympic Games and the U.S. presidential election, although some focused on Venezuelan politics, Latinx rights in the U.S. and Scottish independence, according to OpenAI. The accounts used by the operation interspersed this content with "comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following," the AI startup noted. 
OpenAI's disruption of this Iranian influence operation comes after former President Trump's campaign said last weekend that some of its internal communications were hacked by "foreign sources hostile to the United States." Trump's campaign pointed to a report from Microsoft on Iran's influence operations targeting the 2024 election, which revealed that Iranian hackers "broke into the account of a 'high ranking official'" on a presidential campaign in June. The same report also featured information on Storm-2035, noting that it was "masquerading as news outlets" and "actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict." OpenAI noted in Friday's blog post that it "benefited from information about the operation published by Microsoft last week."
[14]
OpenAI Blocks Iranian Group's ChatGPT Accounts For Targeting US Election, Says Covert Operation Did Not Achieve 'Meaningful Audience Engagement'
OpenAI has uncovered and dismantled a covert Iranian influence operation leveraging ChatGPT, its second disclosure of adversarial use of its AI models since May. What Happened: According to a post by Microsoft Corp.-backed OpenAI, the operation, named Storm-2035, was generating content using ChatGPT accounts to manipulate public opinion during the 2024 elections. The accounts involved have been banned from using OpenAI's services. The operation produced long-form articles and short social media comments on various topics, including U.S. politics, global events, and the U.S. presidential election. The content was shared via social media and websites posing as news outlets. Despite the operation's efforts, it did not achieve significant audience engagement. Most social media posts received minimal interaction, and web articles were not widely shared, the AI startup said. OpenAI's investigation was aided by information from Microsoft. "The operation generated content about several topics: mainly, the conflict in Gaza, Israel's presence at the Olympic Games, and the U.S. presidential election," the company noted. OpenAI has shared threat intelligence with government, campaign, and industry stakeholders to support the wider community in disrupting such activities. Why It Matters: This incident is part of a broader trend of using AI tools for disinformation campaigns. In May, OpenAI revealed that its AI models were being exploited by networks associated with Russia, China, Iran, and Israel to spread disinformation globally. The company disclosed that five covert influence operations had utilized its AI models to generate misleading text and images. In June, OpenAI announced plans to restrict access to its tools in China amid rising tensions and pressure from the U.S. government to curb China's access to advanced AI technology.
Despite these restrictions, developers in China have been using OpenAI's tools via virtual private networks and other means. Concerns about election security have been heightened following a cyberattack on Donald Trump's presidential campaign. Former White House officials have warned about potential future cyberattacks, suggesting that someone might be running the 2016 playbook again. On Thursday, Alphabet Inc.'s Google confirmed that Iranian hackers linked to the Revolutionary Guard targeted the personal email accounts of individuals associated with the U.S. presidential campaigns of President Joe Biden and former President Trump. These attacks, which began in May, have been aimed at current and former government officials, as well as campaign affiliates.
[15]
OpenAI shuts down election influence operation using ChatGPT | TechCrunch
OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, according to a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it doesn't seem that it reached much of an audience. This is not the first time OpenAI has banned accounts linked to state-affiliated actors using ChatGPT maliciously. In May the company disrupted five campaigns using ChatGPT to manipulate public opinion. These episodes are reminiscent of state actors using social media platforms, like Facebook and Twitter, to attempt to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation. Similar to social media companies, OpenAI seems to be adopting a whack-a-mole approach, banning accounts associated with these efforts as they come up. OpenAI says its investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group (which it calls Storm-2035) as part of a broader campaign to influence US elections operating since 2020. Microsoft said Storm-2035 is an Iranian network with multiple sites imitating news outlets and "actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict." The playbook, as it has proven to be in other operations, is not necessarily to promote one policy or another but to sow dissent and conflict. OpenAI identified five website fronts for Storm-2035, presenting as both progressive and conservative news outlets with convincing domain names like "evenpolitics.com." 
The group used ChatGPT to draft several long-form articles, including one alleging that "X censors Trump's tweets," which Elon Musk's platform certainly has not done (if anything, Musk is encouraging former President Donald Trump to engage more on X). On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One of these tweets falsely, and confusingly, alleged that Kamala Harris attributes "increased immigration costs" to climate change, followed by "#DumpKamala." OpenAI says it did not see evidence Storm-2035's articles were shared widely, and noted a majority of its social media posts received few to no likes, shares, or comments. This is often the case with these operations, which are quick and cheap to spin up using AI tools like ChatGPT. Expect to see many more notices like this as the election approaches and partisan bickering online intensifies.
[16]
Breaking: OpenAI Cracks Down On Iranian Influence Operation Targeting US Presidential Election
The operation targeted both progressive and conservative audiences with minimal engagement. OpenAI, the firm behind ChatGPT, said it has begun cracking down on Iran-linked accounts that were using its platform to spread content about the U.S. presidential election. The development has gained notable traction, especially after Donald Trump's election campaign claimed it had suffered a security breach by Iranian hackers. OpenAI said the Iran-linked accounts were using its generative AI technologies to spread misinformation. In a recent announcement, the company said it is battling foreign influence operations that misuse its AI technologies, and that it has identified and banned a cluster of accounts tied to the Iranian influence operation "Storm-2035". According to the report, the accounts used ChatGPT to generate and spread misinformation on various topics, most notably the U.S. presidential election and the campaigns. The company said the operation produced both long-form articles and short social media comments, which were shared across different platforms.
[17]
OpenAI shuts down Iranian influence operation targeting US election
NEW YORK - OpenAI removed a network of Iranian accounts that used its ChatGPT chatbot to try to wage a foreign influence campaign targeting the U.S. presidential election by generating long-form articles and social media comments, the company said Friday. The accounts created content that appeared to be from liberal and conservative-leaning users, including posts suggesting that former President Donald Trump was being censored on social media and was prepared to declare himself king of the U.S. Another described Vice President Kamala Harris' selection of Tim Walz as her running mate as a "calculated choice for unity." The influence campaign, which also included posts about Israel's war on Gaza, the Olympic Games in Paris and fashion and beauty subjects, doesn't appear to have received significant audience engagement, said Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, in a news briefing Friday. "The operation tried to play both sides but it didn't look like it got engagement from either," he said. The Iranian operation marks the latest suspicious social media effort that used AI only to fail to get much traction, a possible indication that foreign operatives are still figuring out how to capitalize on a new crop of artificial intelligence tools that can quickly spit out convincing writing and images for little to no cost. Microsoft Corp. in June said it had detected pro-Russian accounts trying to amplify a fabricated video showing violence at the Olympics. And Meta Platforms Inc. earlier this year said it had removed hundreds of Facebook accounts associated with influence operations from Iran, China and Russia, some of which relied on AI tools to spread disinformation. OpenAI on Friday didn't specify the exact number of accounts it removed. The startup said it also identified a dozen accounts on X, formerly Twitter, and one on Instagram involved in the effort.
Instagram removed the account in question, which generally was focused on Scotland and posted about food. X did not immediately respond to a request for comment. The disclosure comes after suspected Iranian hackers compromised Trump's political campaign, sparking a federal investigation into possible foreign meddling ahead of the U.S. elections in November. The U.S. intelligence community has consistently warned about foreign governments trying to shape Americans' opinions. The Office of the Director of National Intelligence in July said that Iran, Russia and China were recruiting people in the U.S. to try to spread their propaganda. OpenAI in May said that networks from Russia, China, Iran and Israel had tried using the company's AI products to boost their propaganda efforts. At the time, OpenAI said the networks it disrupted had used AI to generate text and images in a larger volume than otherwise would have been possible by human creators, helping the content appear more authentic. However, the campaigns still failed to generate significantly more engagement, according to the startup.
[18]
ChatGPT bans multiple accounts linked to Iranian operation creating false news reports
OpenAI deactivated several ChatGPT accounts that were using the artificial intelligence chatbot to spread disinformation as part of an Iranian influence operation, the company reported Friday. The covert operation, called Storm-2035, generated content on a variety of topics, including the U.S. presidential election. However, the accounts were banned before the content garnered a large audience. The operation also generated misleading content on "the conflict in Gaza, Israel's presence at the Olympic Games" as well as "politics in Venezuela, the rights of Latinx communities in the U.S. (both in Spanish and English), and Scottish independence." The scheme also included some fashion and beauty content, possibly in an attempt to seem authentic or build a following, OpenAI added. "We take seriously any efforts to use our services in foreign influence operations. Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders," the company said. The company said it found no evidence that real people interacted with or widely shared the content generated by the operation. Most of the identified social posts received few or no likes, shares or comments, the news release said. Company officials also found no evidence of the web articles being shared on social media. The disinformation campaign registered on the low end of the Breakout Scale, which measures the impact of influence operations on a scale of 1 to 6; the Iranian operation scored a Category 2. The company said it condemns attempts to "manipulate public opinion or influence political outcomes while hiding the true identity or intentions of the actors behind them." The company will use its AI technology to better detect and understand abuse.
"OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale by partnering with industry, civil society, and government, and by harnessing the power of generative AI to be a force multiplier in our work. We will continue to publish findings like these to promote information-sharing and best practices," the company said. Earlier this year, the company reported similar foreign influence efforts using its AI tools based in Russia, China, Iran and Israel but those attempts also failed to reach a significant audience.
[19]
OpenAI Says It Disrupted an Iranian Misinformation Campaign
OpenAI said on Friday that it had discovered and disrupted an Iranian influence campaign that used the company's generative artificial intelligence technologies to spread misinformation online, including content related to the U.S. presidential election. The San Francisco A.I. company said it had banned several accounts linked to the campaign from its online services. The Iranian effort, OpenAI added, did not seem to reach a sizable audience. "The operation doesn't appear to have benefited from meaningfully increased audience engagement because of the use of A.I.," said Ben Nimmo, a principal investigator for OpenAI who has spent years tracking covert influence campaigns from positions at companies including OpenAI and Meta. "We did not see signs that it was getting substantial engagement from real people at all." The popularity of generative A.I. like OpenAI's online chatbot, ChatGPT, has raised questions about how such technologies might contribute to online disinformation, especially in a year when there are major elections across the globe. In May, OpenAI released a first-of-its-kind report showing that it had identified and disrupted five other online campaigns that used its technologies to deceptively manipulate public opinion and influence geopolitics. Those efforts were run by state actors and private companies in Russia, China and Israel as well as Iran. These covert operations used OpenAI's technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts. This week, OpenAI identified several ChatGPT accounts that were using its chatbot to generate text and images for a covert Iranian campaign that the company called Storm-2035. The company said the campaign had used ChatGPT to generate content related to a variety of topics, including commentary on candidates in the U.S. presidential election. 
In some cases, the commentary seemed progressive. In other cases, it seemed conservative. It also dealt with hot-button topics ranging from the war in Gaza to Scottish independence. The campaign, OpenAI said, used its technologies to generate articles and shorter comments posted on websites and on social media. In some cases, the campaign used ChatGPT to rewrite comments posted by other social media users. OpenAI added that a majority of the campaign's social media posts had received few or no likes, shares or comments, and that it had found little evidence that web articles produced by the campaigns were shared across social media. (The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)
OpenAI has taken action against Iranian hacker groups using ChatGPT to influence the US presidential elections. The company has blocked several accounts and is working to prevent further misuse of its AI technology.
OpenAI, the company behind the popular AI chatbot ChatGPT, has recently uncovered and thwarted attempts by Iranian state-sponsored groups to manipulate the upcoming US presidential elections. The company has taken swift action by blocking several accounts associated with these groups, demonstrating the growing concern over the potential misuse of AI technologies in political interference.
The Iranian operators reportedly used ChatGPT to create content aimed at swaying public opinion and spreading disinformation related to the US elections. Their tactics included generating politically charged messages, creating fake social media profiles, and crafting persuasive arguments to influence voters. This approach highlights the evolving landscape of cyber threats and the potential for AI tools to be weaponized for political purposes.
In response to this threat, OpenAI has not only blocked the identified accounts but also implemented enhanced monitoring to detect and prevent similar misuse in the future. The company is working closely with cybersecurity experts and government agencies to strengthen its defenses against such attacks. OpenAI has emphasized its commitment to ensuring that its AI technologies are not exploited for malicious purposes, particularly in sensitive contexts like elections.
This incident has sparked a broader discussion about the role of AI in election security and the need for robust safeguards. Experts warn that as AI technologies become more advanced and accessible, the potential for their misuse in influencing democratic processes grows. Governments and tech companies are now facing increased pressure to develop comprehensive strategies to counter AI-enabled election interference.
The revelation of Iranian involvement in attempting to manipulate US elections through AI has further strained diplomatic relations between the two countries. US officials have condemned these actions, viewing them as a direct attack on democratic processes. The incident has also led to calls for international cooperation in establishing norms and regulations for the use of AI in political contexts.
As the 2024 US presidential election approaches, this incident serves as a wake-up call for policymakers, tech companies, and voters alike. It underscores the urgent need for robust AI governance frameworks and heightened vigilance against sophisticated cyber threats. The challenge lies in balancing the innovative potential of AI technologies with the imperative of safeguarding democratic institutions and processes from malicious interference.