Curated by THEOUTPOST
On Thu, 10 Oct, 12:02 AM UTC
15 Sources
[1]
OpenAI confirms threat actors use ChatGPT to write malware
OpenAI has disrupted more than 20 malicious cyber operations that abused its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks. The report, which covers operations since the beginning of the year, constitutes the first official confirmation that mainstream generative AI tools are being used to enhance offensive cyber operations.

The first signs of such activity were reported by Proofpoint in April, which suspected TA547 (aka "Scully Spider") of deploying an AI-written PowerShell loader for its final payload, the Rhadamanthys info-stealer. Last month, HP Wolf researchers reported with high confidence that cybercriminals targeting French users were employing AI tools to write scripts used as part of a multi-step infection chain. The latest report by OpenAI confirms the abuse of ChatGPT, presenting cases of Chinese and Iranian threat actors leveraging it to enhance the effectiveness of their operations.

The first threat actor outlined by OpenAI is 'SweetSpecter,' a Chinese adversary first documented by Cisco Talos analysts in November 2023 as a cyber-espionage group targeting Asian governments. OpenAI reports that SweetSpecter targeted it directly, sending spear-phishing emails with malicious ZIP attachments masked as support requests to the personal email addresses of OpenAI employees. If opened, the attachments triggered an infection chain that dropped the SugarGh0st RAT on the victim's system. Upon further investigation, OpenAI found that SweetSpecter was using a cluster of ChatGPT accounts to perform scripting and vulnerability-analysis research; the report enumerates the specific requests those accounts made.

The second case concerns 'CyberAv3ngers,' a threat group affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC) and known for targeting industrial systems at critical infrastructure sites in Western countries. OpenAI reports that accounts associated with this group asked ChatGPT to produce default credentials for widely used programmable logic controllers (PLCs), develop custom bash and Python scripts, and obfuscate code. The Iranian hackers also used ChatGPT to plan their post-compromise activity, learn how to exploit specific vulnerabilities, and choose methods for stealing user passwords on macOS systems.

The third case highlighted in OpenAI's report concerns Storm-0817, another Iranian threat actor. That group reportedly used ChatGPT to debug malware, create an Instagram scraper, translate LinkedIn profiles into Persian, and develop custom malware for the Android platform along with its supporting command-and-control infrastructure. The malware created with the help of OpenAI's chatbot can steal contact lists, call logs, and files stored on the device, take screenshots, harvest the user's browsing history, and obtain their precise location.

"In parallel, STORM-0817 used ChatGPT to support the development of server side code necessary to handle connections from compromised devices," reads the OpenAI report. "This allowed us to see that the command and control server for this malware is a WAMP (Windows, Apache, MySQL & PHP/Perl/Python) setup and during testing was using the domain stickhero[.]pro."

All OpenAI accounts used by the above threat actors have been banned, and the associated indicators of compromise, including IP addresses, have been shared with cybersecurity partners.
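Sharing indicators of compromise lets other defenders hunt for the same infrastructure in their own telemetry. As a minimal illustrative sketch (the log format, file names, and matching logic here are assumptions chosen for the example, not anything OpenAI published), a defender might sweep a DNS query log against such a blocklist like this:

```python
# Minimal IoC sweep: flag DNS log lines that mention blocklisted domains/IPs.
# Assumes a plain-text log with one DNS query per line and a newline-delimited
# indicator file -- both hypothetical formats chosen for illustration only.
from pathlib import Path

def load_iocs(path: str) -> set[str]:
    """Read one indicator (domain or IP) per line, skipping blanks/comments."""
    iocs = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip().lower()
        if line and not line.startswith("#"):
            iocs.add(line)
    return iocs

def sweep_log(log_path: str, iocs: set[str]) -> list[str]:
    """Return log lines that contain any known indicator."""
    hits = []
    for line in Path(log_path).read_text().splitlines():
        lowered = line.lower()
        if any(ioc in lowered for ioc in iocs):
            hits.append(line)
    return hits

if __name__ == "__main__":
    # 'stickhero.pro' is the C2 domain named in OpenAI's report (defanged
    # there as stickhero[.]pro); the file paths below are placeholders.
    indicators = load_iocs("shared_iocs.txt")
    for hit in sweep_log("dns_queries.log", indicators):
        print("possible C2 contact:", hit)
```

Real deployments would match against structured DNS or firewall telemetry rather than raw substrings, but the principle (turn shared indicators into an automated retrospective search) is the same.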
Although none of the cases described above give threat actors new capabilities in developing malware, they constitute proof that generative AI tools can make offensive operations more efficient for low-skilled actors, assisting them in all stages, from planning to execution.
[2]
Chinese and Iranian hackers use ChatGPT and LLM tools to create malware and phishing attacks -- OpenAI report has recorded over 20 cyberattacks created with ChatGPT
OpenAI says it will be working with the community to avoid such exploits. In a worrying sign for AI's dual-use potential, OpenAI confirms that more than twenty cyber operations have been carried out with help from ChatGPT. The report confirms that generative AI was used to conduct spear-phishing attacks, debug and develop malware, and carry out other malicious activity.

The report details several attacks involving ChatGPT. The first concerns 'SweetSpecter,' a Chinese threat group first documented by Cisco Talos in November 2023, which targeted Asian governments. Its attacks used spear-phishing emails carrying a ZIP archive with a malicious file that, if downloaded and opened, would trigger an infection chain on the user's system. OpenAI discovered that SweetSpecter was operating multiple ChatGPT accounts to develop scripts and research vulnerabilities with the LLM tool.

The second AI-assisted operation came from an Iran-based group called 'CyberAv3ngers,' which used ChatGPT to research how to exploit vulnerabilities and steal user passwords from macOS systems. The third, led by another Iran-based group called Storm-0817, used ChatGPT to develop malware for Android. The malware stole contact lists, extracted call logs and browser history, obtained the device's precise location, and accessed files on infected devices.

All of these attacks used existing methods to develop malware, and according to the report, there is no indication that ChatGPT produced substantially new malware. Even so, the cases show how easily threat actors can coax generative AI services into assisting with malicious tooling, making it easier for anyone with the required knowledge to put ChatGPT to harmful use. While security researchers do uncover and report such potential exploits so they can be patched, attacks like these will fuel the debate over what limits should be placed on generative AI.

For now, OpenAI says it will continue to improve its AI to prevent such methods from being used, working with its internal safety and security teams. The company also said it will keep sharing its findings with industry peers and the research community to prevent the situation from recurring. Though this is happening with OpenAI, it would be counterproductive if other major players with their own generative AI platforms did not deploy protections against such abuse. Knowing how challenging these attacks are to stop, AI companies need safeguards that prevent problems rather than cures applied after the fact.
[3]
OpenAI says it shuts down multiple campaigns using its systems for cybercrime
Hackers are using ChatGPT to try to influence elections around the world.

OpenAI, the company behind ChatGPT, says it has recently blocked multiple malicious campaigns abusing its services. In a report, the company said it blocked more than 20 operations and deceptive networks around the world in 2024 so far. These operations varied in nature, size, and targets. Sometimes the crooks would use the service to debug malware, and sometimes to write content: website articles, fake biographies for social media accounts, fake profile pictures, and the like.

While this sounds sinister and dangerous, OpenAI says the threat actors failed to gain any significant traction with these campaigns: "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," it said.

But 2024 is an election year, not just in the States but elsewhere around the world, and OpenAI has seen ChatGPT abused by threat actors trying to influence pre-election campaigns. It mentioned multiple groups, including one called "Zero Zeno." This Israel-based commercial company "briefly" generated social media comments about elections in India, a campaign that was disrupted "less than 24 hours after it began." The company added that in June 2024, just before the elections for the European Parliament, it disrupted an operation dubbed "A2Z," which focused on Azerbaijan and its neighbors. Other notable mentions included generated comments about the European Parliament elections in France, and about politics in Italy, Poland, Germany, and the US.

Luckily, none of these campaigns made any significant progress, and once OpenAI banned them, they stopped entirely: "The majority of social media posts that we identified as being generated from our models received few or no likes, shares, or comments, although we identified some occasions when real people replied to its posts," OpenAI concluded. "After we blocked its access to our models, this operation's social media accounts that we had identified stopped posting throughout the election periods in the EU, UK and France."
[4]
Using ChatGPT to make fake social media posts backfires on bad actors
OpenAI claims cyber threats are easier to detect when attackers use ChatGPT.

Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is amplifying online security risks. Not only do ChatGPT prompts expose which platforms bad actors are targeting (in at least one case, they enabled OpenAI to link a covert influence campaign on X and Instagram for the first time), but they can also reveal new tools that threat actors are testing as they evolve their deceptive activity online, OpenAI claimed.

OpenAI's report comes amid heightened scrutiny of its tools during a major election year in which officials globally fear AI might be used to boost disinformation and propaganda like never before. The report detailed 20 instances in which OpenAI disrupted covert influence operations and deceptive networks attempting to use AI to sow discord or breach vulnerable systems. "These cases allow us to begin identifying the most common ways in which threat actors use AI to attempt to increase their efficiency or productivity," OpenAI explained.

One case involved a "suspected China-based adversary" called SweetSpecter, which targeted both government employees and OpenAI staff with an unsuccessful spear-phishing campaign. In the email to OpenAI employees, SweetSpecter posed as a ChatGPT user troubleshooting an issue with the platform, detailed in an attachment. Clicking on that attachment would have launched "Windows malware known as SugarGh0st RAT," OpenAI said, giving SweetSpecter "control over the compromised machine" and allowing them "to do things like execute arbitrary commands, take screenshots, and exfiltrate data."

Fortunately for OpenAI, the company's spam filter deterred the threat before any employees received the emails. OpenAI believes it uncovered SweetSpecter's first known attack on a US-based AI company after monitoring SweetSpecter's ChatGPT prompts, which boldly asked for help with the attack. Prompts included requests for "themes that government department employees would find interesting" and "good names for attachments to avoid being blocked." SweetSpecter also asked ChatGPT about "vulnerabilities" in various apps and "for help finding ways to exploit infrastructure belonging to a prominent car manufacturer," OpenAI said.
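As the account above notes, a conventional spam filter stopped the campaign before any employee opened the attachment. Purely as a hedged illustration of one heuristic such a filter might apply (the extension list and in-memory demo are assumptions for the example, not OpenAI's actual filtering), the sketch below flags ZIP attachments containing executable-looking members:

```python
# Toy attachment heuristic: flag ZIP attachments that contain executable
# payloads, the delivery pattern used in the spear-phishing campaign above.
# The suspicious-extension list is an illustrative assumption, not a
# production ruleset.
import io
import zipfile

SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".dll", ".js", ".lnk", ".bat", ".vbs"}

def zip_looks_suspicious(zip_bytes: bytes) -> bool:
    """Return True if the archive contains any executable-looking member."""
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
            for name in archive.namelist():
                lowered = name.lower()
                if any(lowered.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
                    return True
    except zipfile.BadZipFile:
        # A malformed archive in a "support request" is itself a red flag.
        return True
    return False

if __name__ == "__main__":
    # Build a harmless in-memory archive to demonstrate the check; the
    # double-extension trick (pdf.lnk) is a common lure pattern.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("support_request.pdf.lnk", b"not really a shortcut")
    print(zip_looks_suspicious(buf.getvalue()))  # True
```

Production filters layer many such signals (sender reputation, sandbox detonation, URL analysis); a single extension check is just the most legible one to show here.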
[5]
OpenAI says cybercriminals are using ChatGPT more to influence elections
OpenAI has seen a number of attempts where its AI models have been used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday. Cybercriminals are increasingly using AI tools, including ChatGPT, to aid in their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the startup said. So far this year it neutralized more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the U.S. elections, the company said. It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.
[7]
OpenAI Blocks 20 Global Malicious Campaigns Using AI for Cybercrime and Disinformation
OpenAI on Wednesday said it has disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year. This activity encompassed debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.

"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the artificial intelligence (AI) company said. It also said it disrupted activity that generated social media content related to elections in the U.S. and Rwanda, and to a lesser extent India and the European Union, and that none of these networks attracted viral engagement or sustained audiences. This included efforts by an Israeli commercial company named STOIC (also dubbed Zero Zeno) that generated social media comments about Indian elections, as previously disclosed by Meta and OpenAI this May.

Among the other cyber operations highlighted by OpenAI, the company said it took steps to block several clusters of accounts, including influence operations codenamed A2Z and Stop News, that generated English- and French-language content for subsequent posting on a number of websites and social media accounts across various platforms. "[Stop News] was unusually prolific in its use of imagery," researchers Ben Nimmo and Michael Flossman said. "Many of its web articles and tweets were accompanied by images generated using DALL·E. These images were often in cartoon style, and used bright color palettes or dramatic tones to attract attention."

Two other networks identified by OpenAI, Bet Bot and Corrupt Comment, were found to use its API to generate conversations with users on X and send them links to gambling sites, and to manufacture comments that were then posted on X, respectively. The disclosure comes nearly two months after OpenAI banned a set of accounts linked to an Iranian covert influence operation called Storm-2035 that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election.

"Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity -- after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed 'finished' products such as social media posts or malware across the internet via a range of distribution channels," Nimmo and Flossman wrote.

Cybersecurity company Sophos, in a report published last week, said generative AI could be abused to disseminate tailored misinformation by means of microtargeted emails. This entails abusing AI models to concoct political campaign websites, AI-generated personas across the political spectrum, and email messages that target recipients based on the campaign's talking points, allowing a level of automation that makes it possible to spread misinformation at scale. "This means a user could generate anything from benign campaign material to intentional misinformation and malicious threats with minor reconfiguration," researchers Ben Gelman and Adarsh Kyadige said. "It is possible to associate any real political movement or candidate with supporting any policy, even if they don't agree.
Intentional misinformation like this can make people align with a candidate they don't support or disagree with one they thought they liked."
[8]
Propagandists around the world keep trying to use ChatGPT, according to OpenAI report
The report said groups seeking to influence elections were trying to automate tasks with the AI tool but had struggled to make meaningful breakthroughs.

Propagandists seeking to influence elections around the globe have tried to use ChatGPT in their operations, according to a report released Wednesday by the technology's creator, OpenAI. While ChatGPT is generally seen as one of the leading AI chatbots on the market, OpenAI also heavily moderates how people use its product. It is the only major tech company to repeatedly release public reports about how bad actors have tried to misuse its large language model (LLM) product, giving some insight into how propagandists and criminal or state-backed hackers have tried to use the technology and may use it with other AI models.

OpenAI said in its report that this year it has stopped people who tried to use ChatGPT to generate content about elections in the U.S., Rwanda, India, and the European Union. It's not clear whether any of the content was widely seen.

In one instance, the company described an Iranian propaganda operation of fake English-language news websites that purported to reflect different American political stances, though it's not clear that those sites ever got substantial engagement from real people. The operators also used ChatGPT to create social media posts in support of those sites, according to the report. In a media call last month, U.S. intelligence officials said that propagandists working for Iran, as well as Russia and China, have all incorporated AI into their ongoing propaganda operations aimed at U.S. voters, but that none appear to have found major success. Last month, the U.S. indicted three Iranian hackers it said were behind an ongoing operation to hack and release documents from Donald Trump's presidential campaign.

Another operation that OpenAI links to people in Rwanda was used to create partisan posts on X in favor of the Patriotic Front, the repressive party that has ruled Rwanda since the end of the country's genocide in 1994. The posts were part of a larger, documented propaganda campaign that repeatedly spammed pro-party messages on X, often the same few messages, more than 650,000 times.

The company also blocked two campaigns this year (one created social media comments about the E.U. parliamentary elections, the other content about India's general elections) very quickly after they began. Neither got any substantial interaction, OpenAI said, but it's also not clear whether the people behind the campaigns simply moved to other AI models created by different companies.

OpenAI also described how one particular Iranian hacker group that targeted water and wastewater plants repeatedly tried to use ChatGPT in multiple stages of its operation. A spokesperson for Iran's mission to the United Nations didn't respond to an email requesting comment about the water plant hacking campaign or the propaganda operation. The group, called CyberAv3ngers, appears to have gone dormant or disbanded after the Treasury Department sanctioned it in February. Before that, it was known for hacking water and wastewater plants in the U.S. and Israel that use an Israeli software program called Unitronics. There is no indication that the hackers ever damaged any American water systems, but they did breach several U.S. facilities that used Unitronics. Federal authorities said last year that the hackers were often able to get into Unitronics systems by using default usernames and passwords.
According to OpenAI's report, the group also tried to get ChatGPT to tell them the default login credentials for other companies that provide industrial control systems software. They asked ChatGPT for a host of other things in that operation too, including information about which internet routers are most commonly used in Jordan, how to find vulnerabilities a hacker might exploit, and help with multiple coding questions.

OpenAI also reported something cybersecurity and China experts have long suspected but that hadn't been made explicitly public: hackers working for China, a country the U.S. routinely accuses of conducting cyberespionage to benefit its industries and which has prioritized artificial intelligence, conducted a campaign to try to hack the personal and corporate email accounts of OpenAI employees. The phishing campaign was unsuccessful, the report claims. A spokesperson for the Chinese Embassy in Washington didn't immediately respond to a request for comment.

A consistent theme of malicious actors' use of AI is that they often try to automate different parts of their work, but the technology so far hasn't led to major breakthroughs in hacking or in creating effective propaganda, said Ben Nimmo, OpenAI's principal investigator for intelligence and investigations. "The threat actors look like they're still experimenting with different approaches to AI, but we haven't seen evidence of this leading to meaningful breakthroughs in their ability to build viral audiences," Nimmo said.
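The default-credential weakness described above lends itself to a simple preventive check. As a hedged sketch of one defensive control (the inventory CSV layout and the defaults table are hypothetical, not a real vendor list), an operator could audit an asset-inventory export for devices still on factory credentials:

```python
# Defensive audit sketch: flag inventory entries still using vendor-default
# credentials, the weakness exploited in the Unitronics intrusions described
# above. The CSV layout and defaults table are illustrative assumptions; a
# real audit would source defaults from vendor documentation or advisories.
import csv
import io

# Hypothetical vendor -> known default (user, password) pairs.
DEFAULT_CREDENTIALS = {
    "examplevendor": [("admin", "admin"), ("admin", "1111")],
}

def audit_inventory(rows) -> list[str]:
    """Return device IDs whose recorded credentials match a vendor default.

    `rows` is any iterable of dicts with device_id/vendor/user/password keys,
    e.g. a csv.DictReader over an asset-inventory export.
    """
    flagged = []
    for row in rows:
        defaults = DEFAULT_CREDENTIALS.get(row["vendor"].lower(), [])
        if (row["user"], row["password"]) in defaults:
            flagged.append(row["device_id"])
    return flagged

if __name__ == "__main__":
    sample = io.StringIO(
        "device_id,vendor,user,password\n"
        "plc-01,ExampleVendor,admin,1111\n"
        "plc-02,ExampleVendor,ops,n0t-a-default\n"
    )
    print(audit_inventory(csv.DictReader(sample)))  # ['plc-01']
```

The point of the sketch is that this class of intrusion is preventable with an inventory sweep and a password change, no network scanning required.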
[9]
OpenAI sees continued attempts to use AI models for election interference
OpenAI has seen continued attempts by cybercriminals to use its artificial intelligence (AI) models to produce fake content aimed at interfering with this year's elections, the ChatGPT maker said in a new report.

According to the report, released Wednesday, the AI developer discovered and disrupted more than 20 operations this year that tried to use the company's technology, including its popular tool ChatGPT, to influence elections. These deceptive networks attempted to use OpenAI's models to generate a variety of fake content, some of it intended to be shared by fake personas on social media, the report stated. OpenAI's models were also used to write articles for websites, analyze and reply to social media posts, and debug malware, the company said. This activity was detected in part by OpenAI's own AI tools, which often caught it in a matter of minutes, according to the report.

While threat actors may be "experimenting" with OpenAI models, the company emphasized that their reach is limited. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the report said.

The report laid out several examples of the misuse it observed in recent months. In early July, for example, the company said it banned several ChatGPT accounts from Rwanda after discovering they were being used to generate comments about the country's elections. And in August, OpenAI disrupted a "covert Iranian influence operation" that produced social media comments and long-form articles about the U.S. election, the conflict in the Middle East, Venezuelan politics, and Scottish independence. Most of these posts received little engagement, and there were no indications they were shared widely across social media sites, the report noted.

Fears about how the elections could be compromised have ramped up amid a flurry of recent reports about foreign adversaries' attempts to meddle with the U.S. presidential election this November. Last month, federal intelligence officials warned that foreign adversaries are using AI to enhance ongoing disinformation efforts; countries involved in this misuse include Russia, Iran, and China, the officials said. Microsoft also released a report that found Russian influence operations were behind a viral video falsely accusing Vice President Harris of a hit-and-run, while the Justice Department seized more than 30 web domains used by Russia for covert campaigns. Former President Trump's campaign, meanwhile, was hacked in June by Iran, which sought to share information with President Biden's campaign, according to the FBI.
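OpenAI does not describe the internal tooling that caught this activity "in a matter of minutes," so the following is only a loose analogy rather than its actual method: a third-party platform wanting to screen text before publication could call OpenAI's public moderation endpoint through the official Python SDK. The model name and surrounding plumbing are assumptions that may vary by SDK version.

```python
# Loose analogy only: screen outbound text with OpenAI's public moderation
# endpoint before it is published. Assumes the `openai` Python SDK is
# installed and OPENAI_API_KEY is set in the environment; this is NOT the
# internal detection system described in OpenAI's report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # model name may differ by version
        input=text,
    )
    return response.results[0].flagged

if __name__ == "__main__":
    print(is_flagged("Example post to screen before publishing."))
```

Note that content moderation and influence-operation detection are different problems: the latter also depends on behavioral signals (account age, posting cadence, coordination) that no single per-message classifier can see.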
[10]
OpenAI says bad actors are using its platform to disrupt elections, but with little 'viral engagement'
OpenAI is increasingly becoming a platform of choice for cyber actors looking to influence democratic elections across the globe. In a 54-page report published Wednesday, the ChatGPT creator said that it has disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats ranged from AI-generated website articles to social media posts by fake accounts. The company said its update on "influence and cyber operations" was intended to provide a "snapshot" of what it's seeing and to identify "an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape."

OpenAI's report lands less than a month before the U.S. presidential election. Beyond the U.S., it's a significant year for elections worldwide, with contests taking place that affect upward of 4 billion people in more than 40 countries. The rise of AI-generated content has fueled serious election-related misinformation concerns, with the number of deepfakes created increasing 900% year over year, according to data from Clarity, a machine learning firm.

Misinformation in elections is not a new phenomenon. It has been a major problem since the 2016 U.S. presidential campaign, when Russian actors found cheap and easy ways to spread false content across social platforms. In 2020, social networks were inundated with misinformation about Covid vaccines and election fraud. Lawmakers' concerns today are more focused on the rise of generative AI, which took off in late 2022 with the launch of ChatGPT and is now being adopted by companies of all sizes.

OpenAI wrote in its report that election-related uses of AI "ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts." The social media content related mostly to elections in the U.S. and Rwanda, and to a lesser extent to elections in India and the EU, OpenAI said. In late August, an Iranian operation used OpenAI's products to generate "long-form articles" and social media comments about the U.S. election, among other topics, but the company said the majority of identified posts received few or no likes, shares, or comments. In July, the company banned ChatGPT accounts in Rwanda that were posting election-related comments on X. And in May, an Israeli company used ChatGPT to generate social media comments about elections in India; OpenAI said it was able to address that case in less than 24 hours. In June, OpenAI addressed a covert operation that used its products to generate comments about the European Parliament elections in France, and about politics in the U.S., Germany, Italy, and Poland.

The company said that while most of the social media posts it identified received few likes or shares, some real people did reply to the AI-generated posts. None of the election-related operations was able to attract "viral engagement" or build "sustained audiences" via the use of ChatGPT and OpenAI's other tools, the company wrote.
[11]
OpenAI sees increasing use of its AI models for influencing elections
Oct 9 (Reuters) - OpenAI has seen a number of attempts where its AI models have been used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday. Cybercriminals are increasingly using AI tools, including ChatGPT, to aid in their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the startup said. So far this year it neutralized more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the U.S. elections, the company said. It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X. None of the activities that attempted to influence global elections drew viral engagement or sustainable audiences, OpenAI added.

There is increasing worry about the use of AI tools and social media sites to generate and propagate fake content related to elections, especially as the U.S. gears up for its presidential polls. According to the U.S. Department of Homeland Security, the U.S. sees a growing threat of Russia, Iran and China attempting to influence the Nov. 5 elections, including by using AI to disseminate fake or divisive information.

OpenAI cemented its position as one of the world's most valuable private companies last week after a $6.6 billion funding round. ChatGPT has amassed 250 million weekly active users since its launch in November 2022. (Reporting by Deborah Sophia in Bengaluru; Editing by Anil D'Silva)
[14]
OpenAI: How ChatGPT Could Be Used to Manipulate US Elections
ChatGPT can be used to automate engagement and interaction on social media platforms: responding to users, participating in discussions, and spreading misinformation in real time. This automation can create an illusion of widespread support or opposition, which in turn influences public perception and voter behavior. Example: ChatGPT has been deployed to engage with users on social media, spreading disinformation and creating the appearance of genuine public discourse. It can interact with thousands of users simultaneously, increasing the spread and impact of disinformation campaigns.

ChatGPT's ability to generate convincing fake news, create deepfake texts, manipulate social media algorithms, target specific demographics, and automate engagement threatens the integrity of democratic processes. The potential misuse of ChatGPT to manipulate US elections highlights the need for ethical considerations in the use of AI technologies. Artificial intelligence offers significant benefits, but it poses serious risks if misused. Its capabilities can be exploited to spread misinformation, influence voter behavior, and create false narratives, thereby undermining public trust in elections.

To mitigate these risks, policymakers and social media platforms must collaborate on implementing effective measures to detect and counter AI-generated disinformation. This includes developing advanced detection mechanisms for AI-generated content, promoting digital literacy among the public, and enforcing stringent regulations on the use of AI in political campaigns. By taking proactive measures, society can harness the benefits of AI while protecting the election process from cybercriminals.
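One of the "advanced detection mechanisms" called for above can be simple in outline. The campaigns described elsewhere in this roundup posted the same few messages hundreds of thousands of times, so a first-pass coordination signal is identical normalized text shared across many accounts. The sketch below is a simplified illustration; the normalization rules and threshold are assumptions, and production systems combine many more signals.

```python
# Sketch of one simple coordination signal: many distinct accounts posting
# the same (normalized) text. Threshold and normalization are illustrative
# assumptions, not a production detector.
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase, strip URLs and punctuation so near-copies collide."""
    text = re.sub(r"https?://\S+", "", text.lower())
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def coordinated_clusters(posts, min_accounts: int = 5):
    """posts: iterable of (account_id, text) pairs.

    Yield (text, accounts) for texts posted by at least min_accounts
    distinct accounts.
    """
    accounts_by_text = defaultdict(set)
    for account_id, text in posts:
        accounts_by_text[normalize(text)].add(account_id)
    for text, accounts in accounts_by_text.items():
        if len(accounts) >= min_accounts:
            yield text, sorted(accounts)

if __name__ == "__main__":
    sample = [("a1", "Vote X! https://ex.am/1"), ("a2", "vote x!"),
              ("a3", "Vote X!"), ("a4", "VOTE X"), ("a5", "vote x "),
              ("a6", "unrelated post")]
    for text, accounts in coordinated_clusters(sample, min_accounts=5):
        print(f"{len(accounts)} accounts posted: {text!r}")
```

Exact-match clustering misses paraphrased content (which LLMs make cheap to produce), which is why real systems add fuzzy similarity, timing, and account-metadata signals on top of this kind of baseline.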
[15]
Ahead Of Trump Vs. Harris Faceoff, ChatGPT Parent OpenAI Uncovers Election Interference Misuse, But Sees No 'Meaningful Breakthrough' - Microsoft (NASDAQ:MSFT), Alphabet (NASDAQ:GOOGL)
ChatGPT-parent OpenAI has disclosed that its platform is being misused by malicious entities to meddle with democratic elections across the globe.

What Happened: According to the 54-page report published on Wednesday, OpenAI has thwarted over 20 global operations and deceptive networks that sought to misuse its models. The threats ranged from AI-generated website articles to social media posts by fake accounts. The report also highlights that election-related uses of AI ranged from simple content generation requests to complex, multi-stage efforts to analyze and respond to social media posts. "Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the AI startup stated.

The majority of the social media content related to elections in the U.S. and Rwanda, and to a lesser extent, elections in India and the EU. Despite these attempts, OpenAI stated that none of the election-related operations were able to attract viral engagement or build sustained audiences using its tools. OpenAI also said that a suspected China-based threat actor, "SweetSpecter," attempted to spear-phish its employees' personal and corporate email accounts but was unsuccessful.

Why It Matters: OpenAI's report comes less than a month before the U.S. presidential election. Kamala Harris has a slight edge over Donald Trump, according to a recent Reuters/Ipsos poll, which shows Harris leading with 46% compared to Trump's 43%, reflecting a closer contest as Trump reduces his previous six-point gap.

Earlier this year, AI image creation tools from OpenAI and Microsoft Corp were reported to have been used for spreading election-related disinformation. Previously, networks associated with Russia, China, Iran, and Israel have been found exploiting OpenAI's AI tools for global disinformation. In February 2024, AI chatbots like GPT-4 and Google's Gemini were found to be spreading false and misleading information about the U.S. presidential primaries, after which Google took preemptive measures to prevent its AI chatbot, Gemini, from becoming a source of misinformation.
OpenAI reports multiple instances of ChatGPT being used by cybercriminals to create malware, conduct phishing attacks, and attempt to influence elections. The company has disrupted over 20 such operations in 2024.
OpenAI, the company behind the popular AI chatbot ChatGPT, has released a report confirming that threat actors are using its AI-powered tool to enhance their malicious cyber operations. The company has disrupted over 20 such operations since the beginning of 2024, marking the first official confirmation of mainstream AI tools being used for offensive cyber activities [1][2].
Several threat groups have been identified using ChatGPT for malware-related activities. Chinese and Iranian hackers were found leveraging the AI tool to debug existing malware, develop new malicious software, and create supporting infrastructure for their operations [1][2].
One notable case involves the Iranian group Storm-0817, which used ChatGPT to develop custom malware for Android devices. This malware can steal contact lists, call logs, browser history, and access files on infected devices, as well as obtain precise location data [1][4].
OpenAI reported that a Chinese threat actor, dubbed 'SweetSpecter', targeted the company directly with spear-phishing emails. These emails contained malicious ZIP attachments disguised as support requests, which, if opened, would trigger an infection chain leading to the deployment of SugarGh0st RAT on the victim's system [1][4].
Perhaps most concerning is the use of ChatGPT in attempts to influence elections worldwide. OpenAI has observed multiple instances where its AI models were used to generate fake content, including long-form articles and social media comments, aimed at swaying public opinion during election periods [3][5].
Specific examples include an August operation, attributed to an Iranian group, that produced long-form articles and social media comments about the U.S. election; a set of Rwandan accounts banned in July for generating election-related comments posted on X; an Israeli commercial operation that briefly generated comments about India's elections before being disrupted within 24 hours; and an operation dubbed "A2Z," disrupted in June just before the European Parliament elections, that focused on Azerbaijan and its neighbors [3][5].
OpenAI has taken swift action by banning all accounts associated with these malicious activities and sharing indicators of compromise with cybersecurity partners [1]. The company emphasizes that while these incidents don't represent new capabilities in malware development, they demonstrate how AI tools can make offensive operations more efficient for low-skilled actors [1][2].
OpenAI maintains that the majority of AI-generated social media posts received little to no engagement, and many operations ceased entirely after access to the AI models was blocked [3]. However, the company acknowledges the need for continued vigilance and improvement of AI safeguards [2].
As 2024 is a significant election year globally, there are growing concerns about the potential misuse of AI in influencing public opinion. OpenAI has committed to working with internal safety and security teams, as well as sharing findings with industry peers and the research community to prevent such abuses [2][3].
The incidents highlight the double-edged nature of AI technology, showcasing both its potential for misuse and its ability to help detect and prevent cyber threats. As AI continues to evolve, it will be crucial for companies like OpenAI to stay ahead of bad actors and implement robust security measures to protect users and maintain the integrity of democratic processes worldwide.