Curated by THEOUTPOST
On Tue, 24 Sept, 8:05 AM UTC
10 Sources
[1]
Russia, Iran and China are using AI in election interference efforts, U.S. intelligence officials say
Propagandists in China, Iran and Russia are using artificial intelligence to create content designed to deceive Americans ahead of the November presidential election, federal intelligence officials said Monday. In a conference call about foreign election interference efforts organized by the Office of the Director of National Intelligence, officials said the U.S. intelligence community has concluded that AI has made it easier to create disinformation, but has not fundamentally changed the way those actors operate. "The IC considers AI a malign influence accelerant, not yet a revolutionary influence tool. In other words, information operations are the threat, and AI is an enabler," said one ODNI official, referring to the U.S. intelligence community. The official requested not to be named as a condition for participating in the call. "Thus far, the IC has not seen it revolutionize such operations," he said. In assessing the impact of disinformation, the official noted that U.S. adversaries struggle to avoid detection by Western AI companies, have not developed particularly advanced AI models of their own, and struggle to effectively disseminate AI-generated content. ODNI declined to provide specific examples of the disinformation it was referring to but said that, in general, the number of election interference efforts was increasing ahead of November. The ODNI call comes after the National Security Agency said earlier this year it had detected hackers and propagandists increasingly using AI to help them seem like convincing English speakers. In January, an NSA official said hackers and propagandists around the world were increasingly using generative AI chatbots like ChatGPT when trying to communicate with potential victims. In August, OpenAI, the company behind ChatGPT, said it had banned accounts linked to an attempted Iranian operation that in part sought to create content aimed at influencing the U.S. election.
Russia has by far the biggest disinformation operation aimed at the U.S. election and correspondingly has created the most AI-generated content, including text, images, audio and video, the official said. Its propagandists also still rely on human actors for some videos, as in one identified by Microsoft and Clemson University researchers in which actors staged a video of a fake attack on a Trump supporter. As in previous calls, officials reiterated that Iran preferred to hurt Trump's campaign, while China runs down-ballot and general anti-democracy influence operations but is not pushing one candidate over another. Russia, on the other hand, wants Trump to beat any Democratic candidate given his policy positions on Ukraine. Federal officials have formally accused Russia of masterminding two sprawling influence campaigns aimed at influencing American voters: covertly funding a media company that paid right-wing influencers to publish videos, and maintaining fake news sites that appear to have little viewership. The U.S. has also said Iran is behind an operation in which hackers stole files from Republican nominee Donald Trump's campaign and sent them to media outlets, which generally have refrained from publishing them. Russia and Iran have denied wrongdoing. Russia has a much more sophisticated understanding of American politics than Iran, the intelligence official said Monday. Iranian online propagandists who pretend to be American have pushed immigration as a divisive issue. Russia, on the other hand, understands it's more effective to target voters in swing states.
[2]
Russia and Iran using AI to influence US election: DNI
They can "quickly and convincingly tailor synthetic content," an official said. Russia and Iran are using artificial intelligence to influence the American election, U.S. intelligence officials said on Monday. "Foreign actors are using AI to more quickly and convincingly tailor synthetic content," an official with the Office of the Director of National Intelligence said. "The IC (intelligence community) considers AI a malign influence accelerant, not yet a revolutionary influence tool." Officials previously saw AI being used in overseas elections, but it has now made its way to American elections, according to intelligence officials, who say there is evidence Russia manipulated Vice President Kamala Harris' speeches. Russia "has generated the most AI content related to the election, and has done so across all four mediums: text, images, audio and video," an ODNI official said. "These items include AI-generated content of and about prominent U.S. figures whose content is also consistent with Russia's broader efforts to boost the former president's candidacy and denigrate the Vice President and the Democratic Party, including through conspiratorial narratives," according to an ODNI official. Russian AI content has sought to exploit hot-button issues to further divide Americans, the ODNI said. "The IC also assesses that Russian influence actors were responsible for altering videos of the vice president's speeches," the official said. Russia's altering of videos runs the "gamut" from painting her in a bad light personally to comparing her unfavorably with her opponent, and it is using both AI and staged videos, the official said. The country was targeting President Joe Biden's former campaign, but once he dropped out of the race it had to "adapt" to targeting the vice president's campaign, the official said. "Russia is a much more sophisticated actor in the influence space in general, and they have a better understanding of how U.S. 
elections work, where to target and which states to target," an ODNI official said. Iran has also used AI in its election influence efforts, including help in writing fake social media posts and news articles to further Iran's objectives, which are to denigrate former President Donald Trump's candidacy, the official said. Iran is also using AI to sow discord on hot-button issues, an official said. "One of the benefits of generative AI models is to overcome various language barriers, and so Iran can use the tools to help do that, and one of the issues that could be attractive for using foreign languages is immigration," an ODNI official said. "The reason why Iran is focused on immigration is because they perceive it to be a divisive issue in the United States, and they identify themes, broadly speaking, that they think will create further discord in the United States." Officials have previously assessed Iran prefers that Vice President Harris win the 2024 election. China has also been using AI to generate fake news anchors and social media content with pro-China propaganda, they said. The intelligence community assesses that AI is an "accelerant" to influence operations but does not yet reliably produce believable content. Adversaries are also using AI to go back and forth with people in the comments. As to whether what occurred in 2020 might happen again -- where the election might not be called on Election Day -- this period is something the IC is "watching" closely and is of "great interest." "The various influence actors have fairly steady-state influence operations that seek to stoke division and undermine U.S. democracy," an ODNI official said. During the Democratic primaries, an AI-generated robocall was used to spread misinformation about voting; the result was state criminal charges against the individual who sent the recording and an FCC fine. 
A foreign adversary engaging in that tactic would be a "top concern" for intelligence officials, an official said.
[3]
U.S. officials say Russia is embracing AI for its election influence efforts
Russia is the most prolific foreign influence actor using artificial intelligence to generate content targeting the 2024 presidential election, U.S. intelligence officials said on Monday. The cutting-edge technology is making it easier for Russia as well as Iran to quickly and more convincingly tailor often-polarizing content aimed at swaying American voters, an official from the Office of the Director of National Intelligence, who spoke on condition of anonymity, told reporters at a briefing. "The [intelligence community] considers AI a malign influence accelerant, not yet a revolutionary influence tool," the official said. "In other words, information operations are the threat, and AI is an enabler." Intelligence officials have previously said they saw AI used in elections overseas. "Our update today makes clear that this is now happening here," the ODNI official said. Russian influence operations have spread synthetic images, video, audio, and text online, officials said. That includes AI-generated content "of and about prominent U.S. figures" and material seeking to emphasize divisive issues such as immigration. Officials said that's consistent with the Kremlin's broader goal to boost former President Donald Trump and denigrate Vice President Kamala Harris. But Russia is also using lower-tech methods. The ODNI official said Russian influence actors staged a video in which a woman claimed to be a victim of a hit-and-run by Harris in 2011. There's no evidence that ever happened. Last week, Microsoft also said Russia was behind the video, which was spread by a website claiming to be a nonexistent local San Francisco TV station. Russia is also behind manipulated videos of Harris's speeches, the ODNI official said. They may have been altered using editing tools or with AI. They were disseminated on social media and using other methods. 
"One of the efforts we see Russian influence actors do is, when they create this media, try to encourage its spread," the ODNI official said. The official said the videos of Harris had been altered in a range of ways, to "paint her in a bad light both personally but also in comparison to her opponent" and to focus on issues Russia believes are divisive. Iran has also tapped AI to generate social media posts and write fake stories for websites posing as legitimate news outlets, officials said. The intelligence community has said Iran is seeking to undercut Trump in the 2024 election. Iran has used AI to create such content in both English and Spanish, and is targeting Americans "across the political spectrum on polarizing issues" including the war in Gaza and the presidential candidates, officials said. China, the third main foreign threat to U.S. elections, is using AI in its broader influence operations that aim to shape global views of China and amplify divisive topics in the U.S. such as drug use, immigration, and abortion, officials said. However, officials said they had not identified any AI-powered operations targeting the outcome of voting in the U.S. The intelligence community has said Beijing's influence operations are more focused on down-ballot races in the U.S. than the presidential contest. U.S. officials, lawmakers, tech companies, and researchers have been concerned about the potential for AI-powered manipulation to upend this year's election campaign, such as deepfake videos or audio depicting candidates doing or saying something they didn't or misleading voters about the voting process. While those threats may still materialize as Election Day draws closer, so far AI has been used more frequently in different ways: by foreign adversaries to improve productivity and boost volume, and by political partisans to generate memes and jokes. 
On Monday, the ODNI official said foreign actors have been slow to overcome the three main obstacles that keep AI-generated content from becoming a greater risk to American elections: bypassing the guardrails built into many AI tools without being detected; developing their own sophisticated models; and strategically targeting and distributing AI content. As Election Day nears, the intelligence community will be monitoring for foreign efforts to introduce deceptive or AI-generated content in a variety of ways, including "laundering material through prominent figures," using fake social media accounts or websites posing as news outlets, or "releasing supposed 'leaks' of AI-generated content that appear sensitive or controversial," the ODNI report said. Earlier this month, the Justice Department accused Russian state broadcaster RT, which the U.S. government says operates as an arm of Russian intelligence services, of funneling nearly $10 million to pro-Trump American influencers who posted videos critical of Harris and Ukraine. The influencers say they didn't know the money came from Russia.
[4]
Russia Taps AI to Tip 2024 Election Toward Trump, Stoke Immigration Fears
Russia is tapping all forms of AI-generated content -- text, images, audio, and video -- to influence the 2024 presidential election, specifically to denigrate Vice President Kamala Harris and elect Donald Trump, according to US officials. Iran and China are also using new technologies to influence the American public, but Russia is the number one offender, according to a new report from the Office of the Director of National Intelligence (ODNI). Bad actors are "increasing their election influence activities as we approach November," using both homegrown and existing AI tools, the ODNI says. For example, Russian actors staged a social media video of a woman who claims she was a victim of a hit-and-run accident by Harris, the ODNI says. (Microsoft reached a similar conclusion in a recent Threat Analysis Center report.) AI-generated content also seeks to stoke concerns about immigration, a cornerstone of Trump's campaign rhetoric. Russian actors have also doctored clips of Harris' speeches, replacing words, according to The Washington Post, which was briefed by ODNI and the FBI on their findings. Last month, Russia was also found to be paying right-wing US influencers to create videos in support of Russia's interest, such as opposing aid to Ukraine (YouTube later pulled the videos.) These efforts are part of "Russia's broader efforts to boost the former President's candidacy and denigrate the Vice President and the Democratic Party, including through conspiratorial narratives," says the ODNI. It's a familiar playbook, going back to at least 2016, when Russian agents, many now wanted by the FBI, hacked into state voter databases to delete registrations, leak documents from both parties, and steal identities. Thousands of bot and human accounts flooded social feeds with inflammatory, divisive information on hot-button issues like immigration and gun control. 
They started campaigns like #Hillary4Prison, and on at least one occasion, sent a Russian person to a rally wearing a Hillary Clinton mask and prison uniform costume, Time reports. Iran is using AI to generate social media posts and write fake news articles for fake websites that claim to be real publications, the ODNI says. The content is in both English and Spanish, and targets polarizing issues such as the Israel-Gaza conflict and information about the presidential candidates. China's efforts seek to shape global views of China and stoke division within the US, and do not aim at a specific election outcome. For example, pro-China actors have used AI-generated news anchors and social media accounts with AI-generated profile pictures to "sow divisions on issues such as drug use, immigration, and abortion," according to the ODNI.
[5]
Russia produced most AI content to sway presidential vote, US intelligence official says
Russia has generated more AI content to influence the U.S. presidential election than any other foreign power as part of its broader effort to boost Republican candidate Donald Trump over Democrat Kamala Harris, a U.S. intelligence official said on Monday. The official from the Office of the Director of National Intelligence (ODNI), speaking on condition of anonymity, made the comment in a briefing to reporters on the alleged use of AI by Russia and other countries to influence the Nov. 5 vote. AI content produced by Moscow is "consistent with Russia's broader efforts to boost the former president's (Trump) candidacy and denigrate the vice president (Harris) and the Democratic Party, including through conspiratorial narratives," he said. The Russian embassy in Washington did not immediately respond to a request for comment. Russia previously has denied interfering in the U.S. election. Like other forms of artificial intelligence, generative AI learns from past data how to take actions. Using that training, it creates new content like text, pictures and videos that appears to have been produced by humans. The ODNI official said Russia is generating more AI content to influence the November election than any other country, but did not provide a figure for the volume of that content. He said Russia was a much more sophisticated actor and had a better understanding of how U.S. elections work and which targets were appropriate. Asked how Russia was disseminating AI content, the official pointed to a July 9 Justice Department announcement of the disruption of an alleged Moscow-backed operation that used AI-enhanced social media accounts to spread pro-Kremlin messages in the U.S. and elsewhere. 
The official said "Russian influence actors" staged a widely reported video in which a woman claimed she was a victim of a hit-and-run car accident by Harris. The video, however, was staged rather than produced through AI, he said. Microsoft said last week its research showed that video was the work of a covert Russian disinformation operation. China has been using AI content in an attempt to influence how it is perceived worldwide, but not to sway the outcome of the U.S. election, the official said. "China is using AI in broader influence operations seeking to shape global views of China and amplify divisive U.S. political issues," the official said. "We are not yet seeing China use AI for any specific operations targeting U.S. election outcomes." Iranian influence actors have used AI to help generate posts for social media and "write inauthentic news articles for websites that claim to be real news sites," the official said. The content created by the Iranian actors is in English and Spanish. It has targeted American voters "across the political spectrum on polarizing issues" such as Israel and the conflict in Gaza, and on the presidential candidates, the official said. The Iranian mission to the United Nations did not immediately respond to a request for comment. Iran has previously denied interfering in the U.S. vote.
[6]
Russia, Iran use AI to boost U.S. influence operations, officials say
American intelligence officials say Moscow's efforts, aimed mostly at undermining Harris, are the most aggressive. Russia, Iran and China are using artificial intelligence tools as they increase their efforts to sway the American population ahead of the November election, U.S. intelligence officials said Monday, with Moscow especially set on denigrating Vice President Kamala Harris. Russia, the most aggressive and skilled of the three countries, is emphasizing stories and comments that demean the Democratic presidential candidate's personal qualities or positions, officials from the Office of the Director of National Intelligence and the FBI said in a briefing for reporters. The ODNI also released a one-page summary of its assessment, the latest in a series on foreign influence during the campaign. Russia has doctored clips of Harris's speeches to replace some of her words, an ODNI official told The Washington Post, and has used generative AI to create false text, photos, video and audio. Officials said they agreed with a determination by Microsoft researchers a week ago that Russia was behind a viral staged video in which an actress falsely claimed that Harris had injured her in a hit-and-run car accident, garnering millions of views. The officials, who spoke on the condition of anonymity for security reasons, said they did not study speech by Americans and so could not say which pieces of disinformation got more traction or boosts by high-profile figures. But they did point to a recent indictment and related documents this month alleging that Russian officials invested $10 million in a Tennessee media company that paid well-known right-wing influencers for videos that promoted Russian interests, such as opposing U.S. aid to Ukraine. The influencers themselves were not charged with any crimes, and most have said they did not know the company was backed by Russia. 
Russia is continuing to use unwitting or witting Americans to spread its messages, the officials said, as well as imitating websites of established media and using human commenters to drive traffic to those sites, which contain articles generated by AI. The national intelligence officials said generative AI was an accelerant for influence efforts rather than a revolutionary change in them. For it to have a bigger impact, adversaries would need at least one of three things, they said: the ability to circumvent usage restrictions on some large language models, the ability to create their own models, or an effective means of distributing the content in the target country. The intelligence officials said they had compared notes with U.S. AI companies and social media companies about tactics, while leaving it to the FBI to have any contacts with firms about specific accounts. In all cases, they said, decisions about what to do with the content or the accounts were left strictly to the companies. Like Russia, Iran and China have promoted content that aims to exacerbate domestic divisions, the group said. Iran has been seeking to build on differences over the war in Gaza and using AI to create faked news articles in English and Spanish. China has focused on drug use, immigration and abortion. Iran has acted to hurt Republican candidate and former president Donald Trump's prospects, including by breaching his campaign and sending stolen documents to the media. China is more interested in lower-level campaigns where the candidates might support or oppose its priorities, the officials said.
[7]
AI not yet a 'revolutionary influence tool,' US says
Russia, Iran and China are not giving up on the use of artificial intelligence to sway American voters ahead of November's presidential election even though U.S. intelligence agencies assess the use of AI has so far failed to revolutionize the election influence efforts. The new appraisal released late Monday from the Office of the Director of National Intelligence comes just more than 40 days before U.S. voters head to the polls. It follows what officials describe as a "steady state" of influence operations by Moscow, Tehran and Beijing aimed at impacting the race between former Republican President Donald Trump and current Democratic Vice President Kamala Harris, as well as other statewide and local elections. "Foreign actors are using AI to more quickly and convincingly tailor synthetic content," said a U.S. intelligence official, who briefed reporters on the condition of anonymity to discuss the latest findings. "AI is an enabler," the official added. "A malign influence accelerant, not yet a revolutionary influence tool." It is not the first time U.S. officials have expressed caution about how AI could impact the November election. A top official at the Cybersecurity and Infrastructure Security Agency (CISA), the U.S. agency charged with overseeing election security, told VOA earlier this month that to this point the malicious use of AI has not been able to live up to some of the hype. "Generative AI is not going to fundamentally introduce new threats to this election cycle," said CISA senior adviser Cait Conley. "What we're seeing is consistent with what we expected to see." That does not mean, however, that U.S. adversaries are not trying. The new U.S. intelligence assessment indicates Russia, Iran and China have used AI to generate text, images, audio and video and distribute them across all major social media platforms. Russia, Iran and China have yet to respond to requests for comment. All three have previously rejected U.S. 
allegations regarding election influence campaigns. While U.S. intelligence officials would not say how many U.S. voters have been exposed to such malign AI products, there is reason to think that some of the efforts are, at least for the moment, falling short. "The quality is not as believable as you might expect," said the U.S. intelligence official. One reason, the official said, is because Russia, Iran and China have struggled to overcome restrictions built into some of the more advanced AI tools while simultaneously encountering difficulties developing their own AI models. There are also indications that all three U.S. adversaries have to this point failed to find ways to more effectively use AI to find and target receptive audiences. "To do scaled AI operations is not cheap," according to Clint Watts, a former FBI special agent and counterterror consultant who heads up the Microsoft Threat Analysis Center (MTAC). "Some of the infrastructure and the resources of it [AI], the models, the data it needs to be trained [on] - very challenging at the moment," Watts told a cybersecurity summit in Washington earlier this month. "You can make more of everything: misinformation, disinformation. But it doesn't mean they'll be very good." In some cases, U.S. adversaries see traditional tactics, which do not rely on AI, as equally effective. For instance, U.S. intelligence officials on Monday said a video claiming that Vice President Harris injured a girl in a 2011 hit-and-run accident was staged by Russian influence actors, confirming an assessment last week by Microsoft. The officials also said altered videos showing Harris speaking slowly, also the result of Russian influence actors, could have been done without relying on AI. For now, experts and intelligence officials agree that when it comes to AI, Russia, Iran and China have settled on quantity over quality. Microsoft has tracked hundreds of instances of AI use by Russia, Iran and China over the past 14 months. 
And while U.S. intelligence officials would not say how much AI-generated material has been disseminated, they agree Russian-linked actors, especially, have been leading the way. "These items include AI-generated content of and about prominent U.S. figures ... consistent with Russia's broader efforts to boost the former president's candidacy and denigrate the vice president and the Democratic Party," the U.S. intelligence official said, calling Russia one of the most sophisticated actors in knowing how to target American voters. Those efforts included an AI-boosted campaign to spread disinformation through a series of fake web domains masquerading as legitimate U.S. news sites, disrupted earlier this month by the U.S. Department of Justice. Iran, which has sought to hurt the re-election bid by former President Trump, has also copied the Russian playbook, according to the new U.S. assessment, seeking to sow discord among U.S. voters. Tehran has also been experimenting, using AI to help spread its influence campaign not just in English, but also in Spanish, especially when seeking to generate anger among voters over immigration. "One of the benefits of generative AI models is to overcome various language barriers," the U.S. intelligence official said. "So Iran can use the tools to help do that," the official added, calling immigration "obviously an issue where Iran perceives they could stoke discord." Beijing, in some ways, has opted for a more sophisticated use of AI, according to the U.S. assessment, using it to generate fake news anchors in addition to fake social media accounts. But independent analysts have questioned the reach of China's efforts under its ongoing operation known as "Spamouflage." A recent report by the social media analytics firm Graphika found that, with few exceptions, the Chinese accounts "failed to garner significant traction in authentic online communities discussing the election." U.S. 
intelligence officials have also said the majority of the Chinese efforts have been aimed not at Trump or Harris, but at state and local candidates perceived as hostile to Beijing. U.S. intelligence officials on Monday refused to say how many other countries are using AI in an effort to influence the outcome of the U.S. presidential election. Earlier this month, U.S. Deputy Attorney General Lisa Monaco said Washington was "seeing more actors in this space acting more aggressively in a more polarized environment and doing more with technologies, in particular AI."
[8]
Russia produced most AI content to sway presidential vote, US intelligence official says
AI content produced by Moscow is "consistent with Russia's broader efforts to boost the former president's (Trump) candidacy and denigrate the vice president (Harris) and the Democratic Party, including through conspiratorial narratives," he said. Russia previously has denied interfering in the U.S. election. The ODNI official said that Russia is generating more AI content to influence the election than any other country, but did not provide a figure for the volume of the AI content Moscow has produced. "Russia is a much more sophisticated actor in the influence space in general," the official said, adding that Moscow has a better understanding than other U.S. rivals of how American elections work and the states to target with its influence operations. China has been using AI content in an attempt to influence how it is perceived worldwide, but not to sway the outcome of the U.S. election, the official said. "China is using AI in broader influence operations seeking to shape global views of China and amplify divisive U.S. political issues," the official said. "We are not yet seeing China use AI for any specific operations targeting U.S. election outcomes." Iranian influence actors have used AI to help generate posts for social media and "write inauthentic news articles for websites that claim to be real news sites," the official said. The content created by the Iranian actors is in English and Spanish and has targeted American voters "across the political spectrum on polarizing issues" such as Israel and the conflict in Gaza, and on the presidential candidates, the official said. Iran has previously denied interfering in the U.S. vote. (Reporting by Jonathan Landay and David Brunnstrom; Editing by Stephen Coates)
[9]
Russia, Iran Overcome Language Barriers To Spread Disinformation To Voters, Says Federal Agency: AI Improving, Not Revolutionizing Foreign Election Interference
The Office of the Director of National Intelligence or ODNI has reported that AI is being used to improve, not revolutionize, foreign influence operations targeting the 2024 U.S. elections. What Happened: In a statement on Monday, ODNI said that AI is viewed as a malign influence accelerant rather than a revolutionary tool in the hands of foreign operatives. "The risk to U.S. elections from foreign AI-generated content depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected, develop their own sophisticated models, and strategically target and disseminate such content," ODNI said. Foreign operatives, particularly from Russia and Iran, are leveraging AI to overcome language barriers and disseminate disinformation to U.S. voters. For instance, Iran has used AI to generate Spanish content on immigration, a divisive U.S. political issue. Russia, on the other hand, has produced the most AI-related content concerning the U.S. election. This is consistent with its efforts to support former President Donald Trump and denigrate Vice President Kamala Harris' campaign, noted CNN. China is also using AI to amplify divisive U.S. political issues, but not specifically to influence election outcomes, according to the new US intelligence assessment. "For example, pro-China online actors this year have used AI-generated news anchors and inauthentic social media accounts with AI-generated profile pictures to sow divisions on issues such as drug use, immigration, and abortion," ODNI stated. Why It Matters: Previously it was reported that Russian operatives allegedly staged a video falsely claiming that Harris paralyzed a young girl in a hit-and-run accident in 2011. Last month, the Trump campaign blamed "foreign sources hostile to the U.S." for a cyberattack. 
This was followed by President Joe Biden accusing Russia of seeking to interfere with the 2024 election. Earlier this month, U.S. intelligence agencies revealed that Iran attempted to share hacked information from the Trump campaign with the Biden campaign.
[10]
US adversaries using AI to enhance disinformation effort ahead of election
The intelligence community warned Monday that foreign adversaries are using artificial intelligence to enhance ongoing disinformation efforts.

An official with the Office of the Director of National Intelligence (ODNI) said Russia is the top creator of such content, spreading false information through text, images, audio, and video, most of it aimed at pushing divisive issues or creating false narratives about U.S. political figures. "This content is also consistent with Russia's broader efforts to boost the former president's candidacy and denigrate the Vice President and the Democratic Party, including through conspiratorial narratives," the official said.

The warning is a shift from July, when officials said they had seen U.S. adversaries push AI-created content abroad, but not yet in the U.S.

Iran is also using AI-created content but has used the tools primarily to create fake news websites, including translating content into Spanish to spread disinformation to various U.S. populations. "One of the benefits of generative AI models is to overcome various language barriers, and so Iran can use the tools to help do that," the ODNI official said. Iran, like Russia and China, has focused on issues like immigration, and has also pushed disinformation regarding the Israel-Gaza conflict. "The reason why Iran is focused on immigration is because they perceive it to be a divisive issue in the United States, and they identify themes with which they think will create further discord in the United States," the official said.

The official noted that not all false content shared in recent weeks was created using AI, pointing out that a fake video featuring a woman who claimed to have been hit by Vice President Kamala Harris in a hit-and-run was a "staged video." The official nonetheless said the intelligence community agreed with a Microsoft assessment that Russia was behind the video, and added that Russia has likewise created content that alters Harris's voice.
US intelligence officials report that Russia, Iran, and China are using artificial intelligence to enhance their election interference efforts. Russia is identified as the most prolific producer of AI-generated content aimed at influencing the 2024 US presidential election.
US intelligence officials have raised alarms about the increasing use of artificial intelligence (AI) by foreign adversaries to interfere in the upcoming 2024 US presidential election. Russia, Iran, and China have been identified as the primary actors leveraging AI technologies to amplify their influence campaigns 1.
According to a senior US intelligence official, Russia has emerged as the most prolific producer of AI-generated content aimed at swaying the presidential vote 5. The country's efforts include creating fake personas and websites to spread disinformation and manipulate public opinion.
Russian operatives are reportedly using AI to create and disseminate content that aims to tip the 2024 election in favor of former President Donald Trump 4. Additionally, they are focusing on stoking fears about immigration, a tactic designed to exploit existing social and political divisions within the United States.
Iran has also been observed employing AI in its influence operations. The country is using the technology to create more convincing fake personas on social media platforms, making it increasingly difficult to distinguish between genuine users and state-sponsored actors 2.
While China's use of AI in election interference appears less pronounced, intelligence officials note that the country is still actively engaged in influence campaigns. China's efforts are primarily focused on shaping US policy and public opinion to align with its interests, rather than supporting specific candidates 3.
The integration of AI into foreign influence campaigns presents new challenges for US intelligence agencies and social media platforms. The technology allows for the rapid creation and dissemination of highly convincing fake content, including deepfake videos and manipulated audio 1.
In response to these threats, US intelligence agencies are working to enhance their capabilities to detect and counter AI-generated disinformation. Collaboration with tech companies and social media platforms is also being emphasized to improve the identification and removal of fake content 2.
Experts stress the importance of public awareness and media literacy in combating the influence of AI-generated disinformation. Educating voters about the potential for manipulated content and encouraging critical thinking when consuming online information are seen as crucial steps in maintaining the integrity of the electoral process 3.
Reference
[1]
A new intelligence report suggests that Russia, Iran, and China are likely to employ artificial intelligence in attempts to sway the 2024 US presidential election. The report highlights concerns about the potential misuse of AI technologies in spreading disinformation and manipulating public opinion.
4 Sources
Microsoft warns of escalating online interference efforts by Russia, China, and Iran as the 2024 US presidential election approaches, with each nation employing distinct strategies and leveraging AI technologies.
4 Sources
Microsoft reveals that Russia and China are using AI-generated content and deepfakes to target U.S. political figures, including Vice President Kamala Harris and several Republican lawmakers, ahead of the upcoming elections.
2 Sources
As the 2024 U.S. presidential election approaches, artificial intelligence emerges as a powerful and potentially disruptive force, raising concerns about misinformation, deepfakes, and foreign interference while also offering new campaign tools.
6 Sources
The United States has issued a stark warning about Russia's increasingly sophisticated and pervasive efforts to interfere in elections worldwide. This alert comes as nations prepare for crucial upcoming polls, including the US presidential election in 2024.
3 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved