Curated by THEOUTPOST
On Fri, 16 Aug, 8:01 AM UTC
6 Sources
[1]
How Meta is battling AI-generated Russian misinformation ahead of the US election
The Russian influence operations used generative AI to create personas for fake journalists and to publish stories on fictitious news sites with distorted information drawn from authentic articles, Meta said in its report.

While past Russian efforts to influence US politics relied on hot-button social and cultural issues in a given country to gain traction, the current "deceptive campaign" is focused mostly on Russia's war in Ukraine, for which Russian operators are trying to rally support, the report said.

Russia has had a frosty relationship with Meta since its invasion of Ukraine in 2022. Facebook pulled all advertising in Russia and blocked Russian ads shortly after the invasion; months later, Russia designated Meta an extremist and terrorist organization.

"Between now and the US elections in November, we expect Russia-based operations to promote supportive commentary about candidates who oppose aid to Ukraine and criticize those who advocate for aiding its defenses," the Meta report said. "This could take the shape of blaming economic hardships in the US on providing financial help to Ukraine, painting Ukraine's government as unreliable, or amplifying voices expressing pro-Russia views on the war and its prospects."

Meta said it targets and removes deceptive posts and accounts that rely heavily on AI or are run by contractors in for-hire deception campaigns. Neither approach has been particularly effective at avoiding detection, Meta said, describing the operations as "low-quality, high-volume" with lapses in operational security.

"GenAI-powered tactics provide only incremental productivity and content-generation gains to the threat actors, and have not impeded our ability to disrupt their influence operations," the report said. "In fact, we continue to see real people calling these networks out as trolls, as they struggle to engage authentic audiences."
[2]
Russia's AI campaigns to influence US election are failing: Meta
In its new security report, Meta said that deception campaigns using AI methods provide merely incremental benefits to the threat actors. The company also confirmed that Russia is currently the top source of "coordinated inauthentic behavior" (CIB) using bogus social media accounts.

Meta underlined that Russia remains the number one source of the global CIB networks it has disrupted since 2017, with 39 covert influence operations. The next most frequent sources of foreign interference are Iran, with 30 CIB networks, and China, with 11, according to Meta.

Meta defines CIB as coordinated efforts to manipulate public debate for a strategic goal, with fake accounts central to the operation: people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. "When we investigate and remove these operations, we focus on behavior, not content -- no matter who's behind them, what they post or whether they're foreign or domestic," said Meta.

Meta said it continues to monitor and assess the risks associated with evolving technologies like AI. Its findings so far suggest that GenAI-powered tactics provide only incremental productivity and content-generation gains to the threat actors, and have not impeded the company's ability to disrupt their influence operations. "We continue to assess that our industry's defense strategies, including our focus on behavior (rather than content) in countering adversarial threat activity, already apply and appear effective at this time," added Meta.
[3]
Meta Fends Off AI-aided Deception As US Election Nears
Russia is putting generative artificial intelligence to work in online deception campaigns, but its efforts have been unsuccessful, according to a Meta security report released Thursday. The parent company of Facebook and Instagram found that so far AI-powered tactics "provide only incremental productivity and content-generation gains" for bad actors, and that Meta has been able to disrupt deceptive influence operations.

Meta's efforts to combat "coordinated inauthentic behavior" on its platforms come as fears mount that generative AI will be used to trick or confuse people in elections in the United States and other countries. Facebook has been accused for years of being used as a powerful platform for election disinformation; Russian operatives used Facebook and other US-based social media to stir political tensions in the 2016 election won by Donald Trump.

Experts fear an unprecedented deluge of disinformation from bad actors on social networks because of the ease of using generative AI tools such as ChatGPT or the Dall-E image generator to make content on demand and in seconds. AI has been used to create images and videos, and to translate or generate text, along with crafting fake news stories or summaries, according to the report.

Russia remains the top source of "coordinated inauthentic behavior" using bogus Facebook and Instagram accounts, Meta security policy director David Agranovich told reporters. Since Russia's invasion of Ukraine in 2022, those efforts have been concentrated on undermining Ukraine and its allies, according to the report. As the US election approaches, Meta expects Russia-backed online deception campaigns to attack political candidates who support Ukraine.

When Meta scouts for deception, it looks at how accounts act rather than the content they post. Influence campaigns tend to span an array of online platforms, and Meta has noticed posts on X, formerly Twitter, used to make fabricated content seem more credible.
Meta shares its findings with X and other internet firms, and says a coordinated defense is needed to thwart misinformation. "As far as Twitter (X) is concerned, they are still going through a transition," Agranovich said when asked whether Meta sees X acting on deception tips. "A lot of the people we've dealt with in the past there have moved on."

X has gutted trust and safety teams and scaled back the content moderation efforts once used to tame misinformation, making it what researchers call a haven for disinformation. False or misleading US election claims posted on X by owner Elon Musk have amassed nearly 1.2 billion views this year, a watchdog reported last week, highlighting the billionaire's potential influence on the highly polarized White House race.

Researchers have raised alarm that X is a hotbed of political misinformation. They have also flagged that Musk, who purchased the platform in 2022 and is a vocal backer of Donald Trump, appears to be swaying voters by spreading falsehoods on his personal account. "Elon Musk is abusing his privileged position as owner of a... politically influential social media platform to sow disinformation that generates discord and distrust," warned Imran Ahmed, CEO of the Center for Countering Digital Hate.

Musk recently faced a firehose of criticism for sharing with his followers an AI deepfake video featuring Trump's Democratic rival, Vice President Kamala Harris.
[4]
Meta sees limited use of GenAI in Russian disinformation
An EU Commission probe into Facebook and Instagram's handling of Russian fake news is still pending.

Generative artificial intelligence tools used to spread mostly Russian disinformation on Meta's platforms have not been disruptive, the US tech giant said in its quarterly Adversarial Threat Report published yesterday (15 August). The findings suggest that GenAI-powered tactics provide "only incremental productivity and content-generation gains" to the threat actors, while the tools Meta uses to counter the spread of disinformation currently appear to be effective.

This comes as a European Commission probe into Facebook and Instagram's handling of disinformation under the Digital Services Act (DSA) is still pending. The Commission fears that the networks are vulnerable to Russian misinformation and are potentially a target for Russian networks. A company spokesperson said in April that Meta has a "well-established process for identifying and mitigating risks on our platforms" in place.

In its report, Meta said that the Russian campaign published a large volume of stories on fictitious 'news' websites that tried to mimic authentic articles from across the internet, including mainstream media. These stories were likely summaries of the originals generated using AI tools to make them appear more unique, the report said. The same campaign also posted AI-generated news-reader videos on YouTube and ran fictitious journalist personas.

Since the start of Russia's war in Ukraine in 2022, the fake news campaigns have largely focused on undermining Ukraine at home and abroad, though some networks also targeted other countries on Russia's border, such as Georgia and Moldova.

Ahead of the US elections in November, the report said, Russia-based operations are expected to promote content supportive of presidential candidates who oppose aid to Ukraine. This could include blaming US economic hardships on financial help to Ukraine, presenting Ukraine's government as unreliable, or amplifying voices that express pro-Russia views on the war. Besides Russia, Meta also took down a number of fake accounts and pages originating in Iran and China.
[5]
Meta fends off AI-aided deception as US election nears
[6]
Meta fends off AI-aided deception as US election nears
Meta has identified and disrupted a Russian influence operation using AI-generated content to spread misinformation about the upcoming 2024 US election. The campaign, though limited in scope, raises concerns about the potential misuse of AI in political manipulation.
Meta, the parent company of Facebook and Instagram, has recently exposed a Russian influence operation attempting to manipulate public opinion ahead of the 2024 US presidential election. The campaign, which utilized AI-generated content, marks a significant development in the realm of digital disinformation.
The operation involved the creation of fake news articles and memes using artificial intelligence tools. These AI-generated materials were designed to mimic legitimate news sources, potentially misleading readers about critical election-related issues. While the campaign's reach was reportedly limited, it highlights the evolving nature of online disinformation tactics.
Meta's security team swiftly identified and dismantled the network responsible for spreading this AI-generated content. The company's proactive approach involved removing associated accounts and blocking domains linked to the operation. This rapid response demonstrates Meta's commitment to safeguarding the integrity of democratic processes on its platforms.
Despite the innovative use of AI, the Russian campaign was described as "relatively low in sophistication and volume" by Ben Nimmo, Meta's global threat intelligence lead. The operation primarily targeted audiences in Germany, France, and Ukraine, with some content aimed at the United States. This multi-national focus suggests a broader strategy to influence international perceptions.
The discovery of this AI-aided disinformation campaign raises concerns about the potential for more sophisticated attempts in the future. As AI technology continues to advance, there is growing apprehension about its misuse in creating and disseminating false or misleading information. This incident serves as a wake-up call for social media platforms, election officials, and the public to remain vigilant against increasingly sophisticated forms of digital manipulation.
In response to these emerging threats, Meta has emphasized the importance of collaboration between tech companies, government agencies, and civil society organizations. The company has been working closely with partners to share information and develop strategies to detect and counter AI-generated disinformation campaigns. This collaborative approach is seen as crucial in maintaining the integrity of democratic processes in the digital age.
As AI-generated content becomes more prevalent and sophisticated, experts stress the importance of enhancing public awareness and media literacy. Educating users about the potential for AI-generated disinformation and providing tools to identify such content is becoming increasingly vital in the fight against digital manipulation.
© 2025 TheOutpost.AI All rights reserved