Curated by THEOUTPOST
On Tue, 3 Dec, 4:05 PM UTC
14 Sources
[1]
Meta: AI Made Less Than 1% of Election Misinformation on Our Apps
(Credit: Valera Golovniov/SOPA Images/LightRocket via Getty Images) In a year that's already seen as many as two billion people vote in over a dozen major elections worldwide, AI-generated misinformation was actually far less of a threat than some had predicted, according to a new Meta study. Meta, which is invested in AI's success and building a slew of new data centers to expand its AI presence, monitored trends across its apps for elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil. It found that the risks of political deepfakes and AI-enabled disinformation "did not materialize in a significant way." "Ratings on AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation," the company wrote in a blog post. While there were some confirmed and suspected instances of AI use in spreading disinformation, Meta claims its existing policies mitigated them. For instance, in the month before the US Presidential election, its Imagine AI image generator rejected around 590,000 requests for deepfakes of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden. Roughly 20 foreign interference operations aimed at influencing elections were also successfully thwarted this year, Meta says. Russia was the primary source of these operations, and Iran was the second-most frequent. Meta said some of the fake election videos it removed later reappeared on apps like X and Telegram, however, which have what it says are "fewer safeguards." X and Telegram's moderation policies have come under fire recently, from X's AI Grok being accused of pushing political misinformation to Telegram failing to do enough to stop criminal activity from proliferating on its platform. 
This somewhat optimistic AI study comes as Meta CEO Mark Zuckerberg eyes the future of US tech policy under the incoming Trump administration. While President-elect Donald Trump previously called Facebook his "enemy," he was recently spotted having dinner with Zuckerberg.
[2]
Meta says AI content made up less than 1% of election-related misinformation on its apps | TechCrunch
At the start of the year, there were widespread concerns about how generative AI could be used to interfere in global elections to spread propaganda and disinformation. Fast forward to the end of the year, Meta claims those fears did not play out, at least on its platforms, as it shared that the technology had limited impact across Facebook, Instagram, and Threads. The company says its findings are based on content around major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the U.K., South Africa, Mexico, and Brazil. "While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content," the company wrote in a blog post. "During the election period in the major elections listed above, ratings on AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation." Meta notes that its Imagine AI image generator rejected 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden in the month leading up to election day in order to prevent people from creating election-related deepfakes. The company also found that coordinated networks of accounts that were looking to spread propaganda or disinformation "made only incremental productivity and content-generation gains using generative AI." Meta says the use of AI didn't impede its ability to take down these covert influence campaigns because it focuses on these accounts' behaviors, not on the content they post, regardless of whether it was created with AI. The tech giant also revealed that it took down around 20 new covert influence operations around the world to prevent foreign interference. 
Meta says the majority of the networks it disrupted didn't have authentic audiences and that some of them used fake likes and followers to appear more popular than they actually were. Meta went on to point the finger at other platforms, noting that false videos about the U.S. election linked to Russian-based influence operations were often posted on X and Telegram. "As we take stock of what we've learned during this remarkable year, we will keep our policies under review and announce any changes in the months ahead," Meta wrote.
[3]
Meta says AI-generated content was less than 1 percent of election misinformation
Warnings about AI-fueled election misinformation "did not materialize," Nick Clegg said. AI-generated content played a much smaller role in global election misinformation than what many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said that AI content made up only a fraction of election-related misinformation that was caught and labeled by its fact checkers. "During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation," the company shared in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU's Parliamentary elections. The update comes after numerous government officials and researchers for months raised the alarm about the role generative AI could play in supercharging election misinformation in a year when more than 2 billion people were expected to go to the polls. But those fears largely did not play out -- at least on Meta's platforms -- according to the company's President of Global Affairs, Nick Clegg. "People were understandably concerned about the potential impact that generative AI would have on the forthcoming elections during the course of this year, and there were all sorts of warnings about the potential risks of things like widespread deepfakes and AI-enabled disinformation campaigns," Clegg said during a briefing with reporters. "From what we've monitored across our services, it seems these risks did not materialize in a significant way, and that any such impact was modest and limited in scope." Meta didn't elaborate on just how much election-related AI content its fact checkers caught in the run-up to major elections. 
The company sees billions of pieces of content every day, so even small percentages can add up to a large number of posts. Clegg did, however, credit Meta's policies, including its adoption of AI content labeling earlier this year, which followed recommendations from the Oversight Board. He noted that Meta's own AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance and Tim Walz in the month leading up to election day in the US. At the same time, Meta has increasingly taken steps to distance itself from politics altogether, as well as from some past efforts to police misinformation. The company changed users' default settings on Instagram and Threads to limit recommendations of political content, and has scaled back news on Facebook. Mark Zuckerberg has said he regrets the way the company handled some of its misinformation policies during the pandemic. Looking ahead, Clegg said Meta is still trying to strike the right balance between enforcing its rules and enabling free expression. "We know that when enforcing our policies, our error rates are still too high, which gets in the way of free expression," he said. "I think we also now want to really redouble our efforts to improve the precision and accuracy with which we act."
[4]
Meta claims AI content made up less than 1pc of election misinformation
The social media giant said that its teams took down around 20 new influence operations around the world this year. Meta, the parent company of Instagram, WhatsApp and Facebook, has claimed that artificial intelligence (AI) content made up less than 1pc of election-related misinformation on its apps this year. Earlier this year, Meta revealed plans to set up a dedicated team to combat disinformation and AI misuse ahead of the EU elections, which took place in June. And now, the social media company has said in a blogpost that it ran several election operations centres around the world to "monitor and react swiftly to issues that arose" in relation to major worldwide elections. Countries and regions that held elections this year include the US, EU, Bangladesh, Indonesia, India, Pakistan, France, the UK, South Africa, Mexico and Brazil. Meta's latest announcement follows the contentious US presidential election, where AP reported that "a flood of misinformation" sought to undermine trust in voting. Commenting on the US election, Nick Clegg, president of global affairs at Meta, noted that its Imagine AI image generator rejected 590,000 requests to create images of president-elect Donald Trump, vice-president-elect JD Vance, governor Tim Walz, current vice-president Kamala Harris and current president Joe Biden in the month leading up to election day, in an effort to prevent people from creating deepfakes. The company claimed that while there were instances of confirmed or suspected use of AI to spread misinformation, "the volumes remained low, and our existing policies and processes proved sufficient" to reduce the risk around generative AI content. It added that ratings on AI content related to elections, politics and social topics represented less than 1pc of all fact-checked misinformation. Meta further said that its teams took down around 20 new "covert influence" operations around the world this year. 
In addition, Meta took aim at social media sites X and Telegram, saying that fake videos about the US election linked to Russian-based influence operations were posted on these platforms. X's owner Elon Musk attracted controversy this year when he shared a deepfake ad of Kamala Harris. However, it should be noted that Meta has also found itself in hot water regarding misinformation and disinformation in the past: in April, the European Commission opened an investigation into Meta for allegedly violating EU rules through "deceptive advertising and political content" on Facebook and Instagram. Tackling misinformation, including AI-generated misinformation, has taken priority in recent years; in the US, concerns about inaccurate news coverage have been raised, according to the Pew Research Centre. And while regulation slowly creeps in around the world, groups continue to use social media to spread disinformation and misinformation, influence elections and promote violence. In July, Meta said that it had "never thought about news" as a way to counter misleading content on Facebook and Instagram. A study published in Nature this year found that source-credibility information and social norms help to improve truth discernment and therefore also reduce engagement with misinformation online.
[5]
Meta Claims AI Content Was Less than 1% of Election Misinformation
Amid considerable fears about the impact that AI-generated content may have on the 2024 Presidential Election in the United States, Meta claims that less than one percent of election misinformation was created by AI, at least on its platforms, which include Facebook, Instagram, and Threads. 2024 was a big year for elections across the world. While the United States understandably sucked up a lot of the air in the room across the Western world, people in India, Indonesia, Mexico, and nations within the European Union all cast ballots this year. In a new post, "What We Saw on Our Platforms During 2024's Global Elections," Meta's president of Global Affairs, Nick Clegg, breaks down how people shared and communicated across Meta's platforms, including how people spread misinformation, a particular area of concern for many. "Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats. We have a dedicated team responsible for Meta's cross-company election integrity efforts, which includes experts from our intelligence, data science, product and engineering, research, operations, content and public policy, and legal teams," Clegg writes. "In 2024, we ran a number of election operations centers around the world to monitor and react swiftly to issues that arose, including in relation to the major elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico and Brazil." Clegg notes that no platform will ever strike the perfect balance between free speech and safety all the time, but admits that Meta's error rates have historically been too high. However, Clegg focuses on situations when Meta's platforms remove harmless content, rather than times when harmful content remains available. During the U.S. 
general election cycle, "top of feed reminders on Facebook and Instagram received more than one billion impressions," per Clegg, and these reminders included information about registering to vote, methods by which Americans can vote, and, of course, reminding people to vote on election day. As for content that users shared, there were significant concerns about how AI-generated content would be shared and what sort of impact it could have on misinformation. While some AI-generated content got a lot of attention online, Meta says that AI-generated content, including deepfakes, did not reach the high levels people thought they might. "From what we've monitored across our services, it seems these risks did not materialize in a significant way and that any such impact was modest and limited in scope," Clegg says. "While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content. During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than one percent of all fact-checked misinformation," he continues. Clegg doesn't specifically discuss the impressions that AI-generated fact-checked misinformation may have had. Still, regarding all identified misinformation, AI content was a tiny piece of the pie. Meta, which has its own generative AI tool, Imagine, rejected nearly 600,000 requests to generate images of the candidates and current President Joe Biden. Clegg adds that Meta signed the AI Elections Accord earlier this year, pledging to "help prevent deceptive AI content from interfering with this year's global elections." 
Clegg also touches on foreign interference, saying that in 2024, Meta's dedicated teams took down "around 20 new covert influence operations around the world, including in the Middle East, Asia, Europe, and the U.S." "With every major election, we want to make sure we are learning the right lessons and staying ahead of potential threats," Clegg says. "Striking the balance between free expression and security is a constant and evolving challenge."
[6]
Meta says gen AI had muted impact on global elections this year
(Reuters) - Despite widespread concern that generative AI could interfere with major elections around the globe this year, the technology had limited impact across Meta Platforms' apps, the tech company said on Tuesday. Coordinated networks of accounts seeking to spread propaganda or false content largely failed to build a significant audience on Facebook and Instagram or use AI effectively, Nick Clegg, Meta's president of global affairs, told a press briefing. The volume of AI-generated misinformation was low and Meta was able to quickly label or remove the content, he said. The snapshot from Meta comes as misinformation experts say AI content has so far failed to significantly sway public opinion, as notable deepfake videos and audio, including of President Joe Biden's voice, have been quickly debunked. Coordinated networks of accounts attempting to spread false content are increasingly shifting their activities to other social media and messaging apps with fewer safety guardrails, or are operating their own websites in order to stay online, Clegg said. Even as Meta said it was able to take down about 20 covert influence operations on its platform this year, the company has retreated from more stringent content moderation it employed during the previous U.S. presidential election in 2020. The company heard feedback from users who complained that their content had been removed unfairly, and Meta will aim to protect free expression and be more precise in enforcing its rules, Clegg said. "We feel we probably overdid it a bit," he said. "While we've been really focusing on reducing prevalence of bad content, I think we also want to redouble our efforts to improve the precision and accuracy with which we act on our rules." The move is also in response to push-back from some Republican lawmakers who have questioned what they say is censorship of certain viewpoints on social media. In an August letter to the U.S. 
House of Representatives Judiciary Committee, Meta CEO Mark Zuckerberg said he regretted some content take-downs the company made in response to pressure from the Biden administration. (Reporting by Sheila Dang in Austin; Editing by Chizu Nomiyama)
[7]
Meta says no sign of AI bedeviling elections in 2024
Meta on Tuesday said fears that artificial intelligence would unleash a torrent of misinformation to deceive voters around the world did not come true as elections played out around the world this year. Defenses against deceptive influence campaigns at the networking giant's platform held firm, with no evidence that such coordinated efforts got much attention online, Meta president of global affairs Nick Clegg told reporters. "I don't think the use of generative AI was a particularly effective tool for them to evade our trip wires," Clegg said of those behind coordinated disinformation campaigns. "The delta between what was expected and what appeared is quite significant." Meta says that most of the covert influence operations it has disrupted in recent years were carried out by actors from Russia, Iran and China. Meta has no intent of lowering its guard, however, since generative AI tools are expected to become more sophisticated and more prevalent. Clegg referred to 2024 as the biggest election year ever, with some 2 billion people estimated to have gone to the polls in scores of countries around the world. "People were understandably concerned about the potential impact that generative AI would have on elections during the course of this year," Clegg said during a briefing with journalists. "There were all sorts of warnings about the potential risks of things like widespread deep fakes and AI enabled disinformation campaigns." Preventing the malicious use of generative AI in elections became an industry-wide effort, according to Clegg. Clegg said he was not privy to whether Meta chief executive Mark Zuckerberg and president-elect Donald Trump discussed the tech platform's content moderation policies, when Zuckerberg was invited to Trump's Florida resort last week. Trump has been critical of Meta, accusing the platform of censoring politically conservative viewpoints. 
"Mark is very keen to play an active role in the debates that any administration needs to have about maintaining America's leadership in the technological sphere...and particularly the pivotal role that AI will play in that area," Clegg said. Clegg added that hindsight has led Meta to conclude that it "overdid" content moderation during the Covid-19 pandemic and that the tech company is "redoubling" efforts to improve the precision with which it targets content for removal based on its policies. "Our content rules evolve and change all the time," Clegg said. "We will definitely continue to work on all of that, mindful of the fact that we're never going to get it perfectly right and to everybody's satisfaction."
[8]
Deepfakes and AI Weren't a Big Part of Election Disinformation, Meta Says
While much was made about the potential dangers of deepfakes and artificial intelligence-powered disinformation campaigns ahead of this past year's elections, not much actually showed up on Meta's social media platforms, the company said Tuesday. The parent of Facebook and Instagram says that while there were confirmed and suspected instances where AI was used as part of disinformation operations, "volumes remained low" and the company's existing practices were enough to minimize their impact. In addition, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation on its platforms. "From what we've monitored across our services, it seems these risks didn't materialize in a significant way and any such impact was modest and limited in scope," Nick Clegg, Meta's president of global affairs, said in a call with reporters. That's not to say foreign governments aren't trying to sway the opinions of people around the world through social media campaigns. Meta says that so far this year, its teams have taken down about 20 new covert influence operations around the world, with Russia remaining the top source of these kinds of campaigns. About 2 billion people spread across more than 70 countries were eligible to vote in national elections this year. Election security experts had fretted about the possible impacts of AI-powered deepfakes and other forms of disinformation on the voting public. Social media companies were faced with the challenge of keeping disinformation off their platforms, while not unnecessarily restricting the free expression of their users. Some politicians, including President-elect Donald Trump, frequently criticized the platforms while at the same time using them to spread baseless accusations about election fraud and immigrants.
[12]
Meta says it has taken down about 20 covert influence operations in 2024
Firm names Russia as top source of such activity but says it is 'striking' how little AI was used to try to trick voters Meta has intervened to take down about 20 covert influence operations around the world this year, it has emerged - though the tech firm said fears of AI-fuelled fakery warping elections have not materialised in 2024. Nick Clegg, the president of global affairs at the company that runs Facebook, Instagram and WhatsApp, said Russia was still the No 1 source of the adversarial online activity but said in a briefing it was "striking" how little AI was used to try to trick voters in the busiest ever year for elections around the world. The former British deputy prime minister revealed that Meta, which has more than 3 billion users, had rejected just over 500,000 requests to generate images on its own AI tools of Donald Trump and Kamala Harris, JD Vance and Joe Biden in the month leading up to US election day. But the firm's security experts had to tackle a new operation using fake accounts to manipulate public debate for a strategic goal at the rate of more than one every three weeks. The "coordinated inauthentic behaviour" incidents included a Russian network using dozens of Facebook accounts and fictitious news websites to target people in Georgia, Armenia and Azerbaijan. Another was a Russia-based operation that employed AI to create fake news websites using brands such as Fox News and the Telegraph to try to weaken western support for Ukraine, and used Francophone fake news sites to promote Russia's role in Africa and to criticise that of France. "Russia remains the No 1 source of the covert influence operations we've disrupted to date - with 39 networks disrupted in total since 2017," he said. The next most frequent sources of foreign interference detected by Meta are Iran and China. 
Giving an evaluation of the effect of AI fakery after a wave of polls in 50 countries including the US, India, Taiwan, France, Germany and the UK, he said: "There were all sorts of warnings about the potential risks of things like widespread deepfakes and AI enabled disinformation campaigns. That's not what we've seen from what we've monitored across our services. It seems these risks did not materialise in a significant way, and that any such impact was modest and limited in scope."

But Clegg warned against complacency, saying the relatively low impact of fakery using generative AI to manipulate video, voices and photos was "very, very likely to change". "Clearly these tools are going to become more and more prevalent and we're going to see more and more synthetic and hybrid content online," he said.

Meta's assessment follows conclusions last month from the Centre for Emerging Technology and Security that "deceptive AI-generated content did shape US election discourse by amplifying other forms of disinformation and inflaming political debates". It said there was a lack of evidence about its impact on Donald Trump's election win. It concluded that AI-enabled threats did begin to damage the health of democratic systems in 2024 and warned "complacency must not creep [in]" before the 2025 elections in Australia and Canada.

Sam Stockwell, research associate at the Alan Turing Institute, said AI tools may have shaped election discourses and amplified harmful narratives in subtle ways, particularly in the recent US election. "This included misleading claims that Kamala Harris's rally was AI-generated, and baseless rumours that Haitian immigrants were eating pets going viral with the assistance of xenophobic AI-generated memes," he said.
[13]
Global elections dodge deepfake threat
By the numbers: While Meta said its systems did catch several covert attempts to spread election disinformation using deepfakes, "the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content," the company said.

State of play: Meta introduced new policies this year to prevent everyday users from inadvertently spreading election misinformation using its Meta AI chatbot, including blocking the creation of AI-generated media of politicians.

Zoom out: Meta has invested heavily in broader threat intelligence over the past few years, which Clegg said has helped the company identify coordinated disinformation networks, regardless of whether they use AI.

The big picture: Nearly half of the world's population lives in countries that held major elections this year, prompting concerns about AI deepfakes from intelligence officials globally.

Reality check: Images and videos created using generative AI still lack precision, and that makes it possible, at least for now, for experts to debunk them.

The bottom line: The most problematic deepfakes aren't necessarily the most believable ones, but rather the ones shared by people in power to help propel narratives or conspiracies that support their campaigns.

Go deeper: Generative AI adds to election distrust without fooling many voters
[14]
Meta didn't notice major disinformation in Romanian election
Meta's president of global affairs, Nick Clegg, said the influence of AI in a year of many elections has been limited.

US social media company Meta did not notice major incidents on its platforms during the Romanian election, Clegg told reporters in a briefing. "Throughout the elections we have been in close - almost daily - contact with both government authorities and law enforcement in Romania, including ANCOM, the Ministry of Interior, the Electoral Body, and the Romanian Cybersecurity Agency," Clegg said. "We are not seeing any evidence of major incidents on our platforms in Romania," he added.

Last week Romania's National Audiovisual Council asked the European Commission to open a formal probe into the role of video sharing platform TikTok in the first round of the country's presidential elections on 24 November. Calin Georgescu, a right-wing candidate who ran independently, emerged as the winner with some 22.95% of the votes, mainly because of his strong performance on TikTok. The second round takes place on 8 December.

In response, the Commission sent additional questions to the platform and hosted an online roundtable on 29 November with ANCOM and platforms including TikTok, Meta, Google, Microsoft, and X. In a letter sent to the Romanian authorities and seen by Euronews, TikTok said "no evidence of a Covert Influence Operation on our platform within the last several weeks for the ongoing presidential election in Romania, nor evidence of foreign influence [was found]."

In a blogpost published today, Clegg said that throughout 2024, which saw elections in major democracies including India, Indonesia, Mexico and the EU, Meta's "existing policies and processes proved sufficient to reduce the risk around generative AI content". "During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation," he wrote.
Around the US presidential election on 5 November, Meta said it rejected 590,000 requests to use its Imagine AI image generator to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden.

Clegg said that in general, the company's policies sometimes lead to curbs on freedom of expression. "Too often harmless content gets taken down or restricted and too many people get penalized unfairly," he said, adding that Meta will continue to work on this in the months ahead.
Meta claims that AI-generated content played a minimal role in election misinformation on its platforms in 2024, contrary to widespread concerns about AI's potential impact on global elections.
Meta, the parent company of Facebook, Instagram, and WhatsApp, has released a report claiming that AI-generated content constituted less than 1% of all fact-checked election-related misinformation on its platforms during major global elections in 2024 [1]. This finding comes amid widespread concerns about the potential impact of generative AI on election integrity and misinformation campaigns.
Meta's analysis covered major elections in several countries, including the United States, Bangladesh, Indonesia, India, Pakistan, the European Union Parliament, France, the United Kingdom, South Africa, Mexico, and Brazil [2]. The company ran dedicated election operations centers to monitor and swiftly react to issues arising during these electoral events.
According to Nick Clegg, Meta's President of Global Affairs, the feared widespread use of deepfakes and AI-enabled disinformation campaigns "did not materialize in a significant way" [3]. While there were instances of confirmed or suspected AI use in spreading disinformation, the volumes remained low, and Meta's existing policies and processes were reportedly sufficient to mitigate the risks.
Meta implemented several measures to combat AI-generated misinformation:
The company's Imagine AI image generator rejected approximately 590,000 requests to create images of political figures in the month leading up to the U.S. election [4].
Meta signed the AI Elections Accord, pledging to prevent deceptive AI content from interfering with global elections [5].
The company focused on identifying and taking down covert influence operations, disrupting around 20 such networks worldwide [2].
Meta reported thwarting approximately 20 foreign interference operations aimed at influencing elections, with Russia being the primary source, followed by Iran [1]. The company also noted that some fake election videos removed from its platforms later reappeared on other apps like X (formerly Twitter) and Telegram, which Meta claims have "fewer safeguards" [1].
While Meta's report paints a relatively optimistic picture of AI's impact on election misinformation, the company acknowledges ongoing challenges in balancing free expression and security. Clegg stated that Meta's error rates in enforcing policies are still too high, potentially impeding free expression [3]. The company plans to review its policies and announce any changes in the coming months [2].