[1]
Deepfakes Barely Impacted 2024 Elections Because They Aren’t Very Good, Research Finds
AI is abundant, but people are good at recognizing when an image has been created using the technology. It seems that although the internet is increasingly drowning in fake images, we can at least put some stock in humanity's ability to smell BS when it matters. A slew of recent research suggests that AI-generated misinformation did not have any material impact on this year's elections around the globe because it is not very good yet.

There has been concern for years that increasingly realistic but synthetic content could manipulate audiences in detrimental ways. The rise of generative AI raised those fears again, as the technology makes it much easier for anyone to produce fake visual and audio media that appear to be real. Back in January, a political consultant used AI to spoof President Biden's voice for a robocall telling voters in New Hampshire to stay home during the state's Democratic primary. Tools like ElevenLabs make it possible to submit a brief soundbite of someone speaking and then duplicate their voice to say whatever the user wants. Though many commercial AI tools include guardrails to prevent this use, open-source models are available.

Despite these advances, a new Financial Times story looking back at the year found that very little synthetic political content went viral anywhere in the world. It cited a report from the Alan Turing Institute which found that just 27 pieces of AI-generated content went viral during the summer's European elections. The report concluded that there was no evidence the elections were impacted by AI disinformation because "most exposure was concentrated among a minority of users with political beliefs already aligned to the ideological narratives embedded within such content." In other words, among the few who saw the content (before it was presumably flagged) and were primed to believe it, the material reinforced existing beliefs about a candidate even when viewers knew it was AI-generated. The report cited as an example AI-generated imagery showing Kamala Harris addressing a rally while standing in front of Soviet flags.

In the U.S., the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% of them were made using AI. On X, mentions of "deepfake" or "AI-generated" in Community Notes typically spiked with the release of new image generation models, not around the time of elections. Interestingly, it seems that users on social media were more likely to misidentify real images as AI-generated than the other way around, but in general, users exhibited a healthy dose of skepticism.

If the findings are accurate, it would make a lot of sense. AI imagery is all over the place these days, but images generated using artificial intelligence still have an off-putting quality to them, exhibiting tell-tale signs of being fake. An arm might be unusually long, or a face might not reflect properly in a mirrored surface; there are many small cues that give away an image as synthetic. AI proponents should not necessarily cheer this news, though. It means that generated imagery still has a ways to go. Anyone who has checked out OpenAI's Sora model knows the video it produces is just not very good: it appears almost like something created by a video game graphics engine (speculation is that it was trained on video games), one that clearly does not understand properties like physics.

That all being said, there are still concerns to be had. The Alan Turing Institute's report did, after all, conclude that beliefs can be reinforced by a realistic deepfake containing misinformation even if the audience knows the media is not real; that confusion over whether a piece of media is real damages trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can be psychologically damaging and harmful to their professional reputations while reinforcing sexist beliefs. The technology will surely continue to improve, so it is something to keep an eye on.
[2]
How we were deepfaked by election deepfakes
Around this time last year, you probably read dozens of dire warnings about generative artificial intelligence's impact on 2024's bumper crop of global elections. Deepfakes would supercharge political disinformation, leaving muddled voters unable to tell fact from fiction in a sea of realistic, personalised lies, the story went. Leaders from Sadiq Khan to the Pope spoke out against them. A World Economic Forum survey of experts ranked AI disinformation as the second-most pressing risk of 2024.

Sure enough, dozens of examples were widely reported. Joe Biden's "voice" on robocalls urged primary voters to stay home; AI-generated videos of non-existent members of Marine Le Pen's family making racist jokes were viewed millions of times on TikTok, while a fake audio clip of Sir Keir Starmer swearing at a staffer went viral on X.

But many experts now believe there is little evidence that AI disinformation was as widespread or impactful as was feared. The Alan Turing Institute identified just 27 viral pieces of AI-generated content during the summer's UK, French and EU elections combined. Only around one in 20 British people recognised any of the most widely shared political deepfakes around the election, a separate study found. In the US, the News Literacy Project catalogued almost 1,000 examples of misinformation about the presidential election. Just 6 per cent involved generative AI. According to TikTok, removals of AI-generated content did not increase as voting day neared. Mentions of terms such as "deepfake" or "AI-generated" in X's user-submitted fact-check system, Community Notes, were more correlated with the release of new image generation models than with major elections, a Financial Times analysis found.

The trend held in non-western countries, too: a study found just 2 per cent of misinformation around Bangladesh's January election was deepfakes. South Africa's polarised election was "marked by an unexpected lack" of AI, researchers concluded. Microsoft, Meta and OpenAI all reported uncovering covert foreign operations attempting to use AI to influence elections this year, but none succeeded in finding a wide audience.

Much of the election-related AI content that did catch on wasn't intended to trick voters. Instead, the technology was often used for emotional arguments: creating images that felt supportive of a certain narrative, even if they were clearly unreal. Kamala Harris addressing a rally decked out with Soviet flags, for instance, or an Italian child eating a cockroach-topped pizza (in reference to the EU's supposed support for insect diets). Deceased politicians were "resurrected" to support campaigns in Indonesia and India. Such "symbolic, expressive, or satirical messages" are in line with traditional persuasion and propaganda tactics, according to Daniel Schiff, an expert in AI policy and ethics at Purdue University. Around 40 per cent of political deepfakes that a Purdue team identified were at least partly intended as satire or entertainment.

What about the "liar's dividend"? This is the idea that people will claim that legitimate content showing them in a bad light is AI-generated, potentially leaving voters feeling that nothing can be believed any more. An Institute for Strategic Dialogue analysis did find widespread confusion over political content on social media, with users frequently misidentifying real images as AI-generated. But most are able to apply healthy scepticism to such claims.
The share of US voters who said it was difficult to understand what news about the candidates is true fell between the 2020 and 2024 elections, according to Pew Research. "We've had Photoshop for ages, and we still largely trust photos," says Felix Simon, a researcher at Oxford University's Reuters Institute for the Study of Journalism who has written about deepfake fears being overblown.

Of course, we cannot let our guard down. AI technology and its social impacts are advancing rapidly. Deepfakes are already proving a dangerous tool in other scenarios, such as elaborate impersonation scams or pornographic harassment and extortion. But when it comes to political disinformation, the real challenge has not changed: tackling the reasons why people are willing to believe and share falsehoods in the first place, from political polarisation to TikTok-fuelled media diets. While the threat of deepfakes may grab headlines, we should not let it become a distraction.
Recent studies reveal that AI-generated misinformation and deepfakes had little influence on global elections in 2024, contrary to widespread concerns. The limited impact is attributed to the current limitations of AI technology and users' ability to recognize synthetic content.
Contrary to widespread concerns, recent research suggests that AI-generated misinformation and deepfakes had minimal impact on global elections in 2024. Despite the rise of generative AI technologies, voters demonstrated a surprising ability to discern synthetic content from authentic media [1].
The Alan Turing Institute reported that only 27 pieces of AI-generated content went viral during last summer's UK, French and EU elections combined. In the United States, the News Literacy Project identified nearly 1,000 examples of election misinformation, with a mere 6% attributed to AI [2].
A study in Bangladesh found that just 2% of misinformation surrounding its January election involved deepfakes. Similarly, South Africa's election was marked by an unexpected lack of AI-generated content [2].
Several factors contributed to the minimal influence of AI-generated content:
Quality of AI-generated media: Current AI technology still produces images and videos with noticeable flaws, making them easier to identify as synthetic [1].
User skepticism: Social media users exhibited a healthy dose of skepticism towards online content, often misidentifying real images as AI-generated rather than the reverse [1].
Limited exposure: Most AI-generated content was concentrated among users with pre-existing political beliefs aligned with the narratives presented [1].
Interestingly, much of the election-related AI content that gained traction was not intended to deceive voters. Instead, it was often used to create emotional arguments or symbolic messages supporting certain narratives, even when clearly unreal [2].
Despite the limited impact, experts warn against complacency:
Reinforcement of beliefs: Even when audiences recognize content as AI-generated, it can still reinforce existing beliefs about candidates [1].
Erosion of trust: The prevalence of AI-generated content may damage overall trust in online sources [1].
Targeted harassment: AI imagery has been used to create pornographic deepfakes targeting female politicians, potentially damaging their reputations and reinforcing sexist beliefs [1].
As AI technology continues to advance, vigilance and ongoing research will be crucial in understanding and mitigating potential impacts on future elections and public discourse.
References
[1] Deepfakes Barely Impacted 2024 Elections Because They Aren’t Very Good, Research Finds
[2] How we were deepfaked by election deepfakes (Financial Times)