Curated by THEOUTPOST
On Wed, 18 Sept, 4:06 PM UTC
2 Sources
[1]
AI-generated election disinformation can subtly influence voters, deepen political divide
Sept. 17 (UPI) -- Experts warn that false images, video and audio created by artificial intelligence to spread disinformation about the 2024 elections can subtly influence voters and worsen the nation's political divide.

A survey by Elon University's Imagining the Digital Future Center found that 69% of participants are not confident that most voters can detect fake photos, audio or video. The survey examined attitudes about AI and the role it is playing in U.S. politics in 2024.

Janet Coats, managing director of the Consortium on Trust in Media and Technology at the University of Florida, told UPI that social media remains the primary channel for transmitting disinformation, and that AI is increasingly used to enhance these misleading and false messages.

"We used to call it propaganda. It's been around as long as people have been communicating in some kind of organized way," Coats said. "What we're starting to see in this election cycle is the rise of artificial intelligence as a really easy and pretty sophisticated, user-friendly tool for creating disinformation, then using social media platforms to push it out to consumers. You're almost reaching everyone."

"Things move by so fast. It might make a quick impression on you and maybe you file it away in the back of your mind and you don't really dig into it, but it colors your perceptions of other things you're seeing."

Prominent figures have shared AI-created content in ways that spread false information as the general election nears. Former President Donald Trump, the Republican nominee, shared an AI-generated endorsement from Taylor Swift in August. Swift would later officially endorse Trump's opponent, Vice President Kamala Harris.

Weeks later, X owner Elon Musk shared an image of Harris dressed in stereotypical communist garb, falsely claiming that Harris "vows to be a communist dictator on day one."

"Can you believe she wears that outfit!?" Musk posted on X.

Lisa Fazio, associate professor of psychology at Vanderbilt University, studies how people learn both true and false information from the world around them, and how that shapes their views or makes them more receptive to false information. She told UPI that these posts by Trump and Musk are ways to signal to their followers what they should believe.

"One of the things about Trump sharing AI-generated images of Taylor Swift is it doesn't need to convince voters she actually did the endorsement," Fazio said. "It's just another sign of, 'Trump is popular and you're with the winning side.'"

AI-generated disinformation is not likely to cause a voter to switch their vote, Fazio and Coats said, but it can increase polarization, creating a greater divide between political ideologies.

Social media companies like X and Meta widely adopted new policies for flagging and downgrading posts and accounts that engaged in disinformation after such content played a role in sparking the Jan. 6 riot at the U.S. Capitol. In the years since, these companies -- X in particular -- have scaled back the teams and mechanisms designed to combat disinformation and misinformation. One of Musk's first moves after taking over the company in 2022 was firing the head of trust policy, and layoffs in X's trust and safety roles continued.

About 73% of respondents to the Elon University survey believe it is very or somewhat likely that AI will be used to influence the outcome of the election by manipulating social media.
About 70% say the election will be affected by the use of AI, and about 62% worry that it is likely to convince some voters not to cast ballots.

False information about how, where and when people can vote, and about their eligibility, is a bigger threat to the election than AI-generated posts like those from Trump or Musk, Coats said. Voters in New Hampshire received robocalls from an artificially generated voice purporting to be President Joe Biden leading up to the state's primary earlier this year. The calls urged voters to "save your vote" for the general election.

"There could be impacts in swing states that create a perception that there's no point in me voting or I have been disqualified from voting or sending out false information on voting," Coats said. "Depressing turnout is one way in the close states to tip things one way or the other."

There are some telltale signs of AI-created images. One apparent limitation of the technology is its inability to create realistic hands, which often have the wrong number of fingers or are posed in unnatural ways. However, the technology will continue to become more sophisticated, and the content it produces will become more convincing.

AI-generated voices are more difficult to detect. "Some people's voices are easy to duplicate," Coats said. "That's one of the things with Biden is he has a very distinctive speech cadence."

The University of Florida and other universities are researching ways to harness AI itself to spot AI-generated disinformation. Researchers at the University of Washington and the Allen Institute for Artificial Intelligence introduced AI software called Grover in 2019; it detects fake news stories with a 92% accuracy rate. There are also tools like Check by Meedan, a fact-checking software used to detect misinformation in Africa and South America.

"They're turning the machine on itself," Coats said.

There have long been concerns about how technology -- most recently AI -- can influence society. Fazio said society has tended to adapt over time to recognize things like digitally edited photos and deepfakes, which gives her some optimism that the same can be true for AI-generated disinformation.

"Pay attention to the source and who it's coming from. Think about if they might have motivated reasons for pushing things consistent with their point of view," she said. "One of my concerns about this type of misinformation isn't just that the existence of it might change people's minds. It's also that it might have people doubt real things."

Fazio referred to a photo of a Harris campaign rally that left some users on social media skeptical about whether the crowd was real or whether the images were AI-generated.

Bills attempting to regulate the use of AI have been introduced in Congress since the launch of ChatGPT in 2022, but none have gained enough traction to suggest meaningful enforcement is on its way. Pursuing regulation will be a long and difficult road, Coats said.

"We're kind of always fighting the last war. There's a regulatory approach we have to think through," Coats said. "Part of the issue is this is global. This is not something you can just regulate in one place and not have there be complications somewhere else."

It will also raise some interesting questions.

"Does the machine have free speech rights? Intellectual property rights?" Coats asked. "When they're regulating this, where are the free speech boundaries and the First Amendment boundaries? How does it all intersect? It is just a big complicated ball of string here that we're trying to figure out as that ball keeps moving."
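Coats' line about "turning the machine on itself" describes what detectors like Grover do at bottom: a classifier scores a passage on how likely it is to have been machine-generated. As a minimal sketch of that idea, the Python below runs an off-the-shelf detector through Hugging Face's text-classification pipeline; the model named here is a publicly released GPT-2 output detector standing in for Grover (which ships as a separate research system), so treat the model choice and its labels as illustrative assumptions rather than a production fact-checking setup.

```python
# Sketch: score a suspect passage with an off-the-shelf AI-text detector.
# The model below is an example stand-in, not Grover itself.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_text = (
    "BREAKING: Officials quietly confirm that ballots cast early "
    "will not be counted in November."
)

result = detector(suspect_text)[0]
print(f"label={result['label']} score={result['score']:.2f}")
# A high-confidence "Fake" label flags likely machine-generated text,
# but no detector's verdict should be treated as the final word.
```

Even the 92% accuracy reported for Grover means roughly one verdict in twelve is wrong, which is why researchers pair such tools with human fact-checkers rather than letting them rule alone.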
[2]
How to Protect Yourself from AI Election Misinformation
As the 2024 race for President heats up, so too does concern over the spread of AI misinformation in elections. "It's an arms race," says Andy Parsons, senior director of Adobe's Content Authenticity Initiative (CAI). And it's not a fair race: the use of artificial intelligence to spread misinformation is a struggle in which the "bad guys don't have to open source their data sets or present at conferences or prepare papers," Parsons says.

AI misinformation has already become an issue this year. Pop sensation Taylor Swift endorsed Vice President Kamala Harris after former President Donald Trump posted a fake image of her endorsing him. "It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift stated in the Instagram caption of her Harris endorsement.

The Swift incident isn't the first instance of AI misinformation this election cycle. Florida Governor and then-Republican presidential candidate Ron DeSantis posted a campaign video in June 2023 that included apparently fake images of Trump hugging former National Institute of Allergy and Infectious Diseases director Anthony Fauci. Earlier this year, a political consultant sent voters artificial intelligence-generated robocalls mimicking President Joe Biden's voice ahead of New Hampshire's presidential primary, suggesting that voting in the primary would preclude them from casting ballots in November.

A new study by Parsons and the CAI, published Wednesday, found that a large majority of respondents (94%) are concerned that the spread of misinformation will impact the upcoming election, and 87% said that the rise of generative AI has made it more challenging to discern fact from fiction online. The data came from 2,002 responses from U.S. citizens, all of whom were 18 or older.

"I don't think there's anything that 93% of the American public agrees on, but apparently this is one of them, and I think there's a good reason for that," said Hany Farid, professor at the University of California, Berkeley and advisor to the CAI. "There are people on both sides creating fake content, people denying real content, and suddenly you go online like, 'What? What do I make of anything?'"

A bipartisan group of lawmakers introduced legislation Tuesday that would prohibit political campaigns and outside political groups from using AI to pretend to be politicians. Multiple state legislatures have also introduced bills regulating deepfakes in elections. Parsons believes now is a "true tipping point" in which "consumers are demanding transparency." But in the absence of significant laws or effective technological guardrails, there are things you can do to protect yourself from AI misinformation heading into November, researchers say.

One key tactic is not to rely on social media for election news. "Getting off social media is step number one, two, three, four, and five; it is not a place to get reliable information," says Farid. You should view X (formerly Twitter) and Facebook as spaces for fun, not spaces to "become an informed citizen," he says. Since social media is where many deepfakes and much AI misinformation disseminate, informed citizens must fact-check their information with sites like Politifact, factcheck.org, and Snopes, or with major media outlets, before reposting. "The fact is: more likely than not you're part of the problem, not the solution," Farid says. "If you want to be an informed citizen, fantastic. But that also means not poisoning the minds of people, and going to serious outlets doing fact checks."

Another strategy is to carefully examine the images you see spreading online. Kaylyn Jackson Schiff and Daniel Schiff, assistant professors of political science at Purdue University, are building a database tracking politically relevant deepfakes. They say technology has made it harder than ever for consumers to be proactive and spot the differences between AI-generated content and reality. Still, they have found in their research that many of the popular deepfakes they study are "lower quality" than real photos.

"We'll go to conferences and people say, 'Well, I knew those were fake. I knew that Trump wasn't meeting with Putin.' But we don't know if this works for everybody," Daniel Schiff says. "We know that the images that can be created can be super persuasive to coin-flip levels of accuracy detection by the public." And as the technology advances, detection will only become more difficult, and the strategy may "not work in two years or three years," he says.

To help users evaluate what they're seeing online, the CAI is working to implement "Content Credentials," which it describes as a "nutrition label" for digital content: verifiable metadata that records when the content was created and edited, and signals whether and how AI may have been used.

Still, researchers agree that reestablishing public faith in trusted information will need to be a multifaceted effort -- including regulations, technology, and consumer media literacy. Says Farid: "There is no silver bullet."
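Content Credentials are built on the C2PA open standard: a cryptographically signed manifest travels with the file and records provenance claims that viewers and platforms can inspect. The snippet below is a simplified sketch of what reading such a "nutrition label" amounts to; real manifests are embedded in the media file and verified with dedicated tooling such as the CAI's open-source c2patool, and the "com.example.ai_usage" assertion here is a hypothetical stand-in for the standard's richer assertion vocabulary.

```python
# Simplified sketch of inspecting a Content Credentials-style manifest.
# Real C2PA manifests are signed and verified cryptographically; this
# toy JSON and the "com.example.ai_usage" assertion are hypothetical.
import json

manifest_json = """
{
  "title": "rally_photo.jpg",
  "claim_generator": "ExampleCamera/1.0",
  "created": "2024-08-07T19:21:00Z",
  "assertions": [
    {"label": "c2pa.actions", "actions": [{"action": "c2pa.created"}]},
    {"label": "com.example.ai_usage", "used_generative_ai": false}
  ]
}
"""

manifest = json.loads(manifest_json)
print(f"Asset: {manifest['title']}")
print(f"Recorded by: {manifest['claim_generator']} at {manifest['created']}")

for assertion in manifest["assertions"]:
    if assertion["label"] == "com.example.ai_usage":
        used = "yes" if assertion["used_generative_ai"] else "no"
        print(f"Generative AI used: {used}")
```

A missing or unverifiable label is itself a signal: content that carries no provenance deserves the same skepticism the researchers above recommend for any viral image.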
Artificial intelligence poses a significant threat to the integrity of the 2024 US elections. Experts warn about the potential for AI-generated misinformation to influence voters and disrupt the electoral process.
As the 2024 US presidential election approaches, a new threat looms on the horizon: artificial intelligence-generated misinformation. Experts are sounding the alarm about the potential for AI to create and spread false information at an unprecedented scale, potentially influencing voters and undermining the democratic process [1].
The rapid advancement of AI technology has made it increasingly difficult to distinguish between genuine and fabricated content. From deepfake videos to AI-written articles, the line between reality and fiction is blurring. This poses a significant challenge for voters who rely on online information to make informed decisions about candidates and issues [2].
One of the most concerning aspects of AI-generated misinformation is its ability to target specific demographics with tailored content. AI algorithms can analyze vast amounts of personal data to create highly persuasive and personalized false narratives, potentially swaying undecided voters or reinforcing existing biases [1].
Social media platforms are at the forefront of the battle against AI-generated misinformation. Companies like Meta, X (formerly Twitter), and YouTube are investing heavily in AI detection tools and content moderation systems. However, the rapid evolution of AI technology means these platforms are often playing catch-up with increasingly sophisticated misinformation tactics [2].
As the threat of AI-generated misinformation grows, governments and regulatory bodies are grappling with how to address it. Proposed solutions include stricter regulations on AI-generated content, increased funding for digital literacy programs, and the development of advanced AI detection tools [1].
Experts emphasize that one of the most effective ways to combat AI-generated misinformation is improved media literacy. Educating voters on how to critically evaluate online content and identify potential misinformation is crucial to maintaining the integrity of the electoral process [2].
Addressing the threat of AI-generated misinformation requires a collaborative effort between tech companies, government agencies, and civil society organizations. Initiatives such as the Partnership on AI and the Integrity Institute are working to develop best practices and ethical guidelines for AI use in political contexts [1].
As the 2024 US presidential election draws near, the battle against AI-generated misinformation intensifies. The outcome of this struggle could have far-reaching implications for the future of democracy in the digital age.