Curated by THEOUTPOST
On Wed, 16 Oct, 4:07 PM UTC
6 Sources
[1]
'If You Can Keep It': AI In This Election And Beyond : 1A
A woman holds an American flag during a naturalization ceremony.

It's a technology that promises to bring radical change to many facets of our lives - from the arts to healthcare and business. During the 2024 election season, experts warn it could also shake up the world of politics. We're talking about artificial intelligence. 2024 is the first presidential election with the powerful technology in play.

Currently, there are few regulations on the use of AI in politics. Last month, the Federal Election Commission decided not to impose new rules on the technology ahead of the election. That means it's fair game - and it's being used as such. In August, former President Donald Trump posted an AI-generated image of Taylor Swift endorsing him, drawing a response from the pop megastar. In July, Elon Musk shared a video on X that cloned Vice President Kamala Harris' voice, making her appear to say things she never said.

Beyond the memes, U.S. intelligence officials say Russia and Iran are using the technology to influence our election. OpenAI, the company behind tools like ChatGPT and DALL-E, has noticed these efforts as well. So how will AI affect this election, and elections going forward?
[2]
4 ways AI can be used and abused in the 2024 election, from deepfakes to foreign interference
Barbara A. Trish does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.

The American public is on alert about artificial intelligence and the 2024 election. A September 2024 poll by the Pew Research Center found that well over half of Americans worry that artificial intelligence - or AI, computer technology mimicking the processes and products of human intelligence - will be used to generate and spread false and misleading information in the campaign.

My academic research on AI may help quell some concerns. While this innovative technology certainly has the potential to manipulate voters or spread lies at scale, most uses of AI in the current election cycle are, so far, not novel at all. I've identified four roles AI is playing or could play in the 2024 campaign - all arguably updated versions of familiar election activities.

1. Voter information

The 2022 launch of ChatGPT brought the promise and peril of generative AI into public consciousness. This technology is called "generative" because it produces text responses to user prompts: It can write poetry, answer history questions - and provide information about the 2024 election.

Rather than search Google for voting information, people may instead ask generative AI a question: "How much has inflation changed since 2020?" for example. Or, "Who's running for U.S. Senate in Texas?" Some generative AI platforms, such as Google's AI chatbot Gemini, decline to answer questions about candidates and voting. Others, such as Meta's AI tool Llama, respond - and respond accurately.

But generative AI can also produce misinformation. In the most extreme cases, AI can have "hallucinations," offering up wildly inaccurate results. A CBS News account from June 2024 reported that ChatGPT had given incorrect or incomplete responses to some prompts asking how to vote in battleground states. And ChatGPT didn't consistently follow the policy of its owner, OpenAI, and refer users to CanIVote.org, a respected site for voting information.

As with the web, people should verify the results of AI searches. And beware: Google's Gemini now automatically returns answers to Google search queries at the top of every results page. You might inadvertently stumble into AI tools when you think you're searching the internet.

2. Deepfakes

Deepfakes are fabricated images, audio and video produced by generative AI and designed to replicate reality. Essentially, these are highly convincing versions of what are now called "cheapfakes" - altered images made using basic tools such as Photoshop and video-editing software.

The potential of deepfakes to deceive voters became clear when an AI-generated robocall impersonating Joe Biden before the January 2024 New Hampshire primary advised Democrats to save their votes for November. After that, the Federal Communications Commission ruled that AI-generated robocalls are subject to the same regulations as all robocalls: They cannot be auto-dialed or delivered to cellphones or landlines without prior consent. The agency also slapped a US$6 million fine on the consultant who created the fake Biden call - but not for tricking voters. He was fined for transmitting inaccurate caller-ID information.

While synthetic media can be used to spread disinformation, deepfakes are now part of the creative toolbox of political advertisers.
One early deepfake aimed more at persuasion than overt deception was an AI-generated ad from a 2022 mayoral race portraying the then-incumbent mayor of Shreveport, Louisiana, as a failing student summoned to the principal's office. The ad included a quick disclaimer that it was a deepfake - a warning not required by the federal government - but it was easy to miss.

Wired magazine's AI Elections Project, which is tracking uses of AI in the 2024 cycle, shows that deepfakes haven't overwhelmed the ads voters see. But they have been used by candidates across the political spectrum, up and down the ballot, for many purposes - including deception.

Former President Donald Trump hints at a Democratic deepfake when he questions the crowd size at Vice President Kamala Harris' campaign events. In lobbing such allegations, Trump is attempting to reap the "liar's dividend" - the opportunity to plant the idea that truthful content is fake. Discrediting a political opponent this way is nothing new. Trump has been claiming that the truth is really just "fake news" since at least the "birther" conspiracy of 2008, when he helped to spread rumors that presidential candidate Barack Obama's birth certificate was fake.

3. Strategic distraction

Some are concerned that AI might be used by election deniers in this cycle to distract election administrators by burying them in frivolous public records requests. For example, the group True the Vote has lodged hundreds of thousands of voter challenges over the past decade working with just volunteers and a web-based app. Imagine its reach if armed with AI to automate its work. Such widespread, rapid-fire challenges to the voter rolls could divert election administrators from other critical tasks, disenfranchise legitimate voters and disrupt the election. As of now, there's no evidence that this is happening.

4. Foreign election interference

Confirmed Russian interference in the 2016 election underscored that the threat of foreign meddling in U.S. politics, whether by Russia or another country invested in discrediting Western democracy, remains a pressing concern. In July, the Department of Justice seized two domain names and searched close to 1,000 accounts that Russian actors had used for what it called a "social media bot farm," similar to those Russia used to influence the opinions of hundreds of millions of Facebook users in the 2020 campaign. Artificial intelligence could give these efforts a real boost. There's also evidence that China is using AI this cycle to spread malicious information about the U.S. One such social media post transcribed a Biden speech inaccurately to suggest he made sexual references.

AI may help election interferers do their dirty work, but new technology is hardly necessary for foreign meddling in U.S. politics. In 1940, the United Kingdom - an American ally - was so focused on getting the U.S. to enter World War II that British intelligence officers worked to help congressional candidates committed to intervention and to discredit isolationists. One target was the prominent Republican isolationist U.S. Rep. Hamilton Fish. By circulating an out-of-context photo of Fish with the leader of an American pro-Nazi group, the British sought to falsely paint Fish as a supporter of Nazi elements abroad and at home.

Can AI be controlled?

Even though it doesn't take new technology to do harm, bad actors can leverage the efficiencies embedded in AI to create a formidable challenge to election operations and integrity.
Federal efforts to regulate AI's use in electoral politics face the same uphill battle as most proposals to regulate political campaigns. States have been more active: 19 now ban or restrict deepfakes in political campaigns.

Some platforms engage in light self-moderation. Google's Gemini responds to prompts asking for basic election information by saying, "I can't help with responses on elections and political figures right now." Campaign professionals may employ a little self-regulation, too. Several speakers at a May 2024 conference on campaign tech expressed concern about pushback from voters if they learn that a campaign is using AI technology. In this sense, public concern over AI might be productive, creating a guardrail of sorts. But the flip side of that concern - what Stanford University's Nate Persily calls "AI panic" - is that it can further erode trust in elections.
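To make the idea of such a guardrail concrete, here is a minimal, illustrative sketch in Python of how a chatbot wrapper might decline election-related prompts and point users to CanIVote.org, the voting-information site mentioned above. It is not how Gemini or any vendor actually implements moderation; the keyword list and function names are assumptions for illustration.

```python
# Illustrative sketch of light self-moderation, as described in the article.
# The refusal message mirrors Gemini's quoted response; the keyword list and
# names are hypothetical, not any platform's real implementation.
ELECTION_KEYWORDS = {
    "election", "vote", "voting", "ballot", "candidate",
    "kamala harris", "donald trump", "polling place",
}

REFUSAL = (
    "I can't help with responses on elections and political figures right now. "
    "For official voting information, try CanIVote.org."
)

def guarded_reply(prompt: str, model_reply) -> str:
    """Refuse election-related prompts; otherwise defer to the model."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in ELECTION_KEYWORDS):
        return REFUSAL
    return model_reply(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"(model answer to: {p})"  # stand-in for a real model call
    print(guarded_reply("Who's running for U.S. Senate in Texas?", echo_model))
    print(guarded_reply("Write a poem about autumn.", echo_model))
```

Production systems use trained classifiers rather than keyword lists, but the effect voters see - a refusal plus a pointer to an authoritative source - is the same.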
[3]
AI deepfakes a top concern for election officials with voting underway
PHOENIX -- In the final weeks of a divisive, high-stakes campaign season, state election officials in political battleground states say they are bracing for the unpredictable, emergent threat posed by artificial intelligence, or AI.

"The number one concern we have on Election Day are some of the challenges that we have yet to face," Arizona Secretary of State Adrian Fontes said. "There are some uncertainties, particularly with generative artificial intelligence and the ways that those might be used."

Fontes, a Democrat, said his office is aware that some campaigns are already using AI as a tool in his hotly contested state and that election administrators urgently need to familiarize themselves with what is real and what is not. "We're training all of our election officials to make sure that they're familiar with some of the weapons that might be deployed against them," he said.

During a series of tabletop exercises conducted over the past six months, Arizona officials for the first time confronted hypothetical scenarios involving disruptions on Election Day, Nov. 5, created or facilitated by AI. Some involved deepfake video and voice-cloning technology deployed by bad actors across social media in an attempt to dissuade people from voting, disrupt polling places, or confuse poll workers as they handle ballots. In one fictional case, an AI-generated fake news headline published on Election Day said there had been shootings at polling places and that election officials had rescheduled the vote for Nov. 6.

"They walk us through those worst-case scenarios so that we can be critically thinking, thinking on our toes," said Gina Roberts, voter education director for the nonpartisan Arizona Citizens Clean Elections Commission and one of the participants in the exercise.

The tabletop exercise also studied recent real-world examples of AI being deployed to try to influence elections. In January, an AI-generated robocall mimicking President Joe Biden's voice was used to dissuade New Hampshire Democrats from voting in the primary. The Federal Communications Commission assessed a $6 million fine against the political consultant who made it. In September, Taylor Swift revealed on Instagram that she went public with her endorsement of Vice President Kamala Harris in part to refute an AI-generated deepfake image that falsely showed her endorsing Donald Trump.

There have also been high-profile cases of foreign adversaries using AI to influence the campaign. OpenAI, the company behind ChatGPT, says it shut down a secret Iranian effort to use its tools to manipulate U.S. voter opinion. The Justice Department has also said that Russia is actively using AI to feed political disinformation onto social media platforms.

"The primary targets of interest are going to be in swing states, and they're going to be swing voters," said Lucas Hanson, co-founder of CivAI, a nonprofit group tracking the use of AI in politics in order to educate the public. "An even bigger [threat] potentially is trying to manipulate voter turnout, which in some ways is easier than trying to get people to actually change their mind," Hanson said. "Whether or not that shows up in this particular election, it's hard to know for sure, but the technology is there."

Federal authorities say that while the risks aren't entirely new, AI is amplifying attacks on U.S. elections with "greater speed and sophistication" at lower costs.
"Those threats being supercharged by advanced technologies -- the most disruptive of which is artificial intelligence," Deputy Attorney General Lisa Monaco said last month. In a bulletin to state election officials, the Department of Homeland Security warns that AI voice and video tools could be used to create fake election records; impersonate election staff to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online. Hanson says voters need to educate themselves on spotting AI attempts to influence their views. "In images, at least for now, oftentimes if you look at the hands, then there'll be the wrong number of fingers or there will be not enough appendages. For audio, a lot of times it still sounds relatively robotic. In particular, sometimes there will be these little stutters," he said. Social media companies and U.S. intelligence agencies say they are also tracking nefarious AI-driven influence campaigns and are prepared to alert voters about malicious deepfakes and disinformation. But they can't catch them all. More than 3 in 4 Americans believe it's likely AI will be used to affect the election outcome, according to an Elon University poll conducted in April 2024. Many voters in the same poll also said they're worried they are not prepared to detect fake photos, video and audio on their own. "In the long term, if you can see something that seems impossible and it also makes you really, really mad, then there's a pretty good chance that that's not real," Hanson said. "So part of it is you have to learn to listen to your gut." In states like Arizona, which could decide a razor tight presidential race, the stakes are higher than ever. "AI is just the new kid on the block," Fontes said. "What exactly is going to happen? We're not sure. We are doing our best preparing for everything except Godzilla. We're preparing for about everything, because if Godzilla shows up, all bets are off."
[4]
Will AI Trickery and Deepfake Crash the US Presidential Election Party?
Per research, 39% of Americans believe AI will be misused with harmful intent during the presidential campaign, while only 5% think it will be used primarily for good.

With the US presidential elections less than a month away, widespread concerns have been raised about AI's potential to spread misinformation. The stakes are high in the close fight between former President and Republican candidate Donald Trump and Democrat Kamala Harris, and the AI landscape is fraught with challenges, underscoring the need to safeguard electoral integrity.

The internet is swamped with AI-generated deepfakes -- including pop star Taylor Swift endorsing Trump, actor Will Smith and Trump eating noodles, suggestive videos of Rep Alexandria Ocasio-Cortez, scam advertisements, and a deepfake video of Trump running from police while being arrested, among others.

Deepfake technology, which employs artificial intelligence to imitate a person's voice or appearance in audio or video, has been around for years. However, it has become far more accessible, allowing almost anyone with a computer to easily create convincing deepfakes at little or no cost and share them on social media. As per a recent report by video content platform Kapwing, 64% of deepfake videos of the ten most "deepfaked" individuals were of politicians and business leaders. Unsurprisingly, Donald Trump and Elon Musk topped the list.

While deepfakes and election misinformation aren't new, as AI continues to evolve, its potential to disrupt electoral outcomes and manipulate public perception through realistic audio and video becomes more pronounced. The risk is real: A majority of Americans say they are concerned about the impact of AI on the 2024 presidential campaign.

The deepfake of Swift endorsing Trump, which the latter promoted as fact, sparked an imaginary "Swifties for Trump" movement online. The singer, who initially remained silent, took to social media to refute the claim. She instead endorsed Trump's opponent and expressed concerns about AI and misinformation. "It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter," Swift wrote in a long post on her Instagram account. "The simplest way to combat misinformation is with the truth."

A new Pew Research report finds that 39% of Americans believe AI will be misused with harmful intent during the presidential campaign, while only 5% think it will be used primarily for good. The report also highlights that 57% of US adults - both Republicans and Democrats - say they are very concerned that people or organisations seeking to influence the election will use AI to create and distribute fake or misleading information about the candidates and campaigns.

In terms of political ads, researchers from NYU's Center on Technology Policy conducted an experiment with fake ads and discovered that candidates were perceived as "less trustworthy and less appealing" when their ads included AI disclaimers. The findings highlight the need to balance the benefits of labelling, such as enhancing trust in political messaging, with the drawbacks of potentially discrediting harmless AI use. The study also identified public preferences for disclosure rules.

Until 2021, Trump used social media posts to expand his reach. However, he was banned across platforms for inciting violence when his supporters stormed the US Capitol that year.
In 2023, both Meta and X lifted the ban, and Trump resumed posting on social media. Now, with Musk openly endorsing Trump, the Republican nominee has relied on the platform to spread his agenda.

The AI and election discourse also places responsibility on citizens to consume content with caution. Voters should recognize the sensational, often suspect undertones of political messaging and discern fact from fabrication.

Early this year, big-tech companies announced requirements for labelling AI-generated content to help users distinguish between machine- and human-created material. They also pledged to make voluntary safety commitments. Meanwhile, OpenAI launched image detection tools for users to identify fake content. The company has also partnered with Microsoft to launch a fund to fight deepfakes. And Google said that its AI chatbot, Gemini, will not answer election-related queries on its platform.

Even new-age startups are joining forces to craft policies to tackle misinformation and deepfakes in the AI era. For instance, Anthropic, an AI safety and research company, created a process combining expert-led 'Policy Vulnerability Testing' with automated evaluations to identify risks and improve its responses. The company also shared these tools online as part of its election integrity efforts.

Self-regulation and policing by platforms and big-tech companies like Meta, Google and OpenAI are a step in the right direction, albeit an insufficient one. Government intervention and regulation are needed to offset the damage. As it turns out, we are still in the nascent stage of regulating emerging AI technologies, especially deepfakes.

In July, California governor Gavin Newsom targeted political deepfakes after Musk shared an altered video of Vice President Kamala Harris' campaign. The thin line between memes, facts, and deepfakes is what makes regulating the space so difficult. Newsom signed three bills intended to limit the use of AI in producing misleading images or videos in political ads ahead of the 2024 election. However, earlier this month, a federal judge, citing First Amendment concerns, put the deepfake law on hold.

Researchers at UChicago have studied this topic extensively and suggest that, alongside the harms, AI engagement also presents opportunities. Their paper noted that political parties, media outlets and tech platforms could leverage generative AI to help voters understand complex policies. "Beyond political learning, generative AI could also be used to facilitate communication between citizens and elected officials," it noted.
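To illustrate what provenance labelling can look like in practice, here is a minimal sketch in Python that scans an image's EXIF metadata for generator names. This is an assumption-laden toy, not OpenAI's detection tool or any platform's labelling system: the marker list is hypothetical, and since metadata is trivially stripped, a negative result proves nothing.

```python
# Toy provenance check: many AI image tools write generator names into
# metadata. The marker list is an illustrative assumption; absence of a
# marker does NOT mean an image is authentic.
from PIL import Image
from PIL.ExifTags import TAGS

AI_MARKERS = ("dall-e", "midjourney", "stable diffusion", "firefly")

def metadata_hints(path: str) -> list[str]:
    """Return metadata entries that mention known AI generators, if any."""
    hints = []
    with Image.open(path) as img:
        for tag_id, value in img.getexif().items():
            name = TAGS.get(tag_id, str(tag_id))
            if any(marker in str(value).lower() for marker in AI_MARKERS):
                hints.append(f"{name}: {value}")
    return hints

if __name__ == "__main__":
    for hint in metadata_hints("example.jpg") or ["no AI markers found"]:
        print(hint)
```

Industry labelling efforts like those mentioned above aim to make such markers cryptographically verifiable rather than this easy to fake or remove.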
[5]
AI-generated images have become a new form of propaganda this election season
An image that was likely created with artificial intelligence tools purported to show a young survivor of Hurricane Helene. The image got millions of views online, even after its provenance was questioned.

After images of the devastation left by Hurricane Helene started to spread online, so too did an image of a crying child holding a puppy on a boat. Some of the posts on X (formerly Twitter) showing the image received millions of views. It prompted emotional responses from many users - including many Republicans eager to criticize the Biden administration's disaster response. But others quickly pointed out telltale signs that the image was likely made with generative artificial intelligence tools, such as malformed limbs and blurriness common to some AI image generators.

This election cycle, such AI-generated synthetic images have proliferated on social media platforms, often after politically charged news events. People watching online platforms and the election closely say that these images are a way to spread partisan narratives, with facts often being irrelevant.

After X users added a community note flagging that the image of the child in the boat was likely AI-generated, some who shared the image, like Sen. Mike Lee (R-Utah), deleted their posts about it, according to Rolling Stone. But even after the image's synthetic provenance was revealed, others doubled down. "I don't know where this photo came from and honestly, it doesn't matter," wrote Amy Kremer, a Republican National Committee member representing Georgia, on X.

"It's a form of political propaganda, a way to signal interest and support for a candidate, almost like in a fandom kind of style," said Renée DiResta, a professor at the McCourt School of Public Policy at Georgetown University, who recently wrote a book about online influencers. "The political campaigns then can pick it up, can retweet it, can boost it and are then seen as being sort of in on the conversation, maybe in on the joke themselves."

Other images likely created by AI that depicted animals on roofs barely above flood water spread after Hurricanes Helene and Milton. After former President Trump and his running mate JD Vance amplified baseless claims about Haitian immigrants in Springfield, Ohio, eating pets and wild animals, AI-generated images of Trump cuddling cats and ducks flooded X and other social media platforms popular with Republicans. Generative AI is one more tool for supporters to interact with their campaigns online, said DiResta. "It's cheap, it's easy, it's entertaining, so why wouldn't you?"

In the same post defending her decision to keep the synthetic image up, Kremer also wrote: "it is emblematic of the trauma and pain people are living through."

The separation between facts and the idea of a deeper truth has echoes in Western philosophy, says Matthew Barnidge, a professor who researches online news deserts and political communication at the University of Alabama. "When you go back and dig through the works of Kant and Kierkegaard and Hegel, [there's] this notion that there is some type of deeper truth which often gets associated with something along the lines of freedom or the sublime, or some concepts like that."

To be clear, when individual fact checks pile up against politicians, research suggests they can change how voters feel about them. One study showed that fact checks did change how Australians feel about their politicians.
But another study showed that fact checks of Trump did not change Americans' views about him, even as they changed their beliefs about individual facts.

Fact-checking images can be trickier than text, said Emily Vraga, a health communication researcher at the University of Minnesota. "There are a lot of studies showing that people have a very hard time knowing what is real versus not when it comes to online imagery," Vraga said. "This was true even before ChatGPT." Arresting visual images can evoke emotions before people have time to process what they are seeing. A team of researchers looked at Pinterest posts portraying vaccine needles, which included misleading images with extra-large needles and a brightly colored fluid. "The needle is much smaller - that's not like a super convincing correction," said Vraga. "It's part of the larger narrative that vaccines are unnatural and dangerous."

Hyper-realistic, often uncanny AI-generated images may live in a gray space between fact and fiction for viewers. While a photorealistic image of pop star Taylor Swift endorsing Trump was clearly not Swift on closer inspection, the passing resemblance had an impact on people who saw it, said New York University art historian Ara Merjian: "It wouldn't have been a scandal if someone had drawn Taylor Swift in a comic endorsing Trump." The pop star cited the AI-generated images and "misinformation" as one reason for her endorsement of Vice President Kamala Harris for president.

The AI images have also partly filled the space that legacy news media left behind as the news industry has shrunk and tech platforms have deprioritized news. "Who's moving into that space? Propagandists," said Barnidge. "Propaganda often presents itself not as news, but kind of seeps in in other ways through lifestyle content."

Politically inspired images are just a fraction of the AI-generated images on social media platforms. Researchers have spotted AI-generated cakes, kitchens and shrimp-like Jesuses rising out of the sea. Some led to websites vying for traffic; others tried to part viewers from their personal information and money. An investigation by 404 Media found that people in developing countries are teaching others to make trending posts using AI-generated images so Facebook will pay them for creating popular content. Payouts can be higher than typical local monthly income. Many of the images created by these content farms evoked strong, sometimes patriotic emotions. Some images looked realistic; others were more artistic.

One of the more striking AI-generated images related to politics was boosted by X's owner, Elon Musk. It portrayed someone resembling Harris wearing a red uniform with a hammer and sickle on her hat. Eddie Perez, a former Twitter employee who focuses on confidence in elections at the nonpartisan nonprofit OSET Institute, said the image is meant to portray Harris as un-American - to send the message "that there is no legitimate way that Kamala Harris and her party could actually win a presidential election."

Images like these are fanning political polarization, which Perez said could undermine people's trust in election results. For months, Republicans have suggested Democrats are likely to steal the election from Trump through various kinds of subterfuge. "There are many, many different ways and modalities that that strategy is being implemented. Generative AI is only one of many different tools in the toolkit, so to speak. Do I think that it is singularly bad or worse than many of the others? No. Does that mean that it's benign? No," said Perez.
[6]
How to spot AI deepfakes that spread election misinformation
Generative AI systems, such as ChatGPT, are trained on large datasets to create written, visual or audio content in response to prompts. When fed real images, some algorithms can produce fake photos and videos known as deepfakes.

Content created with generative artificial intelligence (AI) systems is playing a role in the 2024 presidential election. While these tools can be used harmlessly, they allow bad actors to create misinformation more quickly and realistically than before, potentially increasing their influence on voters. Domestic and foreign adversaries can use deepfakes and other forms of generative AI to spread false information about a politician's platform or doctor their speeches, said Thomas Scanlon, principal researcher at Carnegie Mellon University's Software Engineering Institute and an adjunct professor at its Heinz College of Information Systems and Public Policy. "The concern with deepfakes is how believable they can be, and how problematic it is to discern them from authentic footage," Scanlon said.

Voters have seen more ridiculous AI-generated content -- such as a photo of Donald Trump appearing to ride a lion -- than an onslaught of hyper-realistic deepfakes full of falsehoods, according to the Associated Press. Still, Scanlon is concerned that voters will be exposed to more harmful generative content on or shortly before Election Day, such as videos depicting poll workers saying an open voting location is closed. That sort of misinformation, he said, could prevent voters from casting their ballots because there will be little time to correct the false information.

Overall, AI-generated deceit could further erode voters' trust in the country's democratic institutions and elected officials, according to the university's Block Center for Technology and Society, housed in the Heinz College of Information Systems and Public Policy. "People are just constantly being bombarded with information, and it's up to the consumer to determine: What is the value of it, but also, what is their confidence in it? And I think that's really where individuals may struggle," said Randall Trzeciak, director of the Heinz College Master of Science in Information Security Policy & Management (MSISPM) program.

Leaps and bounds in generative AI

For years, people have spread misinformation by manipulating photos and videos with tools such as Adobe Photoshop, Scanlon said. These fakes are easier to recognize, and they're harder for bad actors to replicate on a large scale. Generative AI systems, however, enable users to create content quickly and easily, even if they don't have fancy computers or software.

People fall for deepfakes for a variety of reasons, faculty at Heinz College said. If viewers are using a smartphone, they're more likely to blame a deepfake's poor quality on bad cell service. If a deepfake echoes a belief viewers already hold -- for example, that a political candidate would make the statement depicted -- they are less likely to scrutinize it.

Most people don't have time to fact-check every video they see, meaning deepfakes can sow doubt and erode trust over time, wrote Ananya Sen, an assistant professor of information technology and management at Heinz College, in a statement. He's concerned that ballot-counting livestreams, while intended to increase transparency, could be used for deepfakes. Once the false information is out there, there's little opportunity to correct it and put the genie back in the bottle.
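As a concrete illustration of the prompt-in, content-out loop this article opens with, here is a minimal sketch using OpenAI's official Python package. The model name and prompt are assumptions for illustration, and an OPENAI_API_KEY environment variable is required.

```python
# Minimal sketch: a generative AI system returns content in response to a
# text prompt. Requires `pip install openai` and an OPENAI_API_KEY variable;
# the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write two sentences about civic participation."}],
)
print(response.choices[0].message.content)
```

The same loop drives image, audio and video generators, which is what makes large-scale fabrication as cheap and easy as the faculty quoted here describe.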
Unlike previous means of creating disinformation, generative AI can also be used to send tailor-made messages to online communities, said Ari Lightman, a professor of digital media and marketing at Heinz College. If one member of the community accidentally shares the content, the others may believe its message because they trust the person who shared it. Adversaries are "looking at consumer behavioral patterns and how people interact with technology, hoping that one of them clicks on a piece of information that might cascade into a viral release of disinformation," Lightman said.

It's difficult to unmask the perpetrators of AI-generated misinformation, as the creators can use virtual private networks and other mechanisms to hide their tracks. Countries with adversarial relationships with the U.S. are likely weaponizing this technology, Lightman said, but he's also concerned about individuals and terrorist groups that may be operating under the radar.

What voters need to know

People should trust their intuition and attempt to verify videos they believe could be deepfakes, Scanlon said. "If you see a video that's causing you to have some doubt about its authenticity, then you should acknowledge that doubt," he said. Scanlon pointed to telltale signs that a video could be a deepfake, and the Block Center has compiled a guide to help voters navigate generative AI in political campaigning. The guide encourages voters to ask candidates questions about their use of AI and to send their elected representatives letters that support stronger AI regulations. "An informed voter should take as much time as they need to have confidence in the information that goes into their decision-making process," Trzeciak said.

Legislative landscape

There is no comprehensive federal legislation regulating deepfakes, and several bills aimed at protecting elections from AI threats have stalled in Congress. Some states have passed laws prohibiting the creation or use of deepfakes for malicious purposes, but not all are explicitly related to election interference. The Pennsylvania State Senate has introduced a bill that would impose civil penalties on those who disseminate campaign advertisements containing AI-generated impersonations of political candidates, provided malicious intent can be proven in court. The bill has yet to come to a vote.

Existing laws are not enough to regulate the use of deepfakes, Scanlon said. But, he added, the murky nature of cybercrimes means that any federal regulation will likely be difficult to enforce. "Enforcement will probably look like making examples of folks and groups periodically to act as a deterrent," Scanlon said.

Beyond implementing and enforcing regulations, Lightman said the country needs to address the political polarization and diminished societal trust in institutions that allow misinformation to catch fire. "Everything we look at is either sarcasm or completely false propaganda. And we don't trust each other," he said. "We have to go back to having a social understanding of how what we're engaged with is eroding trust. If we can understand that, maybe we can take steps to reverse it."
As the 2024 U.S. presidential election approaches, artificial intelligence emerges as a powerful and potentially disruptive force, raising concerns about misinformation, deepfakes, and foreign interference while also offering new campaign tools.
As the United States gears up for the 2024 presidential election, artificial intelligence (AI) has emerged as a powerful and potentially disruptive force in the political landscape. This election cycle marks the first time that AI technology will play a significant role, raising both opportunities and concerns for candidates, voters, and election officials alike [1][2].
One of the most pressing concerns surrounding AI in the election is the proliferation of deepfakes – highly convincing fabricated images, audio, and video produced by generative AI. These deepfakes have the potential to spread misinformation rapidly and influence public opinion. Notable examples include:
- An AI-generated image of Taylor Swift appearing to endorse Donald Trump, which Trump promoted and Swift publicly refuted [1][3][4]
- A video shared by Elon Musk that cloned Vice President Kamala Harris' voice saying things she never said [1][4]
- An AI-generated robocall mimicking President Joe Biden's voice that urged New Hampshire Democrats not to vote in the January primary [2][3]
The ease of creating and disseminating such content has raised alarms among election officials and cybersecurity experts. According to a Pew Research Center poll, 57% of U.S. adults are very concerned about AI being used to create and distribute fake or misleading information about candidates and campaigns [4][5].
U.S. intelligence officials have warned that foreign adversaries, particularly Russia and Iran, are actively using AI to influence the election [1][3]. OpenAI reported shutting down a secret Iranian effort to manipulate U.S. voter opinion using its tools [3]. The Department of Justice has also highlighted Russia's use of AI to feed political disinformation onto social media platforms [3].
The rapid advancement of AI technology has outpaced regulatory efforts. In September 2024, the Federal Election Commission decided not to impose new rules on AI use in politics ahead of the election [1]. However, some states, like California, have attempted to limit the use of AI in producing misleading political ads [4].
Tech companies have taken some steps to address these concerns:
- Google has restricted its Gemini chatbot from answering election-related queries [2][4]
- OpenAI launched image detection tools and partnered with Microsoft on a fund to fight deepfakes [4]
- Big-tech companies announced requirements for labelling AI-generated content and pledged voluntary safety commitments [4]
- Anthropic combined expert-led 'Policy Vulnerability Testing' with automated evaluations to identify election-related risks [4]
Despite the risks, campaigns are increasingly using AI as a strategic tool. From automating voter outreach to creating personalized campaign messages, AI is reshaping how political campaigns operate [2][4]. This has led to debates about the ethical use of AI in politics and its potential impact on voter perception.
Election officials and cybersecurity experts are working to prepare for potential AI-driven disruptions. In Arizona, for example, officials have conducted tabletop exercises to simulate AI-related Election Day scenarios [3]. Voter education initiatives are also being launched to help the public identify AI-generated content and combat misinformation [3][5].
As the election approaches, the full impact of AI on the democratic process remains to be seen. While it offers new opportunities for engagement and efficiency, the technology also presents significant challenges to electoral integrity and public trust. Balancing innovation with safeguards will be crucial in navigating this new frontier of political campaigning.
References

[1] 'If You Can Keep It': AI In This Election And Beyond : 1A
[2] 4 ways AI can be used and abused in the 2024 election, from deepfakes to foreign interference
[3] AI deepfakes a top concern for election officials with voting underway
[4] Will AI Trickery and Deepfake Crash the US Presidential Election Party?
[5] AI-generated images have become a new form of propaganda this election season
[6] How to spot AI deepfakes that spread election misinformation
Artificial intelligence poses a significant threat to the integrity of the 2024 US elections. Experts warn about the potential for AI-generated misinformation to influence voters and disrupt the electoral process.
2 Sources
A comprehensive look at how AI technologies were utilized in the 2024 global elections, highlighting both positive applications and potential risks.
4 Sources
Artificial Intelligence is playing a significant role in the 2024 US presidential race, but not in the ways experts initially feared. Instead of deepfakes and misinformation, AI is being used for campaign organization, voter outreach, and creating viral content.
6 Sources
As the 2024 US presidential election approaches, the rise of AI-generated fake content is raising alarms about potential voter manipulation. Experts warn that the flood of AI-created misinformation could significantly impact the electoral process.
5 Sources
US intelligence officials report that Russia, Iran, and China are using artificial intelligence to enhance their election interference efforts. Russia is identified as the most prolific producer of AI-generated content aimed at influencing the 2024 US presidential election.
10 Sources