6 Sources
[1]
AI deepfakes blur reality in 2026 US midterm campaigns
NEW YORK, March 28 (Reuters) - As the video opens, Democratic Texas State Representative James Talarico appears to stand in front of a Texas flag, beaming. "Radicalized white men are the greatest domestic terrorist threat in our country," the U.S. Senate candidate seems to say into the camera. As a voice whispers "white men," Talarico continues: "So true. So true."

But Talarico never filmed that video. Instead, the clip is an AI-generated ad from the National Republican Senatorial Committee (NRSC), the party's Senate campaign arm, featuring a computer-altered Talarico reciting social media posts he wrote years ago. The words "AI generated" show up in easy-to-miss font in the lower right-hand corner.

The realistic video is among a vanguard of "deepfake" advertisements that some campaigns are already deploying ahead of November's midterm elections, taking advantage of AI tools that are improving at a breakneck pace. The ads are being introduced into a media landscape with few guardrails. There is no federal regulation constraining the use of AI in political messaging, leaving only a patchwork of largely untested state laws. And while social media companies like Meta and X label certain AI-generated content, they have scrapped professional fact-checking systems in favor of user-generated notes.

Politics experts worry such videos could leave voters confused, or even deceived. The stakes are high: the election will determine which party controls Congress for the final two years of Republican President Donald Trump's term, with Democrats seemingly well positioned to capture a majority in the U.S. House of Representatives but facing longer odds in the U.S. Senate.

The ads appear to be effective, political strategists and experts said. One 2025 study, published in the peer-reviewed Journal of Creative Communications, found that people struggle to identify deepfake videos and that their opinions are affected by this type of misinformation.
So far, Republicans appear to be utilizing the technology more frequently than Democrats this election cycle, according to politics experts and a Reuters review of publicly available ads. The Republicans are following the lead of Trump's White House, which has released scores of AI-generated videos and gaming-inspired memes on social media that do everything from disparaging protesters to hyping up the Iran war.

The Talarico ad, for instance, is one of three recent ads created by national Republicans that use deepfake technology - realistic yet fabricated videos made by AI algorithms that have become increasingly easy to create. NRSC Communications Director Joanna Rodriguez defended the ad in a statement to Reuters, saying Democrats were "panicking after seeing and hearing James Talarico's own words." JT Ennis, a spokesperson for Talarico's campaign, said that while his opponents "spend their time making deepfake videos to mislead Texans, we are uniting the people of Texas to win in November."

Among Democrats, the most notable user of AI-generated videos is California Governor Gavin Newsom, a potential 2028 presidential candidate who has frequently employed deepfake videos to troll Trump. But the Democratic Party's national campaign committees have not yet sought to mirror the NRSC's efforts in midterm campaigns.

ADS POSE MISINFORMATION RISKS

The campaign of Republican U.S. Representative Mike Collins of Georgia, who is vying to challenge Democratic Senator Jon Ossoff in November, created a deepfake video in which Ossoff appears to say: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know. I've only seen a farm on Instagram." In a statement, Collins' campaign spokesperson said that as technology evolves, the campaign "will be at the forefront embracing new tactics and strategies that pierce through lopsided legacy media coverage and deliver our message directly to voters."
A spokesperson for Ossoff's campaign declined to comment on the ad. Days after the video ran, the campaign said "yes" when asked by the Atlanta Journal-Constitution if it would "commit to not using deepfakes that misattribute or fabricate words or actions of their opponents to mislead voters."

Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, said the growing use of political content that spreads misinformation risks further eroding U.S. voter trust in institutions. "I think that the types of damage that we can do to the rigor and credibility of elections and democratic systems - and the ability to misinform people about candidates or social issues - very much risks being supercharged," he said.

Still, political strategists say AI-generated videos can be persuasive as well as time- and cost-effective, though they stressed that they need to be used ethically. The technology can be a tool for political satire in a visual format that lends itself to watching and sharing on social media.

STATES PLAY CATCH-UP ON AI

With essentially no federal regulation in place, states have been playing catch-up. Twenty-eight states have passed legislation addressing the use of AI in political ads, with most focused on disclosure rather than an outright ban, according to Ilana Beller, who leads state legislative work on AI at the liberal consumer advocacy group Public Citizen.

But those laws face limits. Many only apply to political campaigns rather than social media users who might spread AI-infused misinformation. Research also suggests that disclaimers are not effective in preventing voters from being persuaded by false ads, Schiff noted. AI technology is inexpensive and accessible enough that down-ballot candidates and local political groups are using it, said Brady Smith, a national Republican political strategist.
For example, in February the Republican Committee for Loudoun County in northern Virginia released three AI-generated ads attacking Democratic Governor Abigail Spanberger, who took office in January. One video showed footage of Spanberger's response to Trump's State of the Union address interspersed with AI-generated video of her appearing to say things like "working hard to bring in commie socialist Marxism, free stuff for illegals, gun grabs and erasing gender norms." A spokesperson for Spanberger declined to comment. A representative for the Loudoun County Republican Committee did not reply to a request for comment.

Other videos are more obviously fake. An ad for Republican Texas Attorney General Ken Paxton's primary campaign against Senator John Cornyn shows an AI-generated version of Cornyn dancing with Democratic Representative Jasmine Crockett, as a narrator says: "Publicly, they're opponents. Privately, they're perfectly in step." A disclosure in small font appears at the end, stating some AI-generated content "is satire that does not represent real events." Cornyn's campaign responded by releasing an AI-generated ad of Paxton driving a convertible with women depicted as "Mistress #1" and "Mistress #2", highlighting allegations of infidelity that have dogged the attorney general during his run. Spokespeople for Paxton and Cornyn's campaigns did not respond to requests for comment.

The exchange reflects how quickly AI-generated attacks are becoming part of routine campaign messaging, despite concerns about their impact on the electoral system. "It's harmful for politicians and campaigns to continue normalizing this," Schiff said.
Reporting by Helen Coster and Joseph Ax in New York; editing by Ross Colvin and Alistair Bell.
[2]
'They feel true': political deepfakes are growing in influence - even if people know they aren't real
AI images of people - such as women in military contexts - are making money and serving as propaganda, researchers say

Online content creators are not just building fake images and videos of prominent public figures; they are also fabricating people and using them in military contexts, which can make them money and even serve as effective propaganda, according to artificial intelligence researchers. Some of these online avatars are sexualized images of women wearing camouflage garb that have generated a significant audience and helped create an idealized image of political figures like Donald Trump, even if the viewer knows the content is not real, according to experts.

"We are blending the lines between political cartoons and reality," said Daniel Schiff, an assistant professor of technology policy at Purdue University and co-director of the Governance and Responsible AI Lab (Grail). "A lot of people feel like these images or videos or the stories they convey, feel true."

The number of political deepfakes has increased dramatically in recent years, according to a Grail database. Since the start of 2025, the organization has catalogued more than 1,000 English language social media posts featuring fake images or videos of prominent political figures and politically important social issues and events. In the previous eight years combined, the organization recorded 1,344 such incidents. That uptick is largely because generative AI technology has improved, which has allowed people to quickly create such content, Schiff said.

We have made it "trivially easy to generate a scene that looks pretty realistic and to place real individuals into scenes", said Sam Gregory, executive director of Witness, an organization dedicated to human rights and combating deceptive AI. But the fake avatars - which mimic real ordinary people rather than known figures - are a different matter again.
In December 2025, an account for Jessica Foster, an AI-generated blond woman often in US military uniform, went live on Instagram and started sharing photos of Foster atop a bunk bed in barracks; sitting in an office chair with her feet on a desk; and walking a tarmac in high heels beside Trump, according to Fast Company. The creators intentionally used that footwear and had her feet appear prominently. The images of Foster, who is not an actual person, drew more than 1 million followers on Instagram. The posts were then linked to an account on OnlyFans, a platform largely used by pornography creators, where visitors could buy foot photos supposedly from Foster.

"Why do you NEVER reply?" a user asked the Foster profile on Instagram, according to the Washington Post. The account has been removed in recent days. "A lot of the AI-generation is to basically get clicks and money or to drive people to a more lucrative place," Gregory said.

But such tools can also serve a political purpose. During the war in Iran, a flood of videos has appeared on social media featuring fake female Iranian soldiers who say: "Habibi, come to Iran," the BBC reported. One of the giveaways was that Iran prohibits women from serving in combat roles. Creators also built an AI-generated female police officer that has more than 26,000 followers on TikTok. A video features it smiling with the text: "President Trump deported over 2.5 million people out of the country. Is this what you voted for? Yes." It got more than 200 likes and 23 comments, including: "absolutely yes."

During the 2024 election, Trump also shared AI-generated images that depicted Taylor Swift fans supporting him. Since 2024, Trump and the White House have shared at least 18 deepfakes on social media, according to the Grail database. But the issue is not limited to the right.
California governor Gavin Newsom, who many predict will run for president in 2028, has also started sharing deepfakes aimed at Trump, including one that shows the president smiling at a hologram of Jeffrey Epstein.

The AI researchers said political deepfakes can still be persuasive even if consumers know they aren't real. Foster is "walking in high heels, in a military uniform, her military badge is completely wrong. There is no reason she would be hanging out with President Trump and Nicolás Maduro", Gregory said. "None of this, if you think about it, makes much sense or bears up to scrutiny. But people aren't necessarily looking for things that are real; they are looking for things that represent their beliefs."

The deepfakes then make it less likely that people will reconsider those beliefs, said Valerie Wirtschafter, a Brookings Institution fellow in its artificial intelligence and emerging technology initiative. The deepfakes are "just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true", said Wirtschafter.

The researchers worry that things will only get worse. The technology used to build Foster could also be used to produce what researchers described as "AI swarms", capable of "coordinating autonomously, infiltrating communities, and fabricating consensus efficiently", according to a recent study in Science. "It's sort of like a troll farm without actually having to have people any more," Wirtschafter said.

But humans can still stop malicious actors from using AI to destabilize society, the researchers said. The Coalition for Content Provenance and Authenticity has developed a "technical standard for publishers, creators and consumers to establish the origin and edits of digital content", according to the group.
It's "embedded in a photo you take on a camera or piece of content created with an AI tool or edited with an AI tool, and then distributed on a platform, so it's meant to be a set of cryptographically signed metadata", Gregory said. The technology companies then need to use that information to label whether the content included AI, Gregory said.

LinkedIn, Pinterest, TikTok and YouTube have all committed to labelling AI-generated content. But an investigator with the Indicator, a media outlet, recently posted 200 AI-generated images and videos on those platforms to determine if they actually marked them. He found that the most diligent ones - LinkedIn and Pinterest - still only labelled 67% of that content; Instagram labelled just 15 of 105 fake images, about 14%.

Meta's oversight board recently stated that it was concerned by reports that the company was "inconsistently implementing" the Coalition's standards "even on content generated by its own AI tools, and that only a portion of such output receives proper labeling". Gregory said the inconsistent labelling is due to a "failure of political will at the senior levels" of the big tech companies. "We don't need to give up on the ability to discern what is real from synthetic," he said. "But we do need to act fast."
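The signed-metadata idea Gregory describes can be illustrated with a short sketch. The Python example below is a deliberately simplified stand-in for the C2PA approach, not an implementation of it: the real standard embeds a binary manifest signed with X.509 certificates, while this sketch signs a small provenance dictionary with an HMAC, and the function and field names are hypothetical. The point it demonstrates is the core property: any edit made without re-signing (here, flipping the `ai_generated` flag) breaks verification.

```python
import hashlib
import hmac
import json

def sign_manifest(manifest: dict, key: bytes) -> str:
    """Sign a provenance manifest (simplified: HMAC over canonical JSON).

    Real C2PA manifests use certificate-based signatures, not a shared key.
    """
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str, key: bytes) -> bool:
    """Return True only if the manifest is unchanged since signing."""
    expected = sign_manifest(manifest, key)
    return hmac.compare_digest(expected, signature)

# Hypothetical publisher key and manifest fields, for illustration only.
key = b"publisher-signing-key"
manifest = {"creator": "camera-app", "ai_generated": False, "edits": []}
signature = sign_manifest(manifest, key)

assert verify_manifest(manifest, signature, key)        # untampered: verifies
tampered = {**manifest, "ai_generated": True}           # edited, not re-signed
assert not verify_manifest(tampered, signature, key)    # verification fails
```

A platform receiving content with such metadata could check the signature before deciding whether to apply an "AI-generated" label, which is the labelling step Gregory says the technology companies need to perform consistently.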
[3]
AI Deepfakes Blur Reality in 2026 US Midterm Campaigns
By Joseph Ax and Helen Coster, NEW YORK, March 28 (Reuters).
[4]
AI deepfakes blur reality in 2026 US midterm campaigns - The Economic Times
AI-generated deepfake political ads are increasingly used in US campaigns, often misleading voters with fabricated content. With limited regulation and weak safeguards, experts warn these tools could erode trust in elections, as their realism makes them persuasive and difficult for audiences to detect.
[5]
AI deepfakes blur reality in 2026 US midterm campaigns | BreakingNews
As the video opens, Democratic Texas State Representative James Talarico appears to stand in front of a Texas flag, beaming. "Radicalised white men are the greatest domestic terrorist threat in our country," the US Senate candidate seems to say into the camera. As a voice whispers "white men," Talarico continues: "So true. So true." But Talarico never filmed that video. Instead, the clip is an AI-generated ad from the National Republican Senatorial Committee (NRSC), the party's Senate campaign arm, featuring a computer-altered Talarico reciting social media posts he wrote years ago. The words "AI generated" show up in easy-to-miss font in the lower righthand corner. The realistic video is among a vanguard of "deepfake" advertisements that some campaigns are already deploying ahead of November's midterm elections, taking advantage of AI tools that are improving at a breakneck pace. The ads are being introduced into a media landscape with few guardrails. There is no federal regulation constraining the use of AI in political messaging, leaving only a patchwork of largely untested state laws. And while social media companies like Meta and X label certain AI-generated content, they have scrapped professional fact-checking systems in favor of user-generated notes. Politics experts worry such videos could leave voters confused, or even deceived. The stakes are high: the election will determine which party controls Congress for the final two years of Republican president Donald Trump's term, with Democrats seemingly well positioned to capture a majority in the US House of Representatives but facing longer odds in the US Senate. The ads appear to be effective, political strategists and experts said. One 2025 study, published in the peer-reviewed Journal of Creative Communications, found that people struggle to identify deepfake videos and that their opinions are affected by this type of misinformation. 
So far, Republicans appear to be utilising the technology more frequently than Democrats this election cycle, according to politics experts and a Reuters review of publicly available ads. The Republicans are following the lead of Trump's White House, which has released scores of AI-generated videos and gaming-inspired memes on social media that do everything from disparaging protesters to hyping up the Iran war. The Talarico ad, for instance, is one of three recent ads created by national Republicans that use deepfake technology - realistic yet fabricated videos made by AI algorithms that have become increasingly easy to create. NRSC Communications Director Joanna Rodriguez defended the ad in a statement to Reuters, saying Democrats were "panicking after seeing and hearing James Talarico's own words." JT Ennis, a spokesperson for Talarico's campaign, said that while his opponents "spend their time making deepfake videos to mislead Texans, we are uniting the people of Texas to win in November." Among Democrats, the most notable user of AI-generated videos is California Governor Gavin Newsom, a potential 2028 presidential candidate who has frequently employed deepfake videos to troll Trump. But the Democratic Party's national campaign committees have not yet sought to mirror the NRSC's efforts in midterm campaigns. The campaign of Republican US Representative Mike Collins of Georgia, who is vying to challenge Democratic Senator Jon Ossoff in November, created a deepfake video in which Ossoff appears to say: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know. I've only seen a farm on Instagram." In a statement, Collins' campaign spokesperson said that as technology evolves, the campaign "will be at the forefront embracing new tactics and strategies that pierce through lopsided legacy media coverage and deliver our message directly to voters." A spokesperson for Ossoff's campaign declined to comment on the ad.
Days after the video ran, the campaign said "yes" when asked by the Atlanta Journal-Constitution if it would "commit to not using deepfakes that misattribute or fabricate words or actions of their opponents to mislead voters." Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, said the growing use of political content that spreads misinformation risks further eroding U.S. voter trust in institutions. "I think that the types of damage that we can do to the rigor and credibility of elections and democratic systems - and the ability to misinform people about candidates or social issues - very much risks being supercharged," he said. Still, political strategists say AI-generated videos can be persuasive as well as time- and cost-effective, though they stressed that they need to be used ethically. The technology can be a tool for political satire in a visual format that lends itself to watching and sharing on social media. With essentially no federal regulation in place, states have been playing catch-up. Twenty-eight states have passed legislation addressing the use of AI in political ads, with most focused on disclosure rather than an outright ban, according to Ilana Beller, who leads state legislative work on AI at the liberal consumer advocacy group Public Citizen. But those laws face limits. Many only apply to political campaigns rather than social media users who might spread AI-infused misinformation. Research also suggests that disclaimers are not effective in preventing voters from being persuaded by false ads, Schiff noted. AI technology is inexpensive and accessible enough that down-ballot candidates and local political groups are using it, said Brady Smith, a national Republican political strategist.
One video showed footage of Spanberger's response to Trump's State of the Union address interspersed with AI-generated video of her appearing to say things like "working hard to bring in commie socialist Marxism, free stuff for illegals, gun grabs and erasing gender norms." A spokesperson for Spanberger declined to comment. A representative for the Loudoun County Republican Committee did not reply to a request for comment. Other videos are more obviously fake. An ad for Republican Texas Attorney General Ken Paxton's primary campaign against Senator John Cornyn shows an AI-generated version of Cornyn dancing with Democratic Representative Jasmine Crockett, as a narrator says: "Publicly, they're opponents. Privately, they're perfectly in step." A disclosure in small font appears at the end, stating some AI-generated content "is satire that does not represent real events." Cornyn's campaign responded by releasing an AI-generated ad of Paxton driving a convertible with women depicted as "Mistress #1" and "Mistress #2", highlighting allegations of infidelity that have dogged the attorney general during his run. Spokespeople for Paxton and Cornyn's campaigns did not respond to requests for comment. The exchange reflects how quickly AI-generated attacks are becoming part of routine campaign messaging, despite concerns about their impact on the electoral system. "It's harmful for politicians and campaigns to continue normalising this," Schiff said.
AI-generated deepfakes are flooding the 2026 US midterm campaigns, with Republicans deploying fabricated videos of Democratic candidates. Researchers have catalogued more than 1,000 political deepfakes since early 2025, nearing the 1,344 incidents recorded in the previous eight years combined. With no federal regulation and weakened social media safeguards, experts warn these realistic yet fabricated videos could erode voter trust in democratic systems.
AI deepfakes are reshaping political campaigns ahead of the November 2026 US midterm elections, introducing a new era where fabricated videos blur reality and challenge voter perception. The National Republican Senatorial Committee recently released an AI-generated ad featuring Democratic Texas State Representative James Talarico, who appears to recite controversial social media posts he wrote years ago [1]. The video shows "AI generated" in easy-to-miss font in the lower corner, exemplifying how AI in political messaging operates with minimal transparency. This deployment of AI-generated videos marks a significant shift in how political campaigns communicate with voters, raising urgent questions about authenticity and trust.
The surge in political deepfakes has been dramatic. Since the start of 2025, researchers have catalogued more than 1,000 English-language social media posts featuring fake images or videos of prominent political figures [2]. In the previous eight years combined, only 1,344 such incidents were recorded. This explosion reflects how generative AI technology has improved, making it "trivially easy to generate a scene that looks pretty realistic and to place real individuals into scenes," according to Sam Gregory, executive director of Witness [2]. The technology's rapid advancement means campaigns can now produce persuasive content quickly and cost-effectively.
Republicans appear to be utilizing the technology more frequently than Democrats this election cycle, according to politics experts and a Reuters review of publicly available ads [1]. The Talarico ad is one of three recent ads created by national Republicans that use deepfake technology. Republican U.S. Representative Mike Collins of Georgia created a deepfake video showing Democratic Senator Jon Ossoff saying: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know. I've only seen a farm on Instagram" [3]. Republicans are following the lead of Donald Trump's White House, which has released scores of AI-generated videos and gaming-inspired memes on social media [1].
Among Democrats, California Governor Gavin Newsom stands out as the most notable user of AI-generated videos, frequently employing deepfake videos to troll Trump [3]. Newsom, a potential 2028 presidential candidate, has shared deepfakes including one showing the president smiling at a hologram of Jeffrey Epstein [2]. However, the Democratic Party's national campaign committees have not yet sought to mirror the NRSC's efforts in US midterm campaigns.
The ads are being introduced into a media landscape with few guardrails. There is no federal regulation constraining the use of AI in political messaging, leaving only a patchwork of largely untested state laws [1]. Social media companies like Meta and X label certain AI-generated content, but they have scrapped professional fact-checking systems in favor of user-generated notes [3]. This regulatory void creates significant risks for voter confusion and misinformation, and the stakes are high: the election will determine which party controls Congress for the final two years of Trump's term.
Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, warns that the growing use of political content that spreads misinformation risks further eroding U.S. voter trust in institutions [3]. "I think that the types of damage that we can do to the rigor and credibility of elections and democratic systems - and the ability to misinform people about candidates or social issues - very much risks being supercharged," he said [5]. A 2025 study published in the peer-reviewed Journal of Creative Communications found that people struggle to identify deepfake videos and that their opinions are affected by this type of misinformation [4].
What's particularly concerning is that political deepfakes can remain persuasive even when viewers know they aren't real. "A lot of people feel like these images or videos or the stories they convey, feel true," said Schiff [2]. People aren't necessarily looking for things that are real; they are looking for things that represent their beliefs, making deepfakes "just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true," according to Valerie Wirtschafter, a Brookings Institution fellow [2].
The technology extends beyond manipulating real politicians to creating entirely fabricated personas. In December 2025, an account for Jessica Foster, an AI-generated woman often depicted in US military uniform, went live on Instagram and accumulated more than 1 million followers [2]. The posts were linked to an OnlyFans account where visitors could buy photos. During the war in Iran, a flood of videos appeared featuring fake female Iranian soldiers saying "Habibi, come to Iran", despite Iran prohibiting women from serving in combat roles [2]. These fabricated videos serve dual purposes as both propaganda and revenue generators.
An AI-generated female police officer with more than 26,000 followers on TikTok posted a video stating: "President Trump deported over 2.5 million people out of the country. Is this what you voted for? Yes" [2]. The video received more than 200 likes, demonstrating how fabricated videos can effectively engage audiences and shape political narratives. Since 2024, Trump and the White House have shared at least 18 deepfakes on social media [2].
As November approaches, voters face the challenge of navigating a media environment where reality and fabrication increasingly merge. Political strategists acknowledge that AI-generated videos can be persuasive as well as time- and cost-effective, though they stress the need for ethical use [1]. The technology can serve as a tool for political satire in a visual format that lends itself to watching and sharing on social media. However, without robust content labeling standards and federal oversight, distinguishing satire from deliberate misinformation becomes increasingly difficult. The question remains whether democratic systems can adapt quickly enough to preserve voter confidence in an era where seeing is no longer believing.
Summarized by Navi