6 Sources
[1]
Opinion | Amy Klobuchar: I Knew A.I. Deepfakes Were a Problem. Then I Saw One of Myself.
There's a centuries-old expression that "a lie can travel halfway around the world while the truth is still putting on its shoes." Today, a realistic deepfake -- an A.I.-generated video that shows someone doing or saying something they never did -- can circle the globe and land in the phones of millions while the truth is still stuck on a landline. That's why it is urgent for Congress to immediately pass new laws to protect Americans by preventing their likenesses from being used to do harm.

I learned that lesson in a visceral way over the last month when a fake video of me -- opining on, of all things, the actress Sydney Sweeney's jeans -- went viral.

On July 30, Senator Marsha Blackburn and I led a Senate Judiciary subcommittee hearing on data privacy. We've both been leaders in the tech and privacy space and have the legislative scars to show for it. The hearing featured a wide-reaching discussion with five experts about the need for a strong federal data privacy law. It was cordial and even-keeled, no partisan flare-ups.

So I was surprised later that week when I noticed a clip of me from that hearing circulating widely on X, to the tune of more than a million views. I clicked to see what was getting so much attention. That's when I heard my voice -- but certainly not me -- spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney. The A.I. deepfake featured me using the phrase "perfect titties" and lamenting that Democrats were "too fat to wear jeans or too ugly to go outside."

Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real. As anyone would, I wanted the video taken down or at least labeled "digitally altered content." It was using my likeness to stoke controversy where it did not exist. It had me saying vile things.
And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real. Studies have shown that people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.

X refused to take it down or label it, even though its own policy says users are prohibited from sharing "inauthentic content on X that may deceive people," including "manipulated, or out-of-context media that may result in widespread confusion on public issues." As the video spread to other platforms, TikTok took it down and Meta labeled it as A.I. However, X's response was that I should try to get a "Community Note" to say it was a fake, something the company would not help add.

For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down. But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now. Why should tech companies' profits rule over our rights to our own images and voices? Why do their shareholders and C.E.O.s get to make more money with the spread of viral content at the expense of our privacy and reputations? And why are there no consequences for the people who actually make the unauthorized deepfakes and spread the lies?

This particular video does not in any way represent the gravest threat posed by deepfakes. In July, it was revealed that an impostor had used A.I. to pretend to be Secretary of State Marco Rubio and contacted at least three foreign ministers, a member of Congress and a governor. And this technology can turn the lives of just about anyone completely upside down. Last year, someone used A.I. to clone the voice of a high school principal in Maryland and create audio of him making racist and antisemitic comments.
By the time the audio was proved to be fake, the principal had already been placed on administrative leave and families and students were left deeply hurt.

There is no way to quantify the chaos that could take place going forward without legal checks. Imagine a deepfake of a bank C.E.O. that triggers a bank run; a deepfake of an influencer telling children to use drugs; or a deepfake of a U.S. president starting a war that triggers attacks on our troops. The possibilities are endless. With A.I., the technology has gotten ahead of the law, and we can't let it go any further without rules of the road.

As complicated as this technology is, some solutions are within reach. Earlier this year, President Trump signed the TAKE IT DOWN Act, which Senator Ted Cruz and I pushed to create legal protections for victims when intimate images, including deepfakes, are shared without their consent. This law addresses the rise in cases of predators using A.I. tools to create nude images of victims to humiliate or extort them. We know the consequences of this can be deadly -- at least 20 children have died by suicide recently because of the threat of explicit images being shared without their consent.

That bill was only the first step. That is why I am again working across the aisle on a bill to give all Americans more control over how deepfakes of our voices and visual likenesses are used. The proposed bipartisan NO FAKES Act, cosponsored by Senators Chris Coons, Marsha Blackburn, Thom Tillis and me, would give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment.

The United States is not alone in rising to this challenge. The European Union's A.I. Act, adopted in 2024, mandates that A.I.-generated content be clearly labeled and watermarked.
And in Denmark, legislation is being considered to give every citizen copyright over their face and voice, forcing platforms to remove unauthorized deepfakes just as they would pull down copyrighted music.

In the United States, and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.

We are clearly at just the tip of the iceberg. Deepfakes like the one made of me in that hearing are going to become more common, not less -- and harder for anyone to identify as A.I. The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans.

We can love the technology and we can use the technology, but we can't cede all the power over our own images and our privacy. It is time for members of Congress to stand up for their constituents, stop currying favor with the tech companies and set the record straight. In a democracy, we do that by enacting laws. And it is long past time to pass one.

Amy Klobuchar, a Democrat, is a U.S. Senator from Minnesota.
[2]
Amy Klobuchar Promotes Law Against Deepfakes While Denying She Said Sydney Sweeney Has 'Perfect Titties'
"It had me saying vile things," writes the senator from Minnesota.

Democratic Senator from Minnesota Amy Klobuchar recently appeared on social media in a video saying that actress Sydney Sweeney had "perfect titties" and that Democrats were "the party of ugly people." It was a deepfake, of course, and Klobuchar never uttered those words. But the senator has now written an op-ed in the New York Times to discuss the video and is calling for new legislation against deepfakes.

"The A.I. deepfake featured me using the phrase 'perfect titties' and lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside,'" Klobuchar wrote in the New York Times. "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real."

The video of Klobuchar was originally from a Senate Judiciary subcommittee hearing on data privacy and had been altered to make her look like she was talking about Sweeney. A recent ad from American Eagle featuring the actress became controversial because she talked about her "good genes," a play on the word jeans used to promote the company's denim. Critics said it was a reference to eugenics, and President Donald Trump even weighed in, praising the actress after he learned that she was a registered Republican.

Klobuchar wrote that the fake video had gotten over a million views, and she contacted X to have it taken down or at least labeled as AI-generated content. "It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real," Klobuchar wrote.
But Klobuchar writes that X refused to take it down or label it even though X has a policy against "inauthentic content on X that may deceive people," as well as "manipulated or out-of-context media that may result in widespread confusion on public issues." Anyone who's spent time on X since Elon Musk bought the platform knows that he doesn't really care about manipulated content as long as it serves right-wing interests. But there's also the question of why any manipulated video would need to be labeled if most people could tell it was fake. X reportedly told Klobuchar to add a Community Note, and she was miffed that the company wouldn't help her add one, according to her op-ed.

Klobuchar ends her article by promoting the No Fakes Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), which has cosponsors across party lines, including Democratic Senator Chris Coons of Delaware and Republican Senators Thom Tillis of North Carolina and Marsha Blackburn of Tennessee. The senator from Minnesota writes that the bill "would give people the right to demand that social media companies remove deepfakes of their voice and likeness while making exceptions for speech protected by the First Amendment."

As the EFF notes, the No Fakes Act is deeply flawed, creating what it calls a new censorship infrastructure. The latest version of the law has carve-outs for parody, satire, and commentary, but as the EFF points out, having to prove something is parody in a court of law can be extremely costly.

The irony in Klobuchar drawing attention to the deepfake video is that a lot more people are now going to know it exists. And it's getting posted more on X in the wake of her op-ed. In fact, Gizmodo had difficulty finding the tweet Klobuchar says got 1 million views, but we did find plenty of other people re-posting the video now.
[3]
Sen. Klobuchar warns of AI's dangers after Sydney Sweeney "deepfake" video surfaces
Stephen Swanson is a web producer at CBS News Minnesota.

Amy Klobuchar, Minnesota's senior U.S. senator, says someone used AI to simulate her voice, making a "vulgar and absurd" critique of the controversial American Eagle jeans ad featuring actress Sydney Sweeney. In a New York Times opinion piece published on Wednesday, she described her struggles to get the video -- which she said incorporates elements from a July 30 Senate hearing -- taken down after finding it on X.

"For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down," Klobuchar wrote. "But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now."

Klobuchar says "deepfake" videos like this are just the tip of the iceberg. Last month, an imposter used AI to mimic the voice of Secretary of State Marco Rubio and contacted foreign ministers, a member of Congress and a governor. She's calling on federal lawmakers to support her NO FAKES Act, which would create protections and a process to get videos removed from social media.

"In the United States and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.," she wrote in her opinion piece.

Minnesota has a state law that makes it illegal to distribute AI-generated content related to elections or sexual acts, which led X owner Elon Musk to sue Attorney General Keith Ellison in April. In the lawsuit, Musk argued Minnesota's law violates X's free speech rights and "will lead to blanket censorship, including of fully protected, core political speech."
[4]
Klobuchar weighs in on deepfake video of her talking about Sydney Sweeney
Sen. Amy Klobuchar (D-Minn.) addressed the deepfake video that went viral last month of the senator's likeness offering a "vulgar and absurd critique" of actress Sydney Sweeney's "great jeans" ad campaign.

In a New York Times op-ed, the moderate Democrat called on Congress to pass legislation to protect Americans from the harms of deepfakes, saying the issue requires urgent action amid the proliferation of artificial intelligence (AI) technology. "I learned that lesson in a visceral way over the last month when a fake video of me -- opining on, of all things, the actress Sydney Sweeney's jeans -- went viral," she wrote in the op-ed.

Klobuchar said after she co-led a hearing on data privacy last month, she noticed "a clip of me from that hearing circulating widely on X, to the tune of more than a million views," which the senator then clicked on to watch. "That's when I heard my voice -- but certainly not me -- spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney," she said, referring to the controversial American Eagle advertisement that touted the actress's "great jeans."

Klobuchar explained the AI deepfake featured her using derogatory phrases and "lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside.'" "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real," she said.

Klobuchar said when the clip spread to other platforms, TikTok took it down, and Meta labeled the video as artificial intelligence. But she said the social platform X "refused to take it down or label it." "X's response was that I should try to get a 'Community Note' to say it was a fake, something the company would not help add," she added.

The Hill has reached out to X for comment.
Klobuchar noted that her experience "does not in any way represent the gravest threat posed by deepfakes" and pointed to other recent examples, including when someone used AI to pretend to be Secretary of State Marco Rubio and contacted various high-level government officials.

President Trump in May signed into law a bill that Klobuchar pushed for, cracking down on so-called deepfake revenge porn -- or sexually explicit AI images and videos that are posted without the victim's consent.

Klobuchar is calling now for Congress to pass her bipartisan "No Fakes Act," which "would give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment," she said. "In the United States, and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.," she wrote in the op-ed.

She warned that the country is "at just the tip of the iceberg," noting, "The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans."

"We can love the technology and we can use the technology, but we can't cede all the power over our own images and our privacy," she wrote. "It is time for members of Congress to stand up for their constituents, stop currying favor with the tech companies and set the record straight. In a democracy, we do that by enacting laws. And it is long past time to pass one."
[5]
AI deepfake makes Senator Amy Klobuchar sound like she bashed Sydney Sweeney ad
Sen. Amy Klobuchar said she was surprised when she heard her voice in a clip on X criticizing Sydney Sweeney's American Eagle ad campaign, which plays on the actress's "great genes" to sell the company's jeans. The tone and pitch sounded like her, but they weren't her words, Klobuchar wrote in an Aug. 20 New York Times opinion piece. That's when the Minnesota Democratic senator realized it was a deepfake, a digitally altered video or audio recording that uses a person's voice or image, created by artificial intelligence.

"A realistic deepfake -- an A.I.-generated video that shows someone doing or saying something they never did -- can circle the globe and land in the phones of millions while the truth is still stuck on a landline," Klobuchar wrote in the piece, titled "What I Didn't Say About Sydney Sweeney." She called the so-called video of her "a vulgar and absurd critique."

Klobuchar has pushed for AI regulation on the national level -- an effort that's not just supported by Democrats. In 2024, she and Sen. Ted Cruz, R-Texas, introduced a Senate bill to ban actual and artificial intelligence-generated posts of intimate imagery and deepfakes. The bill also required online platforms to "promptly remove such depictions upon receiving notice."

President Donald Trump in May signed the TAKE IT DOWN Act, a law meant to outlaw deepfakes and revenge pornography. Now, companies must have a process for people to report deepfakes and nonconsensual intimate images, including revenge pornography, and must remove such content within 48 hours of being notified.

Still, the push has its critics. That includes Reps. Thomas Massie, R-Kentucky, and Eric Burlison, R-Missouri, as well as advocates concerned about how the move could limit free speech protections.

In her opinion piece, Klobuchar accused X of not following the stipulations of the new law.
She said the platform didn't take down the deepfake video of her -- or label it as false -- quickly enough. Now, Klobuchar wrote that she's looking for even more policy change to make social media companies remove deepfakes, with some exceptions for free speech protections. Her proposed bill is cosponsored by Sens. Chris Coons, D-Delaware; Thom Tillis, R-North Carolina; and Marsha Blackburn, R-Tennessee. "That bill was only the first step," she wrote in her op-ed.

Klobuchar accuses X of not taking down video

Klobuchar credited tech giants TikTok and Meta for taking the proper precautions to protect her and warn the public that it wasn't actually her speaking in the video. But she slammed X for not following the new law. "X refused to take it down or label it, even though its policy says users are prohibited from sharing 'inauthentic content on X that may deceive people,' including 'manipulated or out-of-context media that may result in widespread confusion on public issues,'" she wrote in the opinion piece. "They must at least include labeling requirements for content that is substantially generated by A.I.," she added.

X did not immediately respond to an inquiry from USA TODAY for a response to Klobuchar's comments.

What are the repercussions of deepfakes?

Reputations are at risk when deepfakes are posted and allowed to linger online, Klobuchar warned in the Times. Deepfakes of other prominent figures, including Secretary of State Marco Rubio, Trump White House Chief of Staff Susie Wiles and even pop star Taylor Swift, have also been posted online and attracted attention to the issue. Deepfakes have also been used by young people as a bullying tactic. The U.S. Department of Homeland Security has also cited an increasing threat of deepfake identities.

In her op-ed, Klobuchar cited a January 2022 study to show that "people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake."
"There is no way to quantify the chaos that could take place without legal checks," Klobuchar wrote later in the piece. "Imagine a deepfake of a bank C.E.O. that triggers a bank run, a deepfake of an influencer telling children to use drugs or a deepfake of a U.S. president starting a war that triggers attacks on our troops. The possibilities are endless."
[6]
Sen. Klobuchar sets record straight: She never said Sydney Sweeney...
Sen. Amy Klobuchar is calling for new legislation to address "deepfakes" after a highly realistic AI-generated video that appeared to show her making outrageous statements about Sydney Sweeney's American Eagle jeans ad went viral.

The Minnesota Democrat took to the opinion page of the New York Times Wednesday to clear the air after the video made the rounds online, appearing to show her speaking at a recent Senate Judiciary subcommittee meeting on data privacy. In her op-ed, Klobuchar decried the bogus footage, which she noted was viewed online more than a million times.

"The A.I. deepfake featured me using the phrase 'perfect t-tties' and lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside,'" the real Sen. Klobuchar wrote. "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real."

"If Republicans are gonna have beautiful girls with perfect t-tties in their ads, we want ads for Democrats too, you know?" the deepfake version of Klobuchar said, eerily mirroring the senator's voice and vocal style. "We want ugly, fat bitches wearing pink wigs and long-ass fake nails being loud and twerking on top of a cop car at a Waffle House because they didn't get extra ketchup, you know?" the video continued. "Just because we're the party of ugly people doesn't mean we can't be featured in ads, OK? And I know most of us are too fat to wear jeans or too ugly to go outside, but we want representation."

The fake-out video's bizarro version of Klobuchar was referencing the controversial American Eagle ad campaign featuring it-girl Sydney Sweeney, in which the blonde-haired, blue-eyed beauty referred to her "good jeans" in a play on words. The ad caused an epic meltdown on the left, with TikTokkers decrying the punny commercial as "Nazi propaganda."
Klobuchar said she reached out to various social media platforms where the video was circulating but had mixed results in getting it taken down. TikTok took it down and Meta labeled it as AI, but the senator said X offered no help beyond suggesting she should try to get a Community Note identifying it as fake.

The whole episode, Klobuchar said, was motivation for a newly proposed piece of legislation dubbed the No Fakes Act, with Senate sponsorship on both sides of the aisle. The act would "give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment," she wrote. Co-sponsors for the new bill include Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.) and Thom Tillis (R-N.C.), Klobuchar said.

Klobuchar said the bill will build on the success of another piece of recently passed legislation governing AI deepfakes, the Take It Down Act. Signed into law by President Trump in May, the act criminalized the "nonconsensual publication of intimate images, including AI-generated content" and established a process for having offending images removed.

"The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans."
Senator Amy Klobuchar urges new legislation to combat deepfakes after a viral AI-generated video falsely attributed comments about actress Sydney Sweeney to her.
Senator Amy Klobuchar recently found herself at the center of a controversy involving an AI-generated deepfake video. The video, which went viral on the social media platform X, falsely depicted the senator making inappropriate comments about actress Sydney Sweeney's appearance and criticizing her own political party [1][2]. Klobuchar immediately recognized the video as fake but noted its convincing nature, stating, "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real" [1].
The incident highlighted the varying approaches of social media platforms in dealing with AI-generated content. While TikTok removed the video and Meta labeled it as AI-generated, X (formerly Twitter) refused to take it down or label it as inauthentic [1][4]. Klobuchar expressed frustration with X's response, noting that the platform suggested she try to get a "Community Note" added to the video, without offering assistance [1].

In response to this incident and the broader implications of AI-generated content, Klobuchar is advocating for stronger legislation to protect individuals from unauthorized deepfakes [3]. She is promoting the bipartisan NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), which aims to give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment [1][4].
Klobuchar emphasized that her experience, while concerning, is not the most severe threat posed by deepfakes. She cited other incidents, including an AI-generated impersonation of Secretary of State Marco Rubio that was used to contact foreign officials [1][5]. The senator warned of potential future scenarios, such as deepfakes triggering bank runs or influencing geopolitical events [1].

The incident comes in the wake of the TAKE IT DOWN Act, signed into law by President Trump in May 2025, which addresses issues related to deepfakes and revenge pornography [4][5]. However, Klobuchar argues that more comprehensive legislation is needed to address the rapidly evolving landscape of AI-generated content [1].
While there is growing support for AI regulation, the push for new legislation has its critics. Some lawmakers and free speech advocates have expressed concerns about potential limitations on First Amendment protections [5]. The Electronic Frontier Foundation (EFF) has criticized the NO FAKES Act, arguing that it could create a new censorship infrastructure and that proving something is parody in court could be prohibitively expensive [2].

Klobuchar's call for action aligns with global efforts to address AI-generated content. She referenced the European Union's AI Act, adopted in 2024, which mandates clear labeling and watermarking of AI content [1]. The senator emphasized the need for the United States to implement similar safeguards within the bounds of the Constitution [4].

As AI technology continues to advance, the debate over how to balance innovation with protection against misuse remains at the forefront of policy discussions. Klobuchar's experience serves as a stark reminder of the potential personal and societal impacts of unchecked AI-generated content in the digital age.
Summarized by Navi