5 Sources
[1]
Opinion | Amy Klobuchar: I Knew A.I. Deepfakes Were a Problem. Then I Saw One of Myself.
There's a centuries-old expression that "a lie can travel halfway around the world while the truth is still putting on its shoes." Today, a realistic deepfake -- an A.I.-generated video that shows someone doing or saying something they never did -- can circle the globe and land in the phones of millions while the truth is still stuck on a landline. That's why it is urgent for Congress to immediately pass new laws to protect Americans by preventing their likenesses from being used to do harm. I learned that lesson in a visceral way over the last month when a fake video of me -- opining on, of all things, the actress Sydney Sweeney's jeans -- went viral.

On July 30, Senator Marsha Blackburn and I led a Senate Judiciary subcommittee hearing on data privacy. We've both been leaders in the tech and privacy space and have the legislative scars to show for it. The hearing featured a wide-reaching discussion with five experts about the need for a strong federal data privacy law. It was cordial and even-keeled, no partisan flare-ups.

So I was surprised later that week when I noticed a clip of me from that hearing circulating widely on X, to the tune of more than a million views. I clicked to see what was getting so much attention. That's when I heard my voice -- but certainly not me -- spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney. The A.I. deepfake featured me using the phrase "perfect titties" and lamenting that Democrats were "too fat to wear jeans or too ugly to go outside." Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real.

As anyone would, I wanted the video taken down or at least labeled "digitally altered content." It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real. Studies have shown that people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.

X refused to take it down or label it, even though its own policy says users are prohibited from sharing "inauthentic content on X that may deceive people," including "manipulated, or out-of-context media that may result in widespread confusion on public issues." As the video spread to other platforms, TikTok took it down and Meta labeled it as A.I. However, X's response was that I should try to get a "Community Note" to say it was a fake, something the company would not help add.

For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down. But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now. Why should tech companies' profits rule over our rights to our own images and voices? Why do their shareholders and C.E.O.s get to make more money with the spread of viral content at the expense of our privacy and reputations? And why are there no consequences for the people who actually make the unauthorized deepfakes and spread the lies?

This particular video does not in any way represent the gravest threat posed by deepfakes. In July, it was revealed that an impostor had used A.I. to pretend to be Secretary of State Marco Rubio and contacted at least three foreign ministers, a member of Congress and a governor. And this technology can turn the lives of just about anyone completely upside down. Last year, someone used A.I. to clone the voice of a high school principal in Maryland and create audio of him making racist and antisemitic comments. By the time the audio was proved to be fake, the principal had already been placed on administrative leave and families and students were left deeply hurt.

There is no way to quantify the chaos that could take place going forward without legal checks. Imagine a deepfake of a bank C.E.O. that triggers a bank run; a deepfake of an influencer telling children to use drugs; or a deepfake of a U.S. president starting a war that triggers attacks on our troops. The possibilities are endless. With A.I., the technology has gotten ahead of the law, and we can't let it go any further without rules of the road.

As complicated as this technology is, some solutions are within reach. Earlier this year, President Trump signed the TAKE IT DOWN Act, which Senator Ted Cruz and I pushed to create legal protections for victims when intimate images, including deepfakes, are shared without their consent. This law addresses the rise in cases of predators using A.I. tools to create nude images of victims to humiliate or extort them. We know the consequences of this can be deadly -- at least 20 children have died by suicide recently because of the threat of explicit images being shared without their consent.

That bill was only the first step. That is why I am again working across the aisle on a bill to give all Americans more control over how deepfakes of our voices and visual likenesses are used. The proposed bipartisan NO FAKES Act, cosponsored by Senators Chris Coons, Marsha Blackburn, Thom Tillis and me, would give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment.

The United States is not alone in rising to this challenge. The European Union's A.I. Act, adopted in 2024, mandates that A.I.-generated content be clearly labeled and watermarked. And in Denmark, legislation is being considered to give every citizen copyright over their face and voice, forcing platforms to remove unauthorized deepfakes just as they would pull down copyrighted music.

In the United States, and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.

We are clearly at just the tip of the iceberg. Deepfakes like the one made of me in that hearing are going to become more common, not less -- and harder for anyone to identify as A.I. The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans. We can love the technology and we can use the technology, but we can't cede all the power over our own images and our privacy.

It is time for members of Congress to stand up for their constituents, stop currying favor with the tech companies and set the record straight. In a democracy, we do that by enacting laws. And it is long past time to pass one.

Amy Klobuchar, a Democrat, is a U.S. Senator from Minnesota.
[2]
Amy Klobuchar Promotes Law Against Deepfakes While Denying She Said Sydney Sweeney Has 'Perfect Titties'
"It had me saying vile things," writes the senator from Minnesota. Democratic Senator from Minnesota Amy Klobuchar recently appeared on social media in a video saying that actress Sydney Sweeney had "perfect titties" and that Democrats were "the party of ugly people." It was a deepfake, of course, and Klobuchar never uttered those words. But the senator has now written an op-ed in the New York Times to discuss the video and is calling for new legislation against deepfakes. "The A.I. deepfake featured me using the phrase 'perfect titties' and lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside.'" Klobuchar wrote in the New York Times. "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real." The video of Klobuchar was originally from a Senate Judiciary subcommittee hearing on data privacy that had been altered to make her look like she was talking about Sweeney. A recent ad from American Eagle featuring the actress became controversial because she talked about "good genes," to discuss denim from the company, a play on the word jeans. Critics said it was a reference to eugenics, and President Donald Trump even weighed in after he learned that she was a registered Republican, praising the actress. Klobuchar wrote that the fake video had gotten over a million views, and she contacted X to have it taken down or at least labeled as AI-generated content. "It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real," Klobuchar wrote. But Klobuchar writes that X refused to take it down or label it even though X has a policy against “inauthentic content on X that may deceive people," as well as "manipulated or out-of-context media that may result in widespread confusion on public issues." Anyone who's spent time on X since Elon Musk bought the platform knows that he doesn't really care about manipulated content as long as it serves right-wing interests. But there's also the question of why any manipulated video would need to be labeled if most people could tell it was fake. X reportedly told Klobuchar to add a Community Note, and she was miffed that the company wouldn't help her add one, according to her op-ed. Klobuchar ends her article by promoting the No Fakes Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act), which has cosponsors across party lines, including Democratic senator Chris Coons of Connecticut and Republican senators Thom Tillis of North Carolina and Marsha Blackburn of Tennessee. The senator from Minnesota writes that the bill "would give people the right to demand that social media companies remove deepfakes of their voice and likeness while making exceptions for speech protected by the First Amendment." As the EFF notes, the No Fakes Act is deeply flawed, creating what it calls a new censorship infrastructure. The latest version of the law has carve-outs for parody, satire, and commentary, but as the EFF points out, having to prove something is parody in a court of law can be extremely costly. The irony in Klobuchar drawing attention to the deepfake video is that a lot more people are now going to know it exists. And it's getting posted more on X in the wake of her op-ed. 
In fact, Gizmodo had difficulty finding the tweet Klobuchar says got 1 million views, but we did find plenty of other people re-posting the video now.
[3]
Sen. Klobuchar warns of AI's dangers after Sydney Sweeney "deepfake" video surfaces
Amy Klobuchar, Minnesota's senior U.S. senator, says someone used AI to simulate her voice, making a "vulgar and absurd" critique of the controversial American Eagle jeans ad featuring actress Sydney Sweeney.

In a New York Times opinion piece published on Wednesday, she described her struggles to get the video -- which she said incorporates elements from a July 30 Senate hearing -- taken down after finding it on X. "For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down," Klobuchar wrote. "But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now."

Klobuchar says "deepfake" videos like this are just the tip of the iceberg. Last month, another impostor used AI to mimic the voice of Secretary of State Marco Rubio and contacted foreign ministers, a member of Congress and a governor.

She's calling on federal lawmakers to support her NO FAKES Act, which would create protections and a process to get videos removed from social media. "In the United States and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.," she wrote in her opinion piece.

Minnesota has a state law that makes it illegal to distribute AI-generated content related to elections or sexual acts, which led X owner Elon Musk to sue Attorney General Keith Ellison in April. In the lawsuit, Musk argued Minnesota's law violates X's free speech rights and "will lead to blanket censorship, including of fully protected, core political speech."
[4]
Klobuchar weighs in on deepfake video of her talking about Sydney Sweeney
Sen. Amy Klobuchar (D-Minn.) addressed the deepfake video that went viral last month of the senator's likeness offering a "vulgar and absurd critique" of actress Sydney Sweeney's "great jeans" ad campaign. In a New York Times op-ed, the moderate Democrat called on Congress to pass legislation to protect Americans from the harms of deepfakes, saying the issue requires urgent action amid the proliferation of artificial intelligence (AI) technology.

"I learned that lesson in a visceral way over the last month when a fake video of me -- opining on, of all things, the actress Sydney Sweeney's jeans -- went viral," she wrote in the op-ed.

Klobuchar said after she co-led a hearing on data privacy last month, she noticed "a clip of me from that hearing circulating widely on X, to the tune of more than a million views," which the senator then clicked on to watch. "That's when I heard my voice -- but certainly not me -- spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney," she said, referring to the controversial American Eagle advertisement that touted the actress's "great jeans."

Klobuchar explained the AI deepfake featured her using derogatory phrases and "lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside.'" "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real," she said.

Klobuchar said when the clip spread to other platforms, TikTok took it down, and Meta labeled the video as artificial intelligence. But she said the social platform X "refused to take it down or label it." "X's response was that I should try to get a 'Community Note' to say it was a fake, something the company would not help add," she added. The Hill has reached out to X for comment.

Klobuchar noted that her experience "does not in any way represent the gravest threat posed by deepfakes" and pointed to other recent examples, including when someone used AI to pretend to be Secretary of State Marco Rubio and contacted various high-level government officials.

President Trump in May signed into law a bill that Klobuchar pushed for, cracking down on so-called deepfake revenge porn -- or sexually explicit AI images and videos that are posted without the victim's consent. Klobuchar is calling now for Congress to pass her bipartisan "No Fakes Act," which "would give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment," she said.

"In the United States, and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.," she wrote in the op-ed.

She warned that the country is "at just the tip of the iceberg," noting, "The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans."

"We can love the technology and we can use the technology, but we can't cede all the power over our own images and our privacy," she wrote. "It is time for members of Congress to stand up for their constituents, stop currying favor with the tech companies and set the record straight. In a democracy, we do that by enacting laws. And it is long past time to pass one."
[5]
Sen. Klobuchar sets record straight: She never said Sydney Sweeney...
Sen. Amy Klobuchar is calling for new legislation to address "deepfakes" after a highly realistic AI-generated video that appeared to show her making outrageous statements about Sydney Sweeney's American Eagle jeans ad went viral. The Minnesota Democrat took to the opinion page of the New York Times Wednesday to clear the air after the video made the rounds online, appearing to show her speaking at a recent Senate Judiciary subcommittee meeting on data privacy.

In her op-ed, Klobuchar decried the bogus footage, which she noted was viewed online more than a million times. "The A.I. deepfake featured me using the phrase 'perfect t-tties' and lamenting that Democrats were 'too fat to wear jeans or too ugly to go outside,'" the real Sen. Klobuchar wrote. "Though I could immediately tell that someone used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real."

"If Republicans are gonna have beautiful girls with perfect t-tties in their ads, we want ads for Democrats too, you know?" the deepfake version of Klobuchar said, eerily mirroring the senator's voice and vocal style. "We want ugly, fat bitches wearing pink wigs and long-ass fake nails being loud and twerking on top of a cop car at a Waffle House because they didn't get extra ketchup, you know?" the video continued. "Just because we're the party of ugly people doesn't mean we can't be featured in ads, OK? And I know most of us are too fat to wear jeans or too ugly to go outside, but we want representation."

The fake-out video's bizarro version of Klobuchar was referencing the controversial American Eagle ad campaign featuring it-girl Sydney Sweeney, in which the blonde-haired, blue-eyed beauty referred to her "good jeans" in a play on words. The ad caused an epic meltdown on the left, with TikTokkers decrying the punny commercial as "Nazi propaganda."

Klobuchar said she reached out to various social media platforms where the video was circulating but had mixed results in getting it taken down. TikTok took it down and Meta labeled it as AI, but the senator said X offered no help beyond suggesting she should try to get a Community Note identifying it as fake.

The whole episode, Klobuchar said, was motivation for a newly proposed piece of legislation dubbed the No Fakes Act, with Senate sponsorship on both sides of the aisle. The act would "give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment," she wrote.

Klobuchar said the bill will build on the success of another piece of recently passed legislation governing AI deepfakes, the Take It Down Act. Signed into law by President Trump in May, the act criminalized the "nonconsensual publication of intimate images, including AI-generated content" and established a process for having offending images removed. Co-sponsors for the new bill include Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.) and Thom Tillis (R-N.C.), Klobuchar said.

"The internet has an endless appetite for flashy, controversial content that stokes anger. The people who create these videos aren't going to stop at Sydney Sweeney's jeans."
Senator Amy Klobuchar advocates for new laws to combat AI-generated deepfakes after a fake video of her making controversial comments about actress Sydney Sweeney went viral, highlighting the urgent need for regulation in the AI era.
Senator Amy Klobuchar, a Democrat from Minnesota, recently found herself at the center of a controversy involving an AI-generated deepfake video. The video, which went viral on social media platform X (formerly Twitter), appeared to show Klobuchar making inappropriate comments about actress Sydney Sweeney's appearance in a jeans advertisement [1][2]. The senator quickly identified the video as fake but noted its alarming realism.
The AI-generated video manipulated footage from a Senate Judiciary subcommittee hearing on data privacy, making it appear as though Klobuchar was commenting on Sweeney's "perfect titties" and criticizing Democrats as being "too fat to wear jeans or too ugly to go outside" [1][4]. The video garnered over a million views on X, spreading rapidly across various social media platforms [2].
Klobuchar's experience in attempting to have the video removed or labeled as fake varied across platforms. While TikTok removed the video and Meta labeled it as AI-generated, X refused to take it down or label it as manipulated content [1][2]. The senator expressed frustration with X's suggestion that she should try to get a "Community Note" added to the post, highlighting the challenges individuals face in combating the spread of deepfakes [1].
In response to this incident and the broader threat of deepfakes, Klobuchar is promoting new legislation called the NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act) [1][4]. This bipartisan bill, co-sponsored by Senators Chris Coons, Marsha Blackburn, and Thom Tillis, aims to give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment [1][5].
Klobuchar emphasized that her experience, while concerning, is not the most severe threat posed by deepfakes. She cited other incidents, such as an impostor using AI to mimic Secretary of State Marco Rubio's voice to contact foreign officials [1][3]. The senator warned of potential chaos that could ensue without legal checks, including scenarios like deepfakes triggering bank runs or influencing military actions [1].
The push for the NO FAKES Act builds upon the recently signed TAKE IT DOWN Act, which created legal protections for victims when intimate images, including deepfakes, are shared without consent [1]. Klobuchar also highlighted international efforts to address this issue, such as the European Union's AI Act and Denmark's consideration of legislation to give citizens copyright over their face and voice [1].
While Klobuchar's call for legislation has gained bipartisan support, it has also faced criticism. The Electronic Frontier Foundation (EFF) has expressed concerns about the NO FAKES Act, describing it as potentially creating a new censorship infrastructure [2]. The EFF argues that while the bill includes exceptions for parody, satire, and commentary, proving these in court could be costly and challenging [2].
As AI technology continues to advance, the debate over how to balance innovation with protection against misuse remains a critical issue for lawmakers and tech companies alike. The incident involving Senator Klobuchar serves as a stark reminder of the potential for AI to be used in ways that can mislead the public and harm individuals' reputations.
Summarized by Navi