Curated by THEOUTPOST
On Thu, 1 Aug, 12:06 AM UTC
5 Sources
[1]
Senators introduce bill to protect individuals against AI-generated deepfakes
OpenAI joined several entertainment industry groups in backing the NO FAKES Act. Today, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without that individual's consent. It's a bipartisan effort from Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.), fully titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024. If it passes, the NO FAKES Act would create an option for people to seek damages when their voice, face or body is recreated by AI. Both individuals and companies would be held liable for producing, hosting or sharing unauthorized digital replicas, including ones made by generative AI.
We've already seen many instances of celebrities finding imitations of themselves out in the world. An AI-generated Taylor Swift was used to scam people with a fake Le Creuset cookware giveaway. A voice that sounded strikingly like Scarlett Johansson's showed up in a ChatGPT voice demo. AI can also be used to make political candidates appear to make false statements. And it's not only celebrities who can be targeted. "Everyone deserves the right to own and protect their voice and likeness, no matter if you're Taylor Swift or anyone else," Senator Coons said. "Generative AI can be used as a tool to foster creativity, but that can't come at the expense of the unauthorized exploitation of anyone's voice or likeness."
The speed of new legislation notoriously lags behind the speed of new tech development, so it's encouraging to see lawmakers taking AI regulation seriously. Today's proposed act follows the Senate's recent passage of the DEFIANCE Act, which would allow victims of sexual deepfakes to sue for damages.
Several entertainment organizations have lent their support to the NO FAKES Act, including SAG-AFTRA, the RIAA, the Motion Picture Association and the Recording Academy. Many of these groups have been pursuing their own actions to secure protection against unauthorized AI recreations. SAG-AFTRA recently went on strike to try to secure a union agreement covering likenesses in video games. Even OpenAI is listed among the act's backers. "OpenAI is pleased to support the NO FAKES Act, which would protect creators and artists from unauthorized digital replicas of their voices and likenesses," said Anna Makanju, OpenAI's vice president of global affairs. "Creators and artists should be protected from improper impersonation, and thoughtful legislation at the federal level can make a difference."
[2]
Senators Introduce Long-Awaited Bill to Protect Artists From AI Deepfakes
A bipartisan group of senators officially introduced a long-anticipated bill on Wednesday designed to protect people's voices and visual likenesses from being exploited with artificial intelligence re-creations without their permission. Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act on Wednesday, about eight months after first introducing a discussion draft of the bill last October.
NO FAKES is similar to the NO AI Fraud Act introduced in Congress earlier this year and establishes increased protections for individuals' right of publicity by strengthening their legal claims over the unauthorized use of their voices and likenesses. As the bill notes, this right doesn't expire when someone dies and instead will pass down to an individual's heirs or the executor of their estate.
The need for virtual likeness protections against AI has only been further underscored since the NO FAKES discussion draft, with cases like the controversy around OpenAI using a voice "eerily similar" to Scarlett Johansson's after the actress turned down an offer to have her voice used for conversational AI software the ChatGPT maker was developing. Deepfake pornography has become a major concern as well, with both celebrities and students falling victim to the disturbing trend. Alexandria Ocasio-Cortez is taking the issue head-on, and the bill she introduced to fight it unanimously passed the Senate last week.
AI remains a particularly hot-button topic in the music industry, and AI voice clones are being used more frequently in recordings. That's happened both with an artist's permission, such as when Randy Travis used the tech to release his first song since suffering a stroke a decade ago, and without, like when Drake used Tupac's voice in his Kendrick Lamar diss track "Taylor Made Freestyle." Drake -- who himself had his voice lifted last year in the anonymous songwriter Ghostwriter977's song "Heart on My Sleeve" -- removed "Taylor Made Freestyle" after the Tupac Shakur estate sent a cease and desist. The industry has voiced cautious support for the use of AI in music creation, but only when it's done with rights holders' permission. The major record labels sued AI music generators Suno and Udio in June, alleging the generators used thousands of their artists' songs without authorization to train their software.
The NO FAKES introduction drew significant support from the music industry, with the Human Artistry Campaign, the RIAA, the NMPA and the Recording Academy all voicing their approval on Wednesday. "The Human Artistry Campaign applauds Senators Coons, Blackburn, Klobuchar and Tillis for crafting strong legislation establishing a fundamental right putting every American in control of their own voices and faces against a new onslaught of highly realistic voice clones and deepfakes," Human Artistry Campaign Senior Advisor Dr. Moiya McTier said in a statement. "The NO FAKES Act will help protect people, culture and art -- with clear protections and exceptions for the public interest and free speech." The bill received kudos from some entertainment figures as well, including SAG-AFTRA president Fran Drescher and the heads of the talent agencies CAA, UTA and WME.
"In the coming decade, legislation like the NO FAKES ACT is desperately needed to protect Americans from being victimized by technology that can replicate our image and voice," Drescher said. "Thank you Sens. Blackburn, Coons, Klobuchar, and Tillis for defending human rights by introducing the NO FAKES Act. People and communities must be protected in the face of innovation."
[3]
Senate Introduces Bipartisan Bill to Protect Against AI Exploitation
In a move aimed at curbing the unauthorized use of artificial intelligence to recreate people's voices and likenesses, a bipartisan group of senators unveiled a new bill on Wednesday. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act seeks to safeguard individuals' rights in the digital age. The bill, introduced by Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC), arrives about eight months after the initial discussion draft was presented last October. According to Rolling Stone, this legislation follows the NO AI Fraud Act and aims to bolster protections around the right of publicity, extending these rights even posthumously to heirs or estate executors.
The urgency of such protections has been underscored by recent controversies, including an incident where OpenAI allegedly used a voice strikingly similar to Scarlett Johansson's without her consent. Johansson had previously declined an offer to have her voice used for conversational AI software being developed by OpenAI. Per Rolling Stone, this is just one of many examples highlighting the potential for abuse in the absence of stringent legal safeguards. Another pressing issue is the proliferation of deepfake pornography, which has victimized both celebrities and private individuals. In response to these concerns, Congresswoman Alexandria Ocasio-Cortez introduced a related bill targeting the distribution of deepfake content, which recently passed the Senate unanimously.
The music industry has also been significantly impacted by AI technology. Instances of AI-generated voice clones have become more frequent, with varying degrees of consent from the original artists. Randy Travis, for instance, used AI to release a song following a debilitating stroke, showcasing the technology's positive potential when used ethically. Conversely, unauthorized uses have sparked legal disputes, such as Drake's incorporation of Tupac Shakur's voice in a diss track. The track, titled "Taylor Made Freestyle," was subsequently pulled after the Shakur estate issued a cease and desist order. Drake himself was a victim of voice cloning when an anonymous songwriter released "Heart on My Sleeve" using his voice without permission.
As AI continues to evolve, the NO FAKES Act represents a crucial step in ensuring individuals retain control over their digital representations.
[4]
Entertainment industry gets behind new bill that will outlaw AI deepfakes - SiliconANGLE
The entertainment industry today came out in support of a new bill designed to prevent people from using artificial intelligence to recreate a person's voice or likeness without that person's consent. The bipartisan bill, the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024" or the "NO FAKES Act," was introduced by Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.).
"The NO FAKES Act would hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved - including digital replicas created by generative artificial intelligence (AI)," said a press release on Coons' website. If a website hosted such material, it would by law be responsible for taking it down after being given notice. Documentaries and biographical works would remain within the law under the First Amendment, as would content that serves criticism or parody.
"Everyone deserves the right to own and protect their voice and likeness, no matter if you're Taylor Swift or anyone else," said Coons. "Generative AI can be used as a tool to foster creativity, but that can't come at the expense of the unauthorized exploitation of anyone's voice or likeness."
Unsurprisingly, the bill has been cheered by various segments of the music industry. Since generative AI exploded onto the scene, there has been a slew of lawsuits in which disgruntled artists and producers have aired their concerns about what seems to them like someone stealing their content. Earlier in the year, some of the best-known names in music wrote an open letter decrying AI replicas of their artistry as "an assault on human creativity."
The bill has already been endorsed by the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA. "If left unregulated, AI technology poses an existential threat not only to SAG-AFTRA's members, but to civil discourse, student health and welfare, and democracy and national security," said Duncan Crabtree-Ireland, National Executive Director and Chief Negotiator, SAG-AFTRA. The Recording Industry Association of America also voiced support, as did the Motion Picture Association, the Recording Academy, The Walt Disney Company, Warner Music Group, Universal Music Group, Sony Music, the Independent Film & Television Alliance, William Morris Endeavor, Creative Artists Agency, and the Authors Guild.
It's not just the creators and publishers who've backed the bill. OpenAI also came out and gave it the thumbs up, saying in a statement that "creators and artists should be protected from improper impersonation" with what it called "thoughtful legislation."
[5]
US Copyright Office calls for better legal protections against AI-generated deepfakes
The US Copyright Office has published a report recommending new and improved protections against digital replicas. "We have concluded that a new law is needed," the department's report states. "The speed, precision, and scale of AI-created digital replicas calls for prompt federal action. Without a robust nationwide remedy, their unauthorized publication and distribution threaten substantial harm not only in the entertainment and political arenas, but also for private individuals."
The Copyright Office's assessment reveals several areas where current laws fall short of addressing digital replicas. It describes the state level as "a patchwork of protections, with the availability of a remedy dependent on where the affected individual lives or where the unauthorized use occurred." Likewise, "existing federal laws are too narrowly drawn to fully address the harm from today's sophisticated digital replicas." Among the report's recommendations are safe harbor provisions to encourage online service providers to quickly remove unauthorized digital replicas. It also notes that "everyone has a legitimate interest in controlling the use of their likenesses, and harms such as blackmail, bullying, defamation, and use in pornography are not suffered only by celebrities," meaning laws should cover all individuals and not just the famous ones.
The timing of this publication is fitting, considering that the Senate has been making notable moves this month to enact new legal structures around the use of digital replications and AI-generated copycats. Last week, lawmakers passed the DEFIANCE Act to offer recourse for victims of sexual deepfakes. Today saw the introduction of the NO FAKES Act, which would more broadly allow any individual to sue for damages over unauthorized use of their voice or likeness.
Today's analysis is the first of several parts of the Copyright Office's investigation into AI. With plenty more questions to explore around the use of AI in art and communication, the agency's ongoing findings should prove insightful. Hopefully legislators and courts alike will continue to take them seriously.
A bipartisan group of U.S. senators has introduced legislation aimed at protecting individuals and artists from AI-generated deepfakes. The bill seeks to establish legal safeguards and address concerns about AI exploitation in various sectors.
In a significant move to address the growing concerns surrounding artificial intelligence (AI) and its potential for misuse, a bipartisan group of U.S. senators has introduced a new bill aimed at protecting individuals and artists from AI-generated deepfakes. The proposed legislation, the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024" or the "NO FAKES Act," seeks to establish legal safeguards against the unauthorized use of a person's likeness or voice in AI-generated content 1.
The bill, introduced by Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis, outlines several crucial provisions: it would hold individuals and companies liable for producing, hosting, or sharing unauthorized digital replicas, including those created by generative AI; require websites to take down such material after being given notice; preserve First Amendment exceptions for documentaries, biographical works, criticism, and parody; and extend these rights after death to an individual's heirs or estate.
The proposed legislation has garnered support from various sectors of the entertainment industry. Organizations such as the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), the Recording Industry Association of America (RIAA), and the American Association of Independent Music have expressed their backing for the bill 4.
This legislative effort comes amid growing concerns about the potential misuse of AI technology. The U.S. Copyright Office has recently called for better legal protections against AI-generated deepfakes, highlighting the need for a comprehensive approach to addressing these issues 5.
The implications of this bill extend beyond the entertainment industry. It could have significant ramifications for political communication, where AI has already been used to make candidates appear to say things they never said, and for private individuals targeted by deepfake pornography and other unauthorized digital replicas.
While the bill aims to address critical concerns, it also raises questions about implementation and enforcement. Balancing the protection of individual rights with the potential benefits of AI technology in creative industries will be a key challenge for lawmakers and regulators 3.
As AI technology continues to advance rapidly, this legislative effort represents a crucial step in establishing a legal framework to govern its use and protect individuals from potential exploitation. The success of this bill could set a precedent for future AI-related regulations both in the United States and globally.