Curated by THEOUTPOST
On Thu, 10 Apr, 12:14 AM UTC
4 Sources
[1]
YouTube expands its 'likeness' detection technology, which detects AI fakes, to a handful of top creators | TechCrunch
YouTube on Wednesday announced an expansion of its pilot program designed to identify and manage AI-generated content that features the "likeness," including the face, of creators, artists, and other famous or influential figures. The company is also publicly declaring its support for the NO FAKES Act, legislation that aims to tackle the problem of AI-generated replicas that simulate someone's image or voice to mislead others and create harmful content.

The company says it collaborated on the bill with its sponsors, Sens. Chris Coons (D-DE) and Marsha Blackburn (R-TN), and other industry players, including the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA). Coons and Blackburn will be announcing the reintroduction of the legislation at a press conference on Wednesday.

In a blog post, YouTube explains the reasoning behind its continued support, saying that while it understands the potential for AI to "revolutionize creative expression," it also comes with a downside. "We also know there are risks with AI-generated content, including the potential for misuse or to create harmful content. Platforms have a responsibility to address these challenges proactively," according to the post.

"The NO FAKES Act provides a smart path forward because it focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down. This notification process is critical because it makes it possible for platforms to distinguish between authorized content from harmful fakes -- without it, platforms simply can't make informed decisions," YouTube says.

The company introduced its likeness detection system in partnership with the Creative Artists Agency (CAA) in December 2024.
The new technology builds on YouTube's efforts with its existing Content ID system, which detects copyright-protected material in users' uploaded videos. Similar to Content ID, the program works to automatically detect violating content -- in this case, simulated faces or voices that were made with AI tools, YouTube explained earlier this year.

For the first time, YouTube is also sharing a list of the program's initial pilot testers. These include top YouTube creators like MrBeast, Mark Rober, Doctor Mike, the Flow Podcast, Marques Brownlee, and Estude Matemática. During the testing period, YouTube will work with the creators to scale the technology and refine its controls. The program will expand to reach more creators over the year ahead, the company said, though it didn't say when it expects the likeness detection system to launch more publicly.

In addition to the likeness detection technology pilot, the company also previously updated its privacy process to allow people to request the removal of altered or synthetic content that simulates their likeness. It also added likeness management tools that let people detect and manage how AI is used to depict them on YouTube.
[2]
YouTube is supporting the 'No Fakes Act' targeting unauthorized AI replicas
Richard Lawler is a senior editor following news across tech, culture, policy, and entertainment. He joined The Verge in 2021 after several years covering news at Engadget.

Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN) are again introducing their Nurture Originals, Foster Art, and Keep Entertainment Safe, or NO FAKES, Act, which standardizes rules around making AI copies of a person's face, name, and voice. This time, the bill -- previously introduced in 2023 and 2024 -- has the backing of a major web platform: YouTube.

In a statement announcing its support, YouTube claims the act "focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down." It joins a list of supporters that already included SAG-AFTRA and the Recording Industry Association of America, in spite of opposition from civil liberties groups like the Electronic Frontier Foundation (EFF), which have criticized previous drafts as too broad.

The 2024 version of the bill said that online services (like YouTube) can't be held liable for storing a third-party-provided "unauthorized digital replica" if they remove the material in response to claims of unauthorized use and notify the uploader that it has been removed. Another exception is if the service is "primarily designed" for, or is marketed for, its ability to produce deepfakes.

YouTube has also expressed support for the Take It Down Act, which would make it a crime to publish non-consensual intimate images, even if they're AI-generated deepfakes, and force social media sites to have processes to quickly remove these images when reported. The latter provision has been strongly opposed by civil liberties groups, and even by some groups that advocate against NCII; despite this, the bill has been passed by the Senate and advanced out of a House committee earlier this week.
Today, YouTube is also announcing the expansion of a pilot of the "likeness management technology" it debuted last year in partnership with CAA. YouTube pitches the program as a way for celebrities and creators to detect AI copies of themselves and submit requests to have the content removed. According to YouTube, some top creators now participating in the pilot include MrBeast, Mark Rober, and Marques Brownlee, among others.
[3]
Top YouTube Creators Sign On to Use Tool Meant to Protect Their Likenesses on the Platform
A number of major YouTube stars have signed on to a pilot program that will give high-profile figures more control over their likenesses on the platform. The video platform says that creators like MrBeast (real name Jimmy Donaldson), MKBHD (Marques Brownlee) and Mark Rober have signed on to test the tool, which was announced last year.

The tool's rollout comes as generative AI tech makes it almost trivially easy to replicate the appearance or voice of a celebrity. When it first announced the tool last year, YouTube unveiled a partnership with CAA that will let its clients, including athletes and musicians, try it out as part of a pilot program. The news on Wednesday expands that pool to high-profile creators. The tool surfaces AI-generated content featuring a famous person's likeness and allows them to request removal.

YouTube's pilot expansion was made in connection with the news that Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) had reintroduced the NO FAKES (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act on Wednesday. YouTube is a supporter of the bill, alongside the Motion Picture Association, the RIAA, SAG-AFTRA and others.
[4]
With YouTube Backing, US Senators Bring Back Anti-Deepfake Bill
As a group of United States (US) senators re-introduced an anti-deepfake regulation in the US Congress, they received YouTube's backing for it. This anti-deepfake act, called the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act" or the NO FAKES Act, seeks to protect people's voice and likeness from replication through artificial intelligence (AI) models without consent.

In a blog post expressing support for the regulation, YouTube says that it is working closely with its partners, such as the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA), to push for a shared consensus on this legislation. Besides these three, Sony Music, Universal Music Group, and Walt Disney have also expressed support for the NO FAKES Act.

"The NO FAKES Act provides a smart path forward because it focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down," YouTube says, explaining the rationale behind supporting the legislation. It mentioned that this notification is important because it allows platforms to differentiate between authorised content and harmful fakes.

What is the NO FAKES Act?

Initially introduced in 2024, this regulation gives people the right to authorise the use of their voice or visual likeness in a digital replica. A digital replica, under the regulation, means a "highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual". The regulation also limits how long such a licence can run: a licence for an adult's likeness cannot exceed 10 years, and for someone under the age of 18 it cannot exceed five years.
If someone produces a digital replica of an individual without their consent, they will be liable to civil action. However, to be held liable under the NO FAKES Act, the person publishing or distributing the non-consensual digital replica must have "actual knowledge" that the content they posted is indeed a digital replica and that the applicable right holder did not authorise it. This actual knowledge can be in the form of a notification. The Act also carves out cases in which it does not apply.

Rights of dead individuals: Under the Act, the rights to voice and likeness do not end when an individual dies. Once the individual dies, their executors, heirs, or licensees have the right to license or transfer this right. The Act allows rights to be transferred through any legal means, or through inheritance as personal property under the applicable laws of intestate succession (succession when the deceased has not left behind a will). Where a licensing agreement was made before the individual's death, the right holder retains the rights to the dead individual's voice and likeness; these rights terminate after the 10-year licence period, after the five-year extension, or 70 years after the individual's death.

Are platforms liable for digital replicas? The NO FAKES Act says that websites or platforms hosting digital replicas that a user uploaded will not be automatically liable for violating the Act as long as they remove the material in response to a claim of unauthorised use and notify the uploader of the removal.

Why it matters: Securing personality rights has become a significant issue since the advent of deepfakes. This regulation could set a precedent for protecting personality rights in other parts of the world, including India. In India, courts have protected the personality rights of public figures such as journalist Rajat Sharma and singer Arijit Singh from artificial intelligence (AI) misuse.
Many other celebrities, such as Jackie Shroff, Anil Kapoor, and Amitabh Bachchan, have approached the courts to protect their personality rights from misuse. In Kapoor's and Shroff's cases, their lawyers specifically argued misuse through AI models. While such eminent individuals have been successful in protecting their personality rights in India, there is no clear regulation allowing the average person to argue for the same.
YouTube announces support for the NO FAKES Act and expands its AI-generated content detection technology to protect top creators' likenesses, as the debate over AI-generated replicas intensifies.
YouTube has announced its support for the reintroduction of the NO FAKES (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act, bipartisan legislation aimed at tackling the growing concern of AI-generated replicas [1]. The bill, sponsored by Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN), seeks to standardize rules around the creation and use of AI-generated copies of individuals' faces, names, and voices [2].

In conjunction with its support for the NO FAKES Act, YouTube is expanding its pilot program for "likeness detection technology" [1]. This technology, initially introduced in partnership with the Creative Artists Agency (CAA) in December 2024, aims to automatically detect AI-generated content that simulates the faces or voices of creators, artists, and other influential figures [3].

Several high-profile YouTube creators have signed on to test the likeness detection tool, including MrBeast, Mark Rober, Doctor Mike, the Flow Podcast, Marques Brownlee, and Estude Matemática [1]. This expansion allows these creators to detect AI-generated content featuring their likeness and submit requests for removal [3].

The NO FAKES Act includes several important provisions: it gives individuals the right to authorise the use of their voice or visual likeness in a digital replica, caps licences at 10 years for adults and five years for minors, requires "actual knowledge" of an unauthorised replica before a publisher can be held liable, extends likeness rights up to 70 years after an individual's death, and shields platforms that remove flagged material and notify the uploader [2][4].

The NO FAKES Act has garnered support from various industry players, including the RIAA, the Motion Picture Association, SAG-AFTRA, Sony Music, Universal Music Group, and Walt Disney [2][3][4]. However, civil liberties groups like the Electronic Frontier Foundation (EFF) have criticized previous drafts of the bill as being too broad [2].

The NO FAKES Act could set a precedent for protecting personality rights in other parts of the world. In India, for example, courts have already protected the personality rights of public figures from AI misuse, but there is no clear regulation for the average person [4].
Reference
[1] TechCrunch: YouTube expands its 'likeness' detection technology, which detects AI fakes, to a handful of top creators
[2] The Verge: YouTube is supporting the 'No Fakes Act' targeting unauthorized AI replicas
[3] The Hollywood Reporter: Top YouTube Creators Sign On to Use Tool Meant to Protect Their Likenesses on the Platform
[4] With YouTube Backing, US Senators Bring Back Anti-Deepfake Bill
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved