2 Sources
[1]
YouTube expands its AI likeness detection technology to celebrities
YouTube is expanding its new "likeness detection" technology, which identifies AI-generated content, such as deepfakes, to people within the entertainment industry, the company announced on Tuesday. The technology works similarly to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, allowing rights owners to request removal or to share in the video's revenue. Likeness detection does the same, but for simulated faces. The feature is meant to protect creators and other public figures from having their identity used without their permission -- something that's often a problem for celebs who find their likeness has been hijacked for scam advertisements. The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly, including to politicians, government officials, and journalists this spring. Now, YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool. Use of the likeness technology tool does not require the entertainer to have their own YouTube channel. Instead, the feature scans for AI-generated content to detect any visual matches of the enrolled participant's face. They can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won't remove all content, as it permits parody and satire content under its rules. Further down the road, the technology will support audio as well, the company says. Related to this, YouTube has also been advocating for similar protections at a federal level, with its support for the NO FAKES Act in Washington D.C.
This would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness. The company hasn't yet said how many removals of AI deepfakes have been managed by the tool so far, but noted in March that the number of removals was still "very small."
[2]
YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive)
Hollywood is an industry built on likeness and fame. But consumers are entering a world in which anyone's likeness can be co-opted, as AI-generated deepfakes proliferate. YouTube, the world's largest video platform, has developed a solution. And now it is opening it up to Hollywood. Executives at the Google-owned platform tell The Hollywood Reporter that their proprietary deepfake detection tool, years in the making, is now open to anyone at high risk of having their likeness abused: Actors, athletes, creators and musicians, whether they have a YouTube channel or not, can sign up to identify and request removal of deepfakes on its platform. "I would think of it as a foundational layer of responsibility," says Mary Ellen Coe, YouTube's chief business officer, in an interview with THR. "We've been working on this for quite some time since the genesis of thinking through AI tools and the implications on the platform ... frankly, we have not seen the vectors that are even possible, and we are working very closely with talent agencies and third-party management companies to make sure that public figures can actually get ahead of this before something negative happens." YouTube first began testing the tool nearly a year and a half ago, then expanded it a few months later to some of the most prominent creators on its platform, and earlier this year to selected politicians and public officials, but the doors are now officially wide open for those most at risk of having their livelihoods damaged by the technology. "What YouTube's offering is -- and I don't say this about many tech companies -- but out of the graciousness of their hearts, they are doing the right thing by providing these tools at no cost to the talent, so they can protect their real estate," says Jason Newman, a partner at the management and production firm Untitled Entertainment. "Their real estate is their face.
Their real estate is their body. Their real estate is who they are, what they do, how they say it." The timing of the tool's expansion comes as the industry grapples with the continued growth of deepfakes across platforms, and with video models quickly turning hypothetical worst-case scenarios into reality for many stars. While everyone remembers when they first saw the deepfake of Pope Francis wearing a puffy coat (one of the first photo deepfakes to capture the public's imagination), the past six months alone have delivered what one high-level source calls two "oh shit moments" for Hollywood. Last fall, OpenAI launched the Sora app, and a barrage of popular characters and IP quickly flooded it, including familiar faces from actors playing film and TV characters, and historic figures like Martin Luther King Jr., who had their AI likenesses puppeteered by its users. OpenAI ultimately put a stop to the MLK and IP-driven deepfakes (and of course it shut down Sora altogether last month), but the damage was done. Then in February, videos created by Seedance 2.0 featuring Brad Pitt fighting Tom Cruise spread across the internet like wildfire. As one source notes, that was a wake-up call for Hollywood: Not only are deepfakes here, but they are progressing at lightning speed. "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale," MPA president and CEO Charles Rivkin said as videos of the faux battle between the movie stars spread. "If you think about public figures, famous figures, your image and your reputation are paramount to your livelihood," says Coe. "And the idea that that could be corrupted in some manner is really an important concept, because there have been instances of this that I think people have talked about, it's really important that they can have a semblance of control and ability to manage that." 
YouTube's chief business officer Coe began thinking about developing the deepfake detection tool more than three years ago, when the company, under CEO Neal Mohan, began to lean into the potential of generative AI on the platform. Mohan authored a blog post at the time titled "our principles for partnering with the music industry on AI technology," writing that "At this critical inflection point, it's clear that we need to boldly embrace this technology with a continued commitment to responsibility." Of course it didn't stop with the music industry. At the time, AI-generated audio was proliferating, and the state-of-the-art video models were limited to the likes of that original (and terrifying) "Will Smith eating spaghetti" example. YouTube added its first scaled gen AI tool, "Dream Screen," later that year. But as the tech progressed, so did YouTube's efforts. And it took a cue from Content ID, the platform's system to identify copyrighted content that is uploaded to its servers. "It's the same concept, the ability for them to manage their identity and how that appears in the public marketplace," Coe says. Here's how it works: A celebrity, creator or public figure (or more likely their agent, manager or another member of their team) opts in, and uploads their likeness into the system. Importantly, the celebrity does not need their own YouTube channel, or even a public-facing YouTube page, to opt in. The system then scans YouTube and flags potential replicas for that celebrity's team to review. They can choose to leave it be, or request removal. That being said, a request for removal does not guarantee that YouTube will pull the video. "There are a number of cases like parody and satire where our community guidelines would allow that to remain on the platform," Coe says.
Indeed, YouTube has a long history of allowing parody content, even if it features the likeness of known people (one could expect that any parody videos that use deepfake tech would be labeled to acknowledge its use). Videos that feature "realistic and consequential disparagement" and perhaps more importantly "content replacement" would be removed if requested. "If someone is doing an exact replica of something that would limit the livelihood of a celebrity, an actor or a creator, because it's literal content replacement, that would be included in a takedown," she adds. In other words, if someone uses deepfake technology to create a video very similar to the content that the person is known for, it would be eligible for removal. Whether that applies to, say, fan-made trailers for movies is somewhat ambiguous. YouTube began testing the tool in late 2024 through a pilot program with CAA. "Our intention with YouTube was to ensure that we could get ahead of this as much as possible and do right by protecting our clients, ensuring at the same time that fans and people consuming content were protected from misleading content," says Alex Shannon, CAA's head of strategic development. "Frankly, by the time most of this AI-generated content featuring our clients like this is found, oftentimes it's by happenstance. And then on top of that, by the time it's discovered, a lot of the reputational damage has already been done." "It's their job to protect the reputation and the livelihood of their talent, reputation is everything in the industry," Coe says, adding that the pilot proved itself early on: "Frankly, they had a couple public figures whose likeness was co-opted, and that raised alarm bells and a sense of urgency in some cases." The platform subsequently expanded the pilot, adding more eligible people, leading up to today's news. 
"We view YouTube's likeness protection as a constructive early step, primarily because it just gives our clients visibility on one of the largest search engines in the world, and also within a system that they already understand," says Lesley Silverman, a partner at UTA. "And I think the big thing about this program is that it's free, it's opt-in, and it can be implemented even for public figures who aren't actively posting on YouTube." But Hollywood's relationship with deepfakes and AI is more complex than it may seem on the surface. While casual observers may assume disdain for the technology, talent and executives are notably more open-minded. CAA, for example, has invested in two companies in the deepfake business, Metaphysic and Deep Voodoo, betting on its creative use cases. But Disney's ill-fated deal with OpenAI was an early signal that entertainment companies view the tech as more than just a tool to bolster production capabilities. It can be a fan engagement tool. Pam Abdy, the co-CEO of Warner Bros. Pictures, captured that perspective when asked at a CNBC conference last week about the proliferation of AI-generated fan trailers for Practical Magic 2. "I know it's not great, but it's also exciting, because that means that there's a desire for it and that means that people want to come and play with the movie," Abdy said. "There could be worse with my image," quipped Practical Magic 2 star Sandra Bullock, who was interviewed alongside Abdy. "It's here. We have to observe it. We have to understand it. We have to lean into it ... I mean, we have to be incredibly cautious and aware of it because there are people who will use it for evil and not good." In fact, YouTube executives say that during the pilot program many creators only requested that a small percentage of flagged content be removed. One large YouTube creator says that most of the AI-generated content they have seen of themselves is benign, or even positive and supportive.
That seems to be more widespread than the viral examples of Pitt vs. Cruise may suggest, with fan-made faux trailers more common than disparaging videos. "I think there are certainly individuals who would believe that any sort of use of likeness should be consented to, and I think that is also a fair argument," Shannon says. "But on average, we have seen more folks be excited about fans engaging and wanting to celebrate them in some form or another." "The reality is this content is getting made, it's out there," Silverman says. "And so the question is really whether talent has visibility into that, and whether there is a mechanism to respond. And I think given this feature, there now is this ability to respond." And that, in turn, leads to the even more complicated question of monetization. With Content ID, copyright holders can request that infringing videos be removed, demonetized... or monetized by sharing revenue with the uploader. In a world where likeness is the IP ... will celebrities soon be able to make money from deepfakes of themselves created by fans? YouTube's tool, notably, does not have that feature as of now, though Coe frames it as something they are thinking about longer-term. "We need to really focus on this foundational layer of responsibility and protection, and then we will think about rightsholders, and how do we think about monetization," Coe says. "But right now, it is really about that layer of responsibility." Agents, managers, and lawyers, of course, are already thinking a bit farther ahead, not least because of the complexities involved in the very idea of monetizing one's likeness through deepfakes. CAA even has a product called the "CAA Vault," which houses the likenesses of its clients for potential future monetization opportunities. "That is, of course, an extremely complicated problem to try and solve, which we are actively thinking about," Shannon says. 
"It's one thing if you're talking about one video featuring one particular talent, it's another thing if you're talking about a video that has two different talents, and one is okay with those, and one's not, or one is an established star and one is an up-and-coming star. I think there's a lot of complexity." But AI and deepfakes have moved to the forefront of YouTube's thinking, with the company betting it can help creators manage the technology even as synthetic content poses risks of its own. In his annual letter to YouTube creators earlier this year, Mohan listed it as one of his four priorities, calling out deepfakes as a focus point. "Ultimately, we're focused on ensuring AI serves the people who make YouTube great: the creators, artists, partners, and billions of viewers looking to capture, experience and share a deeper connection to the world around them," he wrote. But even if monetization doesn't ultimately come to fruition, the tech YouTube created to identify deepfakes on its platform is necessary, prudent, and as multiple sources stressed, incredibly important to an industry in the midst of technological disruption. Ultimately, it may be helpful to anyone at risk of being targeted. "It's like fire insurance, right? You don't think it's going to happen to you until it does, and then it's really disastrous, and you are grateful that you have it," Coe says. "I think peace of mind and control are really important benefits to ascribe to this. The peace of mind that we're hearing afterwards is quite real." Or as Newman frames it: "When Kim Kardashian travels, she has security around her all the time. Why wouldn't you have security around you in the digital world?"
YouTube is expanding its likeness detection technology to the entertainment industry, allowing celebrities to identify and request removal of AI deepfakes. The tool, developed over three years, works like Content ID but scans for simulated faces instead of copyrighted material. Major talent agencies including CAA, UTA, and WME support the initiative as deepfakes proliferate across platforms.
YouTube has opened its AI deepfake detection tool to anyone in the entertainment industry at risk of having their identity misused, the company announced this week [1]. The technology, which identifies AI-generated content featuring simulated faces, is now available to actors, musicians, athletes, and other public figures, regardless of whether they maintain a YouTube channel [2].
Source: TechCrunch
The expansion represents a significant step in addressing the proliferation of AI deepfakes across the platform. Mary Ellen Coe, YouTube's chief business officer, describes it as "a foundational layer of responsibility," noting that the company has been working closely with talent agencies and management companies to help celebrities protect their likenesses before damage occurs [2].

Likeness detection operates similarly to YouTube's existing Content ID system, which identifies copyright-protected material in uploaded videos [1]. Instead of scanning for copyrighted content, the tool scans for visual matches of enrolled participants' faces. Once AI-generated content is detected, affected individuals can choose to request removal of the video for privacy policy violations, submit a copyright removal request, or take no action [1].
The platform maintains that it won't remove all flagged content, as satire and parody remain protected under YouTube's rules [1]. This balance aims to protect creators while preventing misuse of likeness that could damage reputations or enable scams.

The timing of this expansion addresses mounting concerns about unauthorized AI likeness use in the entertainment world. Jason Newman, a partner at Untitled Entertainment, emphasizes the stakes: "Their real estate is their face. Their real estate is their body. Their real estate is who they are, what they do, how they say it" [2].

Recent incidents have accelerated Hollywood's awareness of the threat. Last fall, OpenAI's Sora app saw users create deepfakes of popular characters and historic figures like Martin Luther King Jr. In February, videos from Seedance 2.0 featuring fabricated fights between Brad Pitt and Tom Cruise spread rapidly online, prompting MPA president Charles Rivkin to call it "unauthorized use of U.S. copyrighted works on a massive scale" [2].
Source: THR
Major talent agencies including CAA, UTA, WME, and Untitled Management provided feedback during the tool's development and support its rollout [1]. The Google-owned platform offers the service at no cost to talent, according to industry representatives [2].

YouTube first tested likeness detection nearly a year and a half ago with a subset of creators before expanding to politicians, government officials, and journalists earlier this year [1]. While the platform noted in March that removals remained "very small," the tool's broader availability may change that metric [1].

Looking ahead, YouTube plans to extend the technology to support audio detection for voice recreations [1]. The company is also advocating for federal protections through its support of the NO FAKES Act, which would regulate AI-created unauthorized recreations of individuals' voices and visual appearances [1].

The initiative reflects responsible AI practices as video generation models advance rapidly. Coe began developing the concept over three years ago when YouTube, under CEO Neal Mohan, started exploring generative AI's potential on the platform [2]. As deepfake technology continues evolving, the Content ID-style approach to likeness provides a framework for managing identity in an era where anyone's face can be digitally replicated.