9 Sources
[1]
YouTube expands its AI likeness detection technology to celebrities
YouTube is expanding its new "likeness detection" technology, which identifies AI-generated content, such as deepfakes, to people within the entertainment industry, the company announced on Tuesday. The technology works similarly to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, allowing rights owners to request removal or to share in the video's revenue. Likeness detection does the same, but for simulated faces. The feature is meant to protect creators and other public figures from having their identity used without their permission -- something that's often a problem for celebs who find their likeness has been hijacked for scam advertisements. The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly, including to politicians, government officials, and journalists this spring. Now, YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool. Use of the likeness technology tool does not require the entertainer to have their own YouTube channel. Instead, the feature scans for AI-generated content to detect any visual matches of the enrolled participant's face. They can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won't remove all content, as it permits parody and satire content under its rules. Further down the road, the technology will support audio as well, the company says. Related to this, YouTube has also been advocating for similar protections at a federal level, with its support for the NO FAKES Act in Washington D.C. 
This would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness. The company hasn't yet said how many removals of AI deepfakes have been managed by the tool so far, but noted in March that the number of removals was still "very small."
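The workflow reported above (enroll a reference likeness, scan uploads for visual matches, then let the participant pick one of three actions) can be sketched in outline. This is a purely illustrative model, not YouTube's actual implementation, which is proprietary; the embeddings, similarity threshold, and every function and class name here are hypothetical.

```python
# Illustrative sketch of the reported likeness-detection workflow:
# enroll a face embedding, scan uploads for close matches, and let the
# flagged participant choose an action. All names and the threshold
# are hypothetical; YouTube's real system is proprietary.
from dataclasses import dataclass, field
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)


@dataclass
class Participant:
    name: str
    face_embedding: list                  # reference embedding from enrollment
    flagged: list = field(default_factory=list)


MATCH_THRESHOLD = 0.92                    # hypothetical cutoff


def scan_upload(video_id, face_embeddings, participants):
    """Flag a new upload for any enrolled participant it may depict."""
    for p in participants:
        for emb in face_embeddings:
            if cosine_similarity(p.face_embedding, emb) >= MATCH_THRESHOLD:
                p.flagged.append(video_id)
                break                     # one flag per participant per video


def review(participant, video_id, action):
    """Per the article, the participant chooses one of three actions."""
    assert action in ("privacy_removal", "copyright_removal", "ignore")
    return {"video": video_id, "action": action}
```

The three `review` actions mirror the options the article describes: a privacy-policy removal request, a copyright removal request, or doing nothing.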
[2]
YouTube's AI Deepfake Detector Now Lets Any Celebrity Take Down Infringing Videos
YouTube, the world's largest video-sharing platform, is ready to help celebrities crack down on AI-generated deepfake videos, according to The Hollywood Reporter. The Google-owned website is sharing a deepfake detection tool it has been fine-tuning over the past two years, granting access to celebrities at high risk of having their likenesses copied in AI-generated media. A Google representative did not immediately respond to a request for comment. As AI tools have made it increasingly easy to use famous likenesses in user-generated videos, Hollywood has waged war on the biggest video generators. Actors and major studios have aligned against major offenders, like OpenAI's recently shuttered Sora app and ByteDance's Seedance 2.0. But despite increasing pressure from the rich and famous, deepfakes continue to proliferate through AI video-generation prompts. (Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) The deepfake-detecting tool from YouTube aims to curb this trend, at least on its own video platform. The tool works similarly to YouTube's Content ID, which automatically identifies and flags copyrighted content uploaded to the website's servers. To opt in to the program, a celebrity (or their agent) must upload their likeness to the deepfake detector tool, which scans the site's content and flags potentially offensive AI-generated material for review. Affected individuals won't need a YouTube account to take action if they find unauthorized deepfake videos using their likenesses. Though the company may remove offending content from the website if asked to do so, there's no guarantee that every flagged video will be taken down. 
"There are a number of cases, like parody and satire, where our community guidelines would allow that to remain on the platform," Mary Ellen Coe, YouTube's chief business officer, told The Hollywood Reporter. "If someone is doing an exact replica of something that would limit the livelihood of a celebrity, an actor or a creator, because it's literal content replacement, that would be included in a takedown." This tool isn't completely new -- YouTube began its rollout last year, testing its implementation with some of the biggest creators on the website. A couple of months ago, the tool became available to politicians. This is the widest rollout yet, as YouTube expands the tool's user base to include actors, athletes, musicians and other celebrities whose likenesses are used in AI-generated videos. According to The Hollywood Reporter, YouTube executives said that many creators removed a small portion of flagged content during the pilot program for the deepfake detection tool, predominantly focusing on negative or disparaging media. Coe hinted at a future in which rightsholders may choose to monetize AI-generated media rather than take it down, but said that isn't currently planned for the YouTube platform. The company's current focus is the "foundational layer of responsibility and protection" for celebrities and their likeness, she said.
[3]
Celebrities will be able to find and request removal of AI deepfakes on YouTube
YouTube is expanding its AI deepfake monitoring feature to Hollywood -- meaning some celebrity AI videos could soon disappear. The platform's likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program. Public figures can use it to keep track of AI content on YouTube of themselves or request removal (takedowns are evaluated against YouTube's privacy policy, and not every request will be approved). YouTube began testing the feature with content creators last fall; in March, the company expanded the program to politicians and journalists. YouTube says the tool will cover celebrities regardless of whether they have a YouTube account.
[4]
YouTube expands AI deepfake detection to Hollywood's biggest stars
While AI-generated content was blatantly obvious at first, there are now some very good AI tools generating images and videos convincing enough to be hard to separate from reality. Of course, this just means that it's getting harder and harder to tell what's real and what's fake on the internet. There can be concerning content out there, and in a world where people's livelihoods are sometimes at stake, it's important to be able to protect oneself against content that's not quite accurate. YouTube rolled out 'Likeness detection' for creators on its platform, and now these tools are being made available to celebrities (via TechCrunch). We need more of this The great part about the Likeness detection tool is that YouTube does all the hard work for you. If you're not a creator and don't already have access to these tools, they're quite simple to reach. Just head into the YouTube creator hub; there's an option for content detection, along with a Likeness tab that will show you whether your face is being used in AI content. From this section, you'll be able to report anything that isn't genuine content. 
While this is a huge deal for content creators, it can be an even bigger deal for those in the entertainment industry. YouTube shares that "with support from leading talent agencies and management companies, including CAA, UTA, WME, and Untitled Management, we've worked to refine how likeness detection can best serve talent." It goes on to state that "celebrities and entertainers are now eligible to access this tool, regardless of whether they have a YouTube channel." For the most part, it seems pretty seamless and should allow celebrities to navigate AI content on YouTube. The hard fight ahead YouTube is just one facet of Google's giant operation, but it's a huge part of it. So, it's important that it keeps evolving its tools to protect creators. We've seen Google make it harder for AI-generated content to spread by offering tools for its most popular platforms. Perhaps one of the most important is the ability to report AI content on Google Search. Putting these types of protections and tools in place early on is important, even vital. The technology will only get better, which means the tools to fight such content will need to be at their best as well. Hopefully, we'll see more of this in the future, making it easier to understand what's real and what's not.
[5]
YouTube is coming for celebrity fakes with new AI likeness detection tech
Celebrity deepfakes are in YouTube's crosshairs with new AI detection tools YouTube is cracking down on celebrity deepfakes, and this time around, it is not just talking about the problem in vague platform-safety terms. In a new blog post, YouTube announced that it is expanding its likeness detection technology to the entertainment industry. So now, the tools will be accessible to talent agencies and management companies for the celebrities they represent. This tool works in a way that is similar to Content ID, but rather than matching copyrighted media, it looks for AI-generated content using a person's likeness and gives eligible participants the ability to find that content and request removal. Why this is YouTube's answer to AI celebrity fakes The Content ID comparison here is key, since that is exactly how YouTube wants people to think about this. If the system works well, it could give high-profile people a much faster way to spot fake videos using their face before those clips spread too far. And yes, this is clearly about celebrity fakes first. YouTube's expanded program is aimed at the entertainment industry right now, with support from major talent agencies and management companies, including CAA, UTA, WME, and Untitled Management. The company has worked with those groups to refine how the tool should serve talent, which suggests this has been shaped around the practical needs of public figures rather than launched as a generic moderation experiment. One notable detail in the announcement is that celebrities and entertainers are eligible to access the tool even if they do not have a YouTube channel. In other words, it isn't just a creator perk and functions more like a platform-wide control system. Deepfake scams, fake endorsements, and manipulated celebrity clips are no longer fringe internet weirdness. They're a real part of online dangers. 
How far is YouTube taking this As of right now, the announcement is focused on the entertainment industry. YouTube did not announce a broad public rollout that protects regular users. We also have no details regarding how fast the detection system is or how proactive the company will be against these deepfakes.
[6]
YouTube offers deepfake detection to Hollywood
Washington (United States) (AFP) - YouTube is offering Hollywood celebrities and entertainers a free detection tool to help combat their deepfakes, expanding the Google-owned video platform's efforts to guard against AI-driven impersonations. Last month, YouTube introduced its likeness protection tool -- which helps identify content in which a person's face appears altered or generated using AI technology -- to government officials, journalists, and political candidates. The platform is now extending access to entertainers including actors and musicians, who face a heightened risk of having their likeness misused -- potentially harming their careers and distorting shared realities. "We're expanding our likeness detection technology to the entertainment industry: talent agencies, management companies, and the celebrities they represent," YouTube said earlier this week. Likeness detection "looks for AI-generated content with a participant's likeness, like a deepfake of their face, and gives them the power to find it and request removal." The video giant added that celebrities and entertainers were eligible to access the tool regardless of whether they have a YouTube channel. "YouTube opening its deepfake detection capabilities to public figures reflects a turning point in how platforms approach identity protection in the age of generative AI," Alon Yamin, chief executive and co-founder of AI content detection platform Copyleaks, told AFP. "The technology to replicate a person's face, voice, and mannerisms has advanced faster than the safeguards around it, creating a gap that bad actors are already exploiting." High stakes The move comes after hyper-realistic AI videos of dead celebrities -- created with apps such as OpenAI's easy-to-use Sora -- rapidly spread online, prompting debate over the control of deceased people's likenesses. OpenAI's app also unleashed a flood of videos of celebrities such as Michael Jackson and Elvis Presley. 
Last month, OpenAI said it was shutting down its Sora app. In February, Irish director Ruairí Robinson created a stunningly realistic clip featuring Brad Pitt fighting Tom Cruise on a rooftop using a two-sentence prompt. The widely circulated clip, which sparked alarm across Hollywood, was generated with Seedance 2.0, an AI video generation tool owned by the Chinese technology company ByteDance. Robinson also created other videos depicting Pitt battling a sword-wielding "zombie ninja," and another showing him teaming up with Cruise to fight a robot. Charles Rivkin, the chairman and chief executive of the Motion Picture Association, called on ByteDance to "immediately cease its infringing activity," accusing it of disregarding copyright law that protects creators and underpins millions of jobs. YouTube said it was working with leading talent agencies to refine how likeness detection can protect entertainers. The video giant is "doing the right thing by providing these tools at no cost to the talent, so they can protect their real estate," Jason Newman of the management and production firm Untitled Entertainment told The Hollywood Reporter. "Their real estate is their face. Their real estate is their body. Their real estate is who they are, what they do, how they say it." The expansion of the detection tool follows complaints from high-profile Americans about YouTube's cumbersome process for flagging and removing deepfakes from the platform -- especially as AI accelerates the creation of fabricated content. "For celebrities, executives, and other high-profile individuals, the stakes are especially high as deepfakes can be used to spread misinformation, manipulate markets, damage reputations, or falsely imply endorsement. Robust detection is no longer optional," said Yamin. "Detection systems must be highly accurate, continuously updated, and paired with clear policies and swift takedown processes to be effective. 
"This won't eliminate deepfakes entirely, but it can significantly reduce their reach and impact by making it harder for manipulated content to go undetected or unchallenged," he added.
[7]
YouTube expands deepfake detection to wider group of users
YouTube announced an expansion of its likeness detection tools, allowing users at risk of impersonation to upload images of their faces for cross-checking against other uploads for potential imposters and deepfakes. This development significantly broadens the platform's safety measures, now open to all actors, athletes, creators, and musicians, regardless of whether they have a YouTube channel, according to The Hollywood Reporter. YouTube started developing its likeness detection tools in September 2024. The technology utilizes face scans and government IDs to verify uploaded content across the platform. The platform can alert users if their images appear in others' uploads, helping them identify misuse and take action if necessary. Previously, this feature was restricted to select creators, government officials, journalists, and political candidates. Now, the tool is available for individuals most at risk of having their livelihoods affected by deepfakes, a concern that is growing as artificial intelligence technology advances. Various deepfake trends have surfaced, including popular depictions like the "Pope in a Puffer Jacket" and fan-generated scenes meant for entertainment. YouTube will still allow certain benign depictions, provided they do not infringe on user rights. However, harmful deepfakes remain a key issue. YouTube's expanded detection process allows more individuals to remain informed and request the removal of misleading content that could harm their interests. The platform acknowledged that deepfakes are likely to increase in prevalence. YouTube stated that as AI tools develop, new forms of misrepresentation will become more complex, complicating efforts to manage misuse.
[8]
YouTube offers deepfake detection to Hollywood - The Economic Times
YouTube is now offering Hollywood celebrities and entertainers a free deepfake detection tool to combat AI-driven impersonations. This expansion of their likeness protection technology aims to help artists identify and request removal of manipulated content, safeguarding their careers and public image in the face of advancing AI. The remainder of the article reproduces the AFP report carried in source [6].
[9]
YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive)
Hollywood is an industry built on likeness and fame. But consumers are entering a world in which anyone's likeness can be co-opted, as AI-generated deepfakes proliferate. YouTube, the world's largest video platform, has developed a solution. And now it is opening it up to Hollywood. Executives at the Google-owned platform tell The Hollywood Reporter that their proprietary deepfake detection tool, years in the making, is now open to anyone at high risk of having their likeness abused: Actors, athletes, creators and musicians, whether they have a YouTube channel or not, can sign up to identify and request removal of deepfakes on its platform. "I would think of it as a foundational layer of responsibility," says Mary Ellen Coe, YouTube's chief business officer, in an interview with THR. "We've been working on this for quite some time since the genesis of thinking through AI tools and the implications on the platform ... frankly, we have not seen the vectors that are even possible, and we are working very closely with talent agencies and third-party management companies to make sure that public figures can actually get ahead of this before something negative happens." YouTube first began testing the tool nearly a year and a half ago, then expanded it a few months later to some of the most prominent creators on its platform, and earlier this year to selected politicians and public officials, but the doors are now officially wide open for those most at risk of having their livelihoods damaged by the technology. "What YouTube's offering is -- and I don't say this about many tech companies -- but out of the graciousness of their hearts, they are doing the right thing by providing these tools at no cost to the talent, so they can protect their real estate," says Jason Newman, a partner at the management and production firm Untitled Entertainment. "Their real estate is their face. 
Their real estate is their body. Their real estate is who they are, what they do, how they say it." The timing of the tool's expansion comes as the industry grapples with the continued growth of deepfakes across platforms, and with video models quickly turning hypothetical worst-case scenarios into reality for many stars. While everyone remembers when they first saw the deepfake of Pope Francis wearing a puffy coat (one of the first photo deepfakes to capture the public's imagination), the past six months alone have delivered what one high-level source calls two "oh shit moments" for Hollywood. Last fall, OpenAI launched the Sora app, and a barrage of popular characters and IP quickly flooded it, including familiar faces from actors playing film and TV characters, and historic figures like Martin Luther King Jr., who had their AI likenesses puppeteered by its users. OpenAI ultimately put a stop to the MLK and IP-driven deepfakes (and of course it shut down Sora altogether last month), but the damage was done. Then in February, videos created by Seedance 2.0 featuring Brad Pitt fighting Tom Cruise spread across the internet like wildfire. As one source notes, that was a wake-up call for Hollywood: Not only are deepfakes here, but they are progressing at lightning speed. "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale," MPA president and CEO Charles Rivkin said as videos of the faux battle between the movie stars spread. "If you think about public figures, famous figures, your image and your reputation are paramount to your livelihood," says Coe. "And the idea that that could be corrupted in some manner is really an important concept, because there have been instances of this that I think people have talked about, it's really important that they can have a semblance of control and ability to manage that." 
YouTube's chief business officer Coe began thinking about developing the deepfake detection tool more than three years ago, when the company, under CEO Neal Mohan, began to lean into the potential of generative AI on the platform. Mohan authored a blog post at the time titled "our principles for partnering with the music industry on AI technology," writing that "At this critical inflection point, it's clear that we need to boldly embrace this technology with a continued commitment to responsibility." Of course it didn't stop with the music industry. At the time, AI-generated audio was proliferating, and the state-of-the-art video models were limited to the likes of that original (and terrifying) "Will Smith eating spaghetti" example. YouTube added its first scaled gen AI tool, "Dream Screen," later that year. But as the tech progressed, so did YouTube's efforts. And it took a cue from Content ID, the platform's system to identify copyrighted content that is uploaded to its servers. "It's the same concept, the ability for them to manage their identity and how that appears in the public marketplace," Coe says. Here's how it works: A celebrity, creator or public figure (or more likely their agent, manager or another member of their team) opts in, and uploads their likeness into the system. The celebrity may have their own YouTube channel or, importantly, no public-facing YouTube page at all; either way, they can still opt in. The system then scans YouTube and flags potential replicas for that celebrity's team to review. They can choose to leave it be, or request removal. That being said, a request for removal does not guarantee that YouTube will pull the video. "There are a number of cases like parody and satire where our community guidelines would allow that to remain on the platform," Coe says. 
Indeed, YouTube has a long history of allowing parody content, even if it features the likeness of known people (one could expect that any parody videos that use deepfake tech would be labeled to acknowledge its use). Videos that feature "realistic and consequential disparagement" and perhaps more importantly "content replacement" would be removed if requested. "If someone is doing an exact replica of something that would limit the livelihood of a celebrity, an actor or a creator, because it's literal content replacement, that would be included in a takedown," she adds. In other words, if someone uses deepfake technology to create a video very similar to the content that the person is known for, it would be eligible for removal. Whether that applies to, say, fan-made trailers for movies is somewhat ambiguous. YouTube began testing the tool in late 2024 through a pilot program with CAA. "Our intention with YouTube was to ensure that we could get ahead of this as much as possible and do right by protecting our clients, ensuring at the same time that fans and people consuming content were protected from misleading content," says Alex Shannon, CAA's head of strategic development. "Frankly, by the time most of this AI-generated content featuring our clients like this is found, oftentimes it's by happenstance. And then on top of that, by the time it's discovered, a lot of the reputational damage has already been done." "It's their job to protect the reputation and the livelihood of their talent, reputation is everything in the industry," Coe says, adding that the pilot proved itself early on: "Frankly, they had a couple public figures whose likeness was co-opted, and that raised alarm bells and a sense of urgency in some cases." The platform subsequently expanded the pilot, adding more eligible people, leading up to today's news. 
"We view YouTube's likeness protection as a constructive early step, primarily because it just gives our clients visibility on one of the largest search engines in the world, and also within a system that they already understand," says Lesley Silverman, a partner at UTA. "And I think the big thing about this program is that it's free, it's opt-in, and it can be implemented even for public figures who aren't actively posting on YouTube." But Hollywood's relationship with deepfakes and AI is more complex than it may seem on the surface. While casual observers may assume disdain for the technology, talent and executives appear more open-minded than it may seem. CAA, for example, has invested in two companies in the deepfake business, Metaphysic and Deep Voodoo, betting on its creative use cases. But Disney's ill-fated deal with OpenAI was an early signal that entertainment companies view the tech as more than just a tool to bolster production capabilities. It can be a fan engagement tool. Pam Abdy, the co-CEO of Warner Bros. Pictures, captured that perspective when asked at a CNBC conference last week about the proliferation of AI-generated fan trailers for Practical Magic 2. "I know it's not great, but it's also exciting, because that means that there's a desire for it and that means that people want to come and play with the movie," Abdy said. "There could be worse with my image," quipped Practical Magic 2 star Sandra Bullock, who was interviewed alongside Abdy. "It's here. We have to observe it. We have to understand it. We have to lean into it ... I mean, we have to be incredibly cautious and aware of it because there are people who will use it for evil and not good." In fact, YouTube executives say that during the pilot program many creators only requested that a small percentage of flagged content be removed. One large YouTube creator says that most of the AI-generated content they have seen of themselves is benign, or even positive and supportive. 
That seems to be more widespread than the viral examples of Pitt vs. Cruise may suggest, with fan-made faux trailers more common than disparaging videos. "I think there are certainly individuals who would believe that any sort of use of likeness should be consented to, and I think that is also a fair argument," Shannon says. "But on average, we have seen more folks be excited about fans engaging and wanting to celebrate them in some form or another." "The reality is this content is getting made, it's out there," Silverman says. "And so the question is really whether talent has visibility into that, and whether there is a mechanism to respond. And I think given this feature, there now is this ability to respond." And that, in turn, leads to the even more complicated question of monetization. With Content ID, copyright holders can request that infringing videos be removed, demonetized... or monetized by sharing revenue with the uploader. In a world where likeness is the IP ... will celebrities soon be able to make money from deepfakes of themselves created by fans? YouTube's tool, notably, does not have that feature as of now, though Coe frames it as something they are thinking about longer-term. "We need to really focus on this foundational layer of responsibility and protection, and then we will think about rightsholders, and how do we think about monetization," Coe says. "But right now, it is really about that layer of responsibility." Agents, managers, and lawyers, of course, are already thinking a bit farther ahead, not least because of the complexities involved in the very idea of monetizing one's likeness through deepfakes. CAA even has a product called the "CAA Vault," which houses the likenesses of its clients for potential future monetization opportunities. "That is, of course, an extremely complicated problem to try and solve, which we are actively thinking about," Shannon says. 
"It's one thing if you're talking about one video featuring one particular talent, it's another thing if you're talking about a video that has two different talents, and one is okay with those, and one's not, or one is an established star and one is an up-and-coming star. I think there's a lot of complexity."

AI and deepfakes have moved to the forefront of YouTube's thinking, and the company is betting it can help its creators manage the technology even as synthetic content poses risks of its own. In his annual letter to YouTube creators earlier this year, CEO Neal Mohan listed it as one of his four priorities, calling out deepfakes as a focus point. "Ultimately, we're focused on ensuring AI serves the people who make YouTube great: the creators, artists, partners, and billions of viewers looking to capture, experience and share a deeper connection to the world around them," he wrote.

But even if monetization never comes to fruition, the tech YouTube created to identify deepfakes on its platform is necessary, prudent, and, as multiple sources stressed, incredibly important to an industry in the midst of technological disruption. Ultimately, it may be helpful to anyone at risk of being targeted. "It's like fire insurance, right? You don't think it's going to happen to you until it does, and then it's really disastrous, and you are grateful that you have it," Coe says. "I think peace of mind and control are really important benefits to ascribe to this. The peace of mind that we're hearing afterwards is quite real."

Or as Newman frames it: "When Kim Kardashian travels, she has security around her all the time. Why wouldn't you have security around you in the digital world?"
YouTube is rolling out its AI likeness detection technology to celebrities, talent agencies, and management companies. The tool scans for AI-generated content featuring unauthorized use of faces, allowing public figures to request removal. Major agencies like CAA, UTA, and WME have supported the expansion, which doesn't require celebrities to have YouTube channels.
YouTube announced it is expanding its AI likeness detection technology to the entertainment industry, giving celebrities and their representatives new tools to combat AI deepfakes on the platform [1]. The move addresses growing concerns about unauthorized AI likeness use, particularly in scam advertisements and fake endorsements that have plagued public figures across the internet.
The deepfake detection tool works similarly to YouTube's existing Content ID system, which identifies copyright-protected material in uploaded videos [1]. However, instead of matching copyrighted media, the technology scans for AI-generated content featuring simulated faces of enrolled participants. The feature allows celebrities to track how their likeness appears in videos across the platform and decide whether to request removal, submit a copyright removal request, or take no action [3].
The technology is now available to talent agencies, management companies, and the celebrities they represent, with support from major agencies including CAA, UTA, WME, and Untitled Management [1]. These organizations provided feedback to help refine how the tool serves talent in the entertainment industry [4].

Notably, celebrities and entertainers can access the tool regardless of whether they have a YouTube channel [2]. The feature scans for visual matches of enrolled participants' faces across the platform, functioning as a platform-wide control system rather than just a creator perk [5].

While the tool empowers public figures to combat celebrity deepfakes, YouTube won't remove all flagged content: the platform permits parody and satire under its rules [1]. Mary Ellen Coe, YouTube's chief business officer, explained that exact replicas that threaten a celebrity's livelihood by replacing their content would qualify for takedown, while parody and satire would remain allowed [2].

Takedown requests are evaluated against YouTube's privacy policy, and not every request will be approved [3]. During the pilot program with creators, many removed only a small portion of flagged content, predominantly focusing on negative or disparaging media [2].
The likeness detection technology first launched in a pilot program with YouTube creators last year before expanding to politicians, government officials, and journalists this spring [1]. The current expansion to celebrities represents the widest rollout yet [2].
Google, YouTube's parent company, has been working on this technology for the past two years as AI tools have made it increasingly easy to create convincing deepfakes [2]. The company noted in March that the number of removals remained "very small" [1], though specific figures haven't been disclosed.

YouTube plans to expand the technology to support audio detection for voice recreations in addition to visual likeness [1]. Coe hinted at a future where rightsholders might monetize AI-generated content rather than remove it, though this isn't currently planned for the platform [2].

Beyond its platform, YouTube has advocated for federal protections through its support for the NO FAKES Act in Washington D.C., which would regulate unauthorized AI recreations of individuals' voices and visual likenesses [1]. This legislative push reflects the company's commitment to establishing what Coe called a "foundational layer of responsibility and protection" for celebrities and their likeness [2].