Curated by THEOUTPOST
On Fri, 6 Sept, 12:02 AM UTC
5 Sources
[1]
YouTube announces tools to safeguard creators against AI generated copies
Google-owned YouTube has announced that it is developing new likeness management technology to help protect creators and artists from being copied with generative artificial intelligence. In a post, the video streaming platform described equipping creators and artists with tools to use artificial intelligence while maintaining control over how their likeness, such as their face and voice, is represented.
[2]
YouTube is developing AI detection tools for music and faces, plus creator controls for AI training
YouTube on Thursday announced a new set of AI detection tools to protect creators, including artists, actors, musicians and athletes, from having their likeness, including their face and voice, copied and used in other videos. One key component of the new detection technology involves the expansion of YouTube's existing Content ID system, which today identifies copyright-protected material. This system will be expanded with new synthetic-singing identification technology to identify AI content that simulates someone's singing voice. Other detection technologies will be developed to identify when someone's face is simulated with AI, the company says.

Also of note, YouTube is in the early stages of coming up with a solution to address the use of its content to train AI models. This has been an issue for some time, leading creators to complain that companies like Apple, Nvidia, Anthropic, OpenAI, and Google, among others, have trained on their material without their consent or compensation. YouTube hasn't yet revealed its plan to help protect creators (or to generate additional revenue of its own from AI training), saying only that it has something in the works. "...we're developing new ways to give YouTube creators choice over how third parties might use their content on our platform. We'll have more to share later this year," the announcement briefly states.

Meanwhile, the company appears to be moving forward with its promise from last year, when it said it would come up with a way to compensate artists whose work was used to create AI music. At the time, YouTube began working with Universal Music Group (UMG) and its roster of talent on a solution. It also said it would work on an expansion of its Content ID system that would be able to identify which rightsholders should be paid when their works were used in AI music. The Content ID system currently processes billions of claims per year and generates billions in revenue for creators and artists, YouTube notes.
In today's announcement, YouTube doesn't tackle the compensation component of AI music but does say it is nearing a pilot of the Content ID system's expansion with a focus on this area. Starting early next year, YouTube will begin to test the synthetic-singing identification technology with its partners, it says. Another solution in earlier stages of development will allow high-profile figures -- like actors, musicians, creators, athletes, and others -- to detect and manage AI-generated work that shows their faces on YouTube. This would go a long way toward preventing people from having their likeness used to mislead YouTube viewers, whether to endorse products and services they never agreed to support or to help spread misinformation, for instance. YouTube did not say when this system would be ready to test, only that it's in active development. "As AI evolves, we believe it should enhance human creativity, not replace it. We're committed to working with our partners to ensure future advancements amplify their voices, and we'll continue to develop guardrails to address concerns and achieve our common goals," YouTube's announcement said.
[3]
YouTube works to address AI-generated content management for creators with new tools - SiliconANGLE
To safeguard creators on its platform and maintain the integrity of their content, Google LLC-owned YouTube today unveiled upcoming tools to detect and manage content generated through artificial intelligence. The company said it has developed a new synthetic-singing detection technology within its automated content identification system, Content ID, capable of labeling artificially generated voices. This new tool will permit partners to track and manage videos that mimic their singing voices; it will be ready for prime time sometime in early 2025. Content ID is an automatic system that tracks and manages copyright violations on YouTube and allows rightsholders to request takedowns or receive revenue from the reuse of their work. The company said the automated system has processed billions of claims and brought in billions in new revenue for artists from the reuse of their work. "We're committed to bringing this same level of protection and empowerment into the AI age," YouTube said in the announcement. For example, if a video mimicked a singer's voice to produce a song, AI detection could be used to redirect advertisement revenue from that video to the rightsholder, just as if the work had been copied wholesale. YouTube also said it is developing a new technology that can detect deepfakes of faces, which will be coupled with the company's recent updates to its privacy guidelines. The tech is aimed at celebrity users such as musicians, actors and others who might have their likenesses taken and used to produce fake videos. As text-to-image AI models have become more sophisticated, so has the ease of creating deepfakes, and video models have begun to follow suit. As more AI-generated content has proliferated on the internet, AI content generators have increasingly worked to add content labels to improve transparency.
For example, Google said that it was working on ways to watermark and detect AI-generated images using Google DeepMind's SynthID, which is embedded in content created by its Gemini AI chatbot. Similarly, Meta Platforms Inc. labels AI-generated content uploaded to its social media networks using open-source technology classifiers developed by the Coalition for Content Provenance and Authenticity and the International Press Telecommunications Council. Social media video app TikTok started flagging AI-generated content in May, becoming one of the first video apps to do so. The platform's own AI tools already flag content automatically, but users are expected to add labels themselves if their uploads are AI-generated. Finally, YouTube noted that creators may also want more control over how their content might be used to train AI models. AI models require large amounts of data to build and train, including text and video, and sites such as YouTube are often scraped for content. "When it comes to other parties, such as those who may try to scrape YouTube content, we've been clear that accessing creator content in unauthorized ways violates our Terms of Service," YouTube said. The unauthorized use of copyrighted content has plagued model development with lawsuits from industry interests and creators. AI music generators Suno Inc. and Uncharted Labs Inc., better known as Udio, were sued in June by three major record labels in two separate lawsuits alleging massive music copyright infringement. Universal Music Group N.V. filed a lawsuit against AI startup Anthropic PBC alleging widespread scraping of its clients' song lyrics to train the company's chatbot Claude. YouTube said that it will continue to invest in better ways to block unauthorized access and to protect creators from having their content misused by generative AI model developers.
"That said, as the generative AI landscape continues to evolve, we recognize creators may want more control over how they collaborate with third-party companies to develop AI tools," the company said. There was no comment about revenue sharing or what this eventual collaborative effort with third-party generative AI platforms might look like. The company said that more details would be forthcoming later this year.
[4]
YouTube Develops Tool to Allow Creators to Detect AI-Generated Content Using Their Likeness
One of the side effects of the proliferation of generative artificial intelligence tools is a surge of misuse. Actors, musicians, athletes, digital creators and others are seeing their likenesses digitally copied or altered, sometimes for less-than-noble reasons. The video platform YouTube says that it is developing new tools to tackle those problems, as well as the issue of AI companies attempting to scrape its content. In a blog post published Thursday morning, YouTube announced a pair of tools meant to detect and manage AI-generated content that uses creators' voices or likenesses. The first tool is a "synthetic-singing identification technology" that will live within its existing Content ID system and will "allow partners to automatically detect and manage AI-generated content on YouTube that simulates their singing voices." The company says that it is refining the tech, with a pilot program planned for early 2025. The second tool, which is still in development, "will enable people from a variety of industries -- from creators and actors to musicians and athletes -- to detect and manage AI-generated content showing their faces on YouTube." The company did not indicate when it thinks it will be ready to roll out. It is not immediately clear what creators will be able to do with the new tools, though Content ID gives rightsholders a menu of options, from removing rights-impacted content to splitting ad revenue. YouTube is leaning into AI, releasing new tools like Dream Screen and a tool that uses AI to help creators come up with ideas for videos, but it is also leaning into the tech to help identify misuses. And the platform is also grappling with the insatiable demand for training data from AI firms like OpenAI and Anthropic. YouTube notes that it is a violation of its terms to scrape its data, and it will fight efforts to do so.
However, it also adds that some of its creators may have their own views on the subject, and it is planning tools that would let creators have more say in how third parties use their data. "As AI evolves, we believe it should enhance human creativity, not replace it," the company wrote in the blog post. "We're committed to working with our partners to ensure future advancements amplify their voices, and we'll continue to develop guardrails to address concerns and achieve our common goals."
[5]
YouTube launches two new tools for AI content management (NASDAQ:GOOGL)
YouTube has introduced new tools for creators and artists to manage and identify AI content that may have used their image or music, among other things, the company said in its blog post on Thursday. YouTube said it has developed a new synthetic-singing identification technology within Content ID that will allow partners to automatically detect and manage AI-generated content on the platform that simulates their singing voices. The video platform said it is also actively developing new technology that will enable people, from creators and actors to musicians and athletes, to detect and manage AI-generated content showing their faces on YouTube. "As the generative AI landscape continues to evolve, we recognize creators may want more control over how they collaborate with third-party companies to develop AI tools," YouTube said in a statement.
YouTube announces the development of AI detection tools and creator controls to address concerns about AI-generated content. These tools aim to safeguard creators' work and provide more control over AI training data.
In a significant move to address the growing concerns surrounding artificial intelligence (AI) in content creation, YouTube has announced the development of new tools designed to protect creators from AI-generated copies of their work [1]. This initiative comes as part of YouTube's ongoing efforts to balance the potential of AI technology with the rights and interests of its creator community.
One of the key features in development is an AI detection tool specifically targeting music and faces [2]. This tool aims to identify AI-generated content that mimics existing creators or musicians. By implementing such technology, YouTube hopes to maintain the authenticity of content on its platform and protect creators from unauthorized AI-generated replicas.
In addition to detection tools, YouTube is working on providing creators with more control over how their content is used for AI training [3]. This feature will allow creators to opt out of having their content used to train AI models, giving them greater autonomy over their intellectual property.
The introduction of these tools could have far-reaching implications for the entertainment industry, particularly for actors and musicians [4]. As AI technology becomes more sophisticated in replicating human performances, these protective measures may become crucial in preserving the value of original creative work.
YouTube's strategy in addressing AI-generated content management involves two main components [5]: detection technologies that identify AI-generated imitations of creators' singing voices and faces, and controls that give creators more say over how third-party companies use their content to develop AI tools.
This two-pronged approach demonstrates YouTube's commitment to adapting to the evolving landscape of content creation while prioritizing the interests of its creator community.
While these tools represent a significant step forward, the rapidly evolving nature of AI technology presents ongoing challenges. YouTube acknowledges that this is an area of active development, and the effectiveness of these tools will likely require continuous refinement and updates to keep pace with advancements in AI capabilities.
As the platform continues to develop and implement these new features, creators and users alike will be watching closely to see how they impact the YouTube ecosystem and the broader conversation around AI in content creation.
Reference
[1] YouTube announces tools to safeguard creators against AI generated copies
[2] YouTube is developing AI detection tools for music and faces, plus creator controls for AI training
[3] YouTube works to address AI-generated content management for creators with new tools - SiliconANGLE
[4] YouTube Develops Tool to Allow Creators to Detect AI-Generated Content Using Their Likeness - The Hollywood Reporter
[5] YouTube launches two new tools for AI content management (NASDAQ:GOOGL)
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved