Curated by THEOUTPOST
On Mon, 15 Jul, 8:01 AM UTC
2 Sources
[1]
Proposed new law would make it illegal to remove AI watermarks from content
The U.S. Senate has unveiled yet another AI protections bill among a series of similar initiatives, this time aimed at safeguarding the work of artists and other creatives. Introduced as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the new legislation would require more precise authentication of digital content and make it illegal to remove or tamper with watermarks, The Verge reported, under new AI standards developed by the National Institute of Standards and Technology (NIST).

The bill specifically requires generative AI developers to add content provenance information (identification data embedded within digital content, like watermarks) to their outputs, or to allow individuals to attach such information themselves. More standardized access to such information may help with the detection of synthetic, AI-generated content like deepfakes, and curb the use of data and other IP without consent. The bill would also authorize the Federal Trade Commission (FTC) and state attorneys general to enforce the new regulations. A regulatory pathway such as this could effectively help artists, musicians, and even journalists keep their original works out of the data sets used to train AI models -- a growing issue that has only been exacerbated by recent collaborations between AI giants like OpenAI and media companies.

Organizations such as the performers' union SAG-AFTRA, the Recording Industry Association of America, the News/Media Alliance, and the Artist Rights Alliance have come out in favor of the legislation. "We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone's basic right to control the use of their face, voice, and persona," said SAG-AFTRA national executive director Duncan Crabtree-Ireland. Should it pass, the bill would make it easier for such creatives and media owners to set terms for content use, and provide a legal pathway should their work be used without consent or attribution.
[2]
Proposed New Law Would Make It Illegal To Remove AI Watermarks From Content
A new bill introduced in the US Senate seeks to make it illegal to remove or alter watermarks on AI-generated content. The legislation aims to combat the spread of AI-generated disinformation and protect content creators.
In a significant move to address the growing concerns surrounding artificial intelligence (AI) generated content, US Senators Maria Cantwell, Marsha Blackburn, and Martin Heinrich have introduced a new bill that would make it illegal to remove or alter watermarks from AI-generated content [1]. The proposed legislation, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), aims to combat the spread of AI-generated disinformation and protect content creators.
The bill would prohibit knowingly removing, disabling, or tampering with watermarks, metadata, and other content provenance identifiers that indicate content was created using AI, covering both visual and audio content. The Federal Trade Commission (FTC) and state attorneys general would be authorized to enforce the new requirements, demonstrating the seriousness with which lawmakers are approaching this issue [2].
If passed, this legislation would have far-reaching implications for the AI industry and for content creators. It would require AI companies to embed content provenance information, such as watermarks, in their generated content, or to allow individuals to attach that information themselves. The move is seen as a step towards greater transparency in the age of AI-generated media, helping users distinguish between human-created and AI-generated content.
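To make the idea of embedded content provenance information concrete, here is a minimal, hypothetical sketch of attaching a provenance tag to an image's metadata and reading it back using the Pillow library. The `ai_provenance` field name and its JSON payload are illustrative assumptions, not the format the COPIED Act or any particular standard mandates; real provenance schemes such as C2PA-style content credentials use cryptographically signed manifests rather than a plain text field.

```python
# Minimal sketch: embedding and reading a provenance tag in a PNG.
# The "ai_provenance" key and its JSON payload are illustrative only,
# not the format required by the COPIED Act or any specific standard.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Pretend this image came from a generative model.
image = Image.new("RGB", (64, 64), "white")

# Attach provenance information as a PNG text chunk.
provenance = {"generator": "example-model-v1", "created": "2024-07-11"}
metadata = PngInfo()
metadata.add_text("ai_provenance", json.dumps(provenance))
image.save("labeled.png", pnginfo=metadata)

# Later, anyone inspecting the file can read the tag back.
with Image.open("labeled.png") as reloaded:
    tag = reloaded.text.get("ai_provenance")
    print(json.loads(tag) if tag else "no provenance information found")
```

A plain text chunk like this is trivial to strip, which is exactly the behavior the bill seeks to prohibit; production systems would pair such metadata with signed manifests or imperceptible watermarks so that removal or tampering can be detected.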
While the bill has garnered support from various quarters, including artists' unions, record labels, and news publishers, it also faces potential challenges. Critics argue that the legislation may be difficult to enforce given the rapid advancement of AI technology, and there are open questions about how it would interact with existing copyright law and fair use doctrine.
The COPIED Act is part of a broader global conversation about regulating AI-generated content. As AI technology continues to evolve, lawmakers worldwide are grappling with how to balance innovation with the need to protect against misinformation and content manipulation. This US legislation could set a precedent for similar laws in other countries.
The proposed law could have significant implications for creative industries, including film, music, and visual arts. By making it illegal to remove AI watermarks, the legislation aims to protect human artists from unfair competition and ensure proper attribution of AI-generated works. However, it also raises questions about the future of creativity and the role of AI in artistic expression.
As this bill moves through the legislative process, it will undoubtedly spark further debate about the ethical use of AI in content creation and the best ways to regulate this rapidly evolving technology. The outcome of this legislation could shape the future landscape of digital content and AI regulation for years to come.
A new bill introduced in the US Senate aims to safeguard the creations of artists and journalists from unauthorized use by AI systems. The legislation proposes measures to protect copyrighted works and ensure fair compensation for creators.
7 Sources
A bipartisan group of U.S. senators has introduced legislation aimed at protecting individuals and artists from AI-generated deepfakes. The bill seeks to establish legal safeguards and address concerns about AI exploitation in various sectors.
5 Sources
OpenAI, the creator of ChatGPT, has expressed support for a California bill that would require companies to watermark AI-generated content. This move aims to increase transparency and combat misinformation in the rapidly evolving field of artificial intelligence.
12 Sources
India emphasizes the need for a robust content tracking system to manage AI-generated content. The proposal includes watermarking and other technologies to ensure transparency and accountability in the era of generative AI.
2 Sources
The U.S. Copyright Office has called for urgent legislation to address the growing concerns surrounding AI-generated deepfakes and impersonation. The office emphasizes the need for a new federal law to protect individuals' rights and regulate the use of AI in content creation.
2 Sources