Curated by THEOUTPOST
On Sat, 13 Jul, 12:02 AM UTC
7 Sources
[1]
Senate Introduces Bill to Guard Artists' and Journalists' Creations Against AI
A new bill in the United States Senate hopes to protect artists and journalists from having their work used to train AI models or to generate AI content without their consent. The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), was authored by Senate Commerce Committee Chair Maria Cantwell (D-WA), Senate AI Working Group member Martin Heinrich (D-NM), and Commerce Committee member Marsha Blackburn (R-TN). "Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content," Senator Blackburn said in a press release announcing the bill's introduction. "The COPIED Act takes an important step to better defend common targets like artists and performers against deepfakes and other inauthentic content." The bill comes at a time when many states are also looking at ways to specifically combat the use of AI to create false information. If passed, the bill would require companies to allow content owners to protect their work from being used to train AI models. It would also require the National Institute of Standards and Technology (NIST) to create guidelines and standards for adding content provenance information to content. Those standards would then be used to determine where the content came from and whether it was generated or altered by AI. "The bipartisan COPIED Act I introduced with Senator Blackburn and Senator Heinrich will provide much-needed transparency around AI-generated content," said Senator Cantwell. "The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a provenance and watermark process that I think is very much needed."
The bill would give individuals the right to sue for violations and authorize the Federal Trade Commission (FTC) and state attorneys general to enforce its requirements. It would also make it illegal to remove, disable, or tamper with content provenance information. Several groups have already endorsed the bill, including SAG-AFTRA, Nashville Songwriters Association International, Recording Academy, National Music Publishers' Association, Recording Industry Association of America, News/Media Alliance, National Newspaper Association, America's Newspapers, Rebuild Local News, Seattle Times, National Association of Broadcasters, Artist Rights Alliance, Human Artistry Campaign, Public Citizen, The Society of Composers & Lyricists, Songwriters Guild of America, and Music Creators North America. "Protecting the life's work and legacy of artists has never been more important as AI platforms copy and use recordings scraped off the internet at industrial scale and AI-generated deepfakes keep multiplying rapidly. RIAA strongly supports provenance requirements as a fundamental building block for accountability and enforcement of creators' rights," Mitch Glazier, Chairman and CEO of the Recording Industry Association of America, said in a statement. "Leading tech companies refuse to share basic data about the creation and training of their models as they profit from copying and using unlicensed copyrighted material to generate synthetic recordings that unfairly compete with original works. We appreciate Senators Cantwell, Blackburn, and Heinrich's leadership with the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2024, which would grant much needed visibility into AI development and pave the way for more ethical innovation and fair and transparent competition in the digital marketplace."
[2]
Three senators introduce bill to protect artists and journalists from unauthorized AI use
The COPIED Act is a bipartisan effort to combat and monitor the rise of AI content and deepfakes. Three US senators introduced a bill that aims to rein in the rise and use of AI-generated content and deepfakes by protecting the work of artists, songwriters, and journalists. The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act was introduced to the Senate Friday morning. The bill is a bipartisan effort authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Maria Cantwell (D-Wash.), and Sen. Martin Heinrich (D-N.M.), according to a press alert issued by Blackburn's office. The COPIED Act would, if enacted, create transparency standards through the National Institute of Standards and Technology (NIST) to set guidelines for "content provenance information, watermarking, and synthetic content detection," according to the press release. The bill would also prohibit the unauthorized use of creative or journalistic content to train AI models or generate AI content. The Federal Trade Commission and state attorneys general would gain the authority to enforce these guidelines, and individuals who had their legally created content used by AI to create new content without their consent or proper compensation would have the right to take those companies or entities to court. The bill would also prohibit internet platforms, search engines, and social media companies from tampering with or removing content provenance information. A slew of content and journalism advocacy groups are already voicing their support for the COPIED Act to become law. They include groups like SAG-AFTRA, the Recording Industry Association of America, the National Association of Broadcasters, the Songwriters Guild of America, and the National Newspaper Association. This is not the Senate's first attempt to create guidelines and laws for the rising use of AI content, and it certainly won't be the last. In April, Rep. Adam Schiff (D-Calif.) introduced a bill called the Generative AI Copyright Disclosure Act that would force AI companies to list the copyrighted sources in their datasets. The bill has not moved out of the House Committee on the Judiciary since its introduction, according to congressional records.
[3]
New Senate bill seeks to protect artists' and journalists' content from AI use | TechCrunch
A bipartisan group of senators has introduced a new bill that seeks to protect artists, songwriters, and journalists from having their content used to train AI models or generate AI content without their consent. The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), also seeks to make it easier to identify AI-generated content and combat the rise of harmful deepfakes. Senate Commerce Committee Chair Maria Cantwell (D-WA), Senate AI Working Group member Martin Heinrich (D-NM), and Commerce Committee member Marsha Blackburn (R-TN) authored the bill. The bill would require companies that develop AI tools to allow users to attach content provenance information to their content within two years. Content provenance information refers to machine-readable information that documents the origin of digital content, such as photos and news articles. According to the bill, works with content provenance information could not be used to train AI models or generate AI content. The bill is designed to give content owners, such as journalists, newspapers, artists, and songwriters, the ability to protect their work while also setting the terms of use for their content, including compensation. It also gives them the right to sue platforms that use their content without their permission or that have tampered with content provenance information. The COPIED Act would require the National Institute of Standards and Technology (NIST) to create guidelines and standards for content provenance information, watermarking, and synthetic content detection. These standards would be used to determine if content has been generated or altered by AI, as well as where AI content originated from. "The bipartisan COPIED Act I introduced with Senator Blackburn and Senator Heinrich will provide much-needed transparency around AI-generated content," said Senator Cantwell in a press release.
"The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed." The bill is backed by several artists' groups, including SAG-AFTRA, the National Music Publishers' Association, the Seattle Times, the Songwriters Guild of America, and the Artist Rights Alliance, among others. The introduction of the COPIED Act comes amid an influx of AI-related bills as lawmakers look to regulate the technology. Last month, Senator Ted Cruz introduced a bill that would hold social media companies like X and Instagram accountable for removing and policing deepfake porn. The Take It Down Act came after AI-generated pornographic photos of celebrities like Taylor Swift made the rounds on social media. In May, Senate Majority Leader Chuck Schumer introduced a "roadmap" for addressing AI that would boost funding for AI innovation, tackle the use of deepfakes in elections, use AI to strengthen national security, and more. In addition, Axios reported earlier this year that state legislatures are introducing 50 AI-related bills per week. According to the report, there were 407 total AI-related bills across more than 40 states as of February, a steep increase from the 67 related bills introduced a year earlier. Amid the emergence and popularity of AI tools, President Joe Biden issued an executive order last October to set standards for AI safety and security. The standards would require developers of AI systems to share their safety test results and other critical information with the government before deploying their systems to the public. It's worth noting that former President Donald Trump has vowed to repeal the executive order if re-elected.
[4]
Proposed new law would make it illegal to remove AI watermarks from content
New Senate legislation outlines a path to standardized AI content detection. The U.S. Senate has unveiled yet another AI protections bill among a series of similar initiatives, this time aimed at safeguarding the work of artists and other creatives. Introduced as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the new legislation would require more precise authentication of digital content and make the removal of or tampering with watermarks illegal, The Verge reported, under new AI standards developed by the National Institute of Standards and Technology (NIST). The bill specifically requires generative AI developers to add content provenance information (identification data embedded within digital content, like watermarks) to their outputs, or allow individuals to attach such information themselves. More standardized access to such information may aid the detection of synthetic, AI-generated content like deepfakes, and curb the use of data and other IP without consent. It would also authorize the Federal Trade Commission (FTC) and state attorneys general to enforce the new regulations. A regulatory pathway such as this could effectively help artists, musicians, and even journalists keep their original works out of the data sets used to train AI models -- a growing public accessibility issue that's only been exacerbated by recent collaborations between AI giants like OpenAI and media companies. Organizations like the artists' union SAG-AFTRA, the Recording Industry Association of America, the News/Media Alliance, and the Artist Rights Alliance have come out in favor of the legislation. "We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone's basic right to control the use of their face, voice, and persona," said SAG-AFTRA national executive director Duncan Crabtree-Ireland.
Should it pass, the bill would make it easier for such creatives and media owners to set terms for content use, and provide a legal pathway should their work be used without consent or attribution.
[5]
Proposed new law would make it illegal to remove AI watermarks from content
The U.S. Senate has unveiled yet another AI protections bill among a series of similar initiatives, this time aimed at safeguarding the work of artists and other creatives. Introduced as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the new legislation would require more precise authentication of digital content and make the removal of or tampering with watermarks illegal, The Verge reported, under new AI standards developed by the National Institute of Standards and Technology (NIST). The bill specifically requires generative AI developers to add content provenance information (identification data embedded within digital content, like watermarks) to their outputs, or allow individuals to attach such information themselves. More standardized access to such information may aid the detection of synthetic, AI-generated content like deepfakes, and curb the use of data and other IP without consent. It would also authorize the Federal Trade Commission (FTC) and state attorneys general to enforce the new regulations. A regulatory pathway such as this could effectively help artists, musicians, and even journalists keep their original works out of the data sets used to train AI models -- a growing public accessibility issue that's only been exacerbated by recent collaborations between AI giants like OpenAI and media companies. Organizations like the artists' union SAG-AFTRA, the Recording Industry Association of America, the News/Media Alliance, and the Artist Rights Alliance have come out in favor of the legislation. "We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone's basic right to control the use of their face, voice, and persona," said SAG-AFTRA national executive director Duncan Crabtree-Ireland.
Should it pass, the bill would make it easier for such creatives and media owners to set terms for content use, and provide a legal pathway should their work be used without consent or attribution.
[6]
Bipartisan Senators Push For AI Content Authentication With COPIED Act
The bill directs that standards be developed for verifying content origin and includes measures to prevent tampering and unauthorized training. A bipartisan group of U.S. senators introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act). According to The Verge, the proposed legislation aims to authenticate and detect AI-generated content, providing a shield for journalists and artists against unauthorized use of their work. The COPIED Act, if passed, will task the National Institute of Standards and Technology (NIST) with developing standards and guidelines to verify the origin of content and identify synthetic content through methods like watermarking. Additionally, the bill mandates security measures to prevent tampering and requires AI tools used in creative or journalistic endeavors to include origin information that cannot be removed. Such content would also be barred from being used to train AI models without explicit permission. Senate Commerce Committee Chair Maria Cantwell (D-Wash.), alongside Senate AI Working Group member Martin Heinrich (D-N.M.) and Commerce Committee member Marsha Blackburn (R-Tenn.), spearheaded the bill. It is part of a broader effort in the Senate to understand and regulate AI technologies. Senate Majority Leader Chuck Schumer (D-N.Y.) previously emphasized the importance of creating an AI regulatory framework, with specific laws being deliberated in individual committees. Several organizations representing content creators have expressed strong support for the COPIED Act. The bill has garnered endorsements from SAG-AFTRA, the Recording Industry Association of America, the News/Media Alliance, and the Artist Rights Alliance.
Duncan Crabtree-Ireland, SAG-AFTRA's national executive director and chief negotiator, stressed the urgency of the bill: "The capacity of AI to produce stunningly accurate digital representations of performers poses a real and present threat to the economic and reputational well-being and self-determination of our members." "We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone's basic right to control the use of their face, voice, and persona," he added.
[7]
US senators introduce bipartisan bill to counter gen-AI deepfakes
A bipartisan group of U.S. senators has introduced legislation intended to counter the rise of deepfakes and protect creators from theft through generative artificial intelligence. "Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content," said U.S. Senator Marsha Blackburn (R-Tenn.). The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, is co-sponsored by Blackburn, Maria Cantwell (D-Wash.), and Martin Heinrich (D-N.M.), who is also a member of the Senate AI Working Group. "The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a provenance and watermark process that I think is very much needed," Cantwell said. The act, if passed, would require the National Institute of Standards and Technology to create guidelines and standards for "provenance information, watermarking, and synthetic content detection." It would also prohibit the unauthorized use of content by journalists, artists, and musicians to train AI models or generate AI content. The proposed law would give individuals the right to sue violators and authorize the Federal Trade Commission and state attorneys general to enforce the bill. It would also prohibit tampering with or disabling content provenance information. Content provenance information refers to "state-of-the-art, machine-readable information documenting the origin and history of a piece of digital content, such as an image, a video, audio, or text," according to the bill. "Deepfakes are a real threat to our democracy and to Americans' safety and well-being," Heinrich said.
"I'm proud to support Senator Cantwell's COPIED Act that will provide the technical tools needed to help crack down on harmful and deceptive AI-generated content and better protect professional journalists and artists from having their content used by AI systems without their consent." A wide array of organizations endorsed the bill, including SAG-AFTRA, Nashville Songwriters Association International, Recording Academy, National Music Publishers' Association, Recording Industry Association of America, News/Media Alliance, and the National Newspaper Association, to name a few. Microsoft-backed (NASDAQ:MSFT) OpenAI has created partnerships with several major media companies to use their archives to train AI models without infringing on copyrighted material. In May, OpenAI stopped using the voice "Sky" in the audio version of ChatGPT after it was pointed out that it sounded identical to actress Scarlett Johansson. Last year, OpenAI was sued by a number of authors, including John Grisham and George R.R. Martin, for using their work to train AI models. This is not the first U.S. Senate bill targeting the use of AI-generated deepfakes. Last month, U.S. Senator John Hickenlooper (D-Colo.) co-sponsored another bipartisan bill, known as the Take It Down Act, which specifically targets the use of AI-generated deepfake pornography. "AI innovation is going to change so much about our world, but it can't come at the cost of our children's privacy and safety," said Hickenlooper. "We have a narrow window to get out in front of this technology. We can't miss it."
A new bill introduced in the US Senate aims to safeguard the creations of artists and journalists from unauthorized use by AI systems. The legislation proposes measures to protect copyrighted works and ensure fair compensation for creators.
In a significant move to address the growing concerns surrounding artificial intelligence and its impact on creative industries, three US senators have introduced a new bill aimed at protecting the works of artists and journalists from unauthorized use by AI systems 1.
The proposed legislation, known as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), seeks to establish clear guidelines for the use of copyrighted material in AI training 2. The bill would require AI companies to obtain explicit permission from copyright holders before using their works to train large language models (LLMs) and other AI systems.
One of the primary objectives of the bill is to ensure that artists, writers, and other content creators are fairly compensated for the use of their work in AI training. This addresses a longstanding concern in the creative community about the potential exploitation of their intellectual property by AI companies 3.
The proposed legislation also includes provisions for the use of digital watermarks on AI-generated content. These watermarks would help identify content created by AI systems, promoting transparency and accountability. Importantly, the bill would make it illegal to remove or alter these AI watermarks, ensuring that users can distinguish between human-created and AI-generated content 5.
If passed, the bill would have significant implications for AI companies and users of AI-generated content. It would require AI developers to be more transparent about their data sources and potentially limit the scope of training data available to them. This could lead to changes in how AI models are developed and deployed 4.
The bill has garnered bipartisan support, with senators from both major parties backing the legislation. This reflects the growing consensus on the need to regulate AI's impact on creative industries. While many artists and content creators have welcomed the proposed measures, some AI companies and advocates for open-source AI development have expressed concerns about potential limitations on innovation 1.
As the bill moves through the legislative process, it is likely to face scrutiny and potential amendments. Critics argue that defining the boundaries of fair use in AI training could be challenging, and enforcing such regulations might prove complex in practice. Nevertheless, the introduction of this bill marks a significant step towards addressing the ethical and legal challenges posed by AI in the creative sector 3.