Curated by THEOUTPOST
On Fri, 3 Jan, 4:01 PM UTC
8 Sources
[1]
Anthropic Navigates Legal Hurdles While Pushing AI Innovation In Enterprise Sector
Anthropic's enterprise focus continues amid legal scrutiny over AI-generated content.

Amazon-backed Anthropic, a rising player in artificial intelligence, faces legal challenges as Universal Music Group, Concord Music Group, and ABKCO secured court-approved 'guardrails' on Thursday to address copyright concerns. The lawsuit, filed in 2023, accuses Anthropic's Claude AI of training on lyrics from over 500 songs, including hits by Katy Perry, The Rolling Stones, and Beyoncé, with the publishers seeking $150,000 per infringement.

'Guardrails' To Protect Copyrights

The court's stipulation requires Anthropic to implement measures in its current and future AI models to prevent unauthorized reproduction of copyrighted material. Universal Music Group welcomed the decision, stating the measures validate its claims against Claude's infringing outputs. However, the legal battle is far from over: the publishers' motion for a preliminary injunction, aiming to stop Anthropic from using copyrighted lyrics in future AI training, remains unresolved. Judge Eumi K. Lee on Thursday outlined that Anthropic must investigate any alleged breaches and provide detailed responses to the publishers on how issues will be addressed.

Claude's Enterprise Focus

While navigating legal hurdles, Anthropic continues to innovate. At CES 2025, Panasonic unveiled Umi, a wellness coach powered by Claude AI, showcasing the technology's potential beyond consumer chatbots and underscoring Anthropic's focus on enterprise applications and real-world use cases. The company's partnership with Amazon Web Services (AWS) strengthens this position: by leveraging AWS Trainium chips to train its Claude models, Anthropic aims to lead in enterprise-focused AI solutions.

Balancing Growth And Compliance

With a valuation now at $60 billion, backed by major investments from Amazon and Alphabet Inc, Anthropic's rapid ascent underscores confidence in its AI capabilities. However, its ability to align innovation with copyright safeguards will be critical. As the generative AI boom accelerates, Anthropic must navigate legal challenges to solidify its position as a leader in the evolving AI landscape.
[2]
Anthropic agrees to work with music publishers to prevent copyright infringement
In 2023, the company was accused of using unauthorized song lyrics to train its Claude AI model.

Anthropic has partly resolved a legal disagreement that saw the AI startup draw the ire of the music industry. In October 2023, a group of music publishers, including Universal Music and ABKCO, filed a copyright infringement complaint against Anthropic. The group alleged that the company had trained its Claude AI model on at least 500 songs to which they held rights and that, when prompted, Claude could reproduce the lyrics of those tracks either partially or in full. The lyrics the publishers said Anthropic had infringed on included Beyoncé's "Halo" and Maroon 5's "Moves Like Jagger."

In a court-approved stipulation the two sides came to on Thursday, Anthropic agreed to maintain its existing guardrails against outputs that reproduce, distribute or display copyrighted material owned by the publishers, and to implement those same measures when training its future AI models. At the same time, the company said it would respond "expeditiously" to any copyright concerns from the group and promised to provide written responses detailing how and when it plans to address them. In cases where the company does not intend to address an issue, it must clearly state as much.

"Claude isn't designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement," an Anthropic spokesperson told Engadget. "Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

As mentioned, Thursday's pact doesn't fully resolve the original dispute between Anthropic and the group of music publishers that sued the company. The publishers are still seeking an injunction against Anthropic to prevent it from using unauthorized copies of song lyrics to train future AI models. A ruling on that matter could arrive sometime in the next few months.
[3]
Anthropic AI Copyright Case Centers on 'Guardrails' for Song Lyrics
A Tennessee judge has approved an interim agreement in a lawsuit against Anthropic alleging that the AI company used copyrighted song lyrics to train its system without authorization or payment.

Music publishers, including Universal Music Group, Concord Music Group and ABKCO, filed a copyright infringement lawsuit in 2023. They claimed Anthropic's Claude AI chatbot was trained on, and produces results to queries about, lyrics from at least 500 songs by major artists, including Beyoncé and Katy Perry, without obtaining permission or compensating rights holders.

On Thursday, US District Judge Eumi Lee signed off on an agreement with the publishers that requires Anthropic to maintain its current copyright guardrails - meaning Claude will not provide lyrics to existing songs or generate new lyrics based on them - throughout the litigation process. The agreement states that nothing prevents Anthropic from "expanding, improving, optimizing, or changing the implementation of such guardrails, provided that such changes do not materially diminish the efficacy of the guardrails." It also allows publishers to notify Anthropic at any time if they believe the guardrails are insufficient, at which point the company must address their concerns. The court is expected to decide in the coming months whether songs owned by music publishers can be used for training AI systems.

The case underscores the tension between generative AI developers and copyright holders, and not just in the music industry. Publishers including The New York Times have sued ChatGPT maker OpenAI and Microsoft, and The Wall Street Journal and Dow Jones have sued Perplexity, all alleging copyright infringement. Other publishers, meanwhile, have struck licensing deals with AI companies including OpenAI and Meta.

Anthropic, backed by Amazon, Google, Salesforce and others, was founded by former OpenAI researchers in 2021. Its system is trained on datasets that may include copyrighted materials such as song lyrics, which has raised questions about fair use and proper licensing. In a statement shared with CNET, the company emphasized its commitment to copyright compliance, including by entering into the current agreement. "Claude isn't designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement," a spokesperson said. "We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

Universal Music Group, Concord Music Group and ABKCO did not immediately respond to a request for comment, but the original lawsuit said Anthropic "unlawfully copies and disseminates vast amounts of copyrighted works." "Just like the developers of other technologies that have come before, from the printing press to the copy machine to the web-crawler, AI companies must follow the law," the lawsuit said.

Anthropic continues to face other legal challenges. Earlier this year, the company was named in a class action copyright infringement lawsuit from authors alleging it "built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books."
[4]
Anthropic Agrees to Stop Claude Chatbot's Ability to Spit Out Lyrics From Copyrighted Songs
The AI company still argues that training Claude on copyrighted material constitutes fair use.

Anthropic has reached a settlement and agreed to put a stop to showing users music lyrics based on copyrighted songs from several music publishers. Back in 2023, the AI company was sued by Universal Music Group, Concord Music Group, and others after it was found that its Claude chatbot would return lyrics to songs like Beyoncé's "Halo" when prompted.

The entertainment industry is one of the most litigious out there and fights vigorously to defend its copyrights; just look back at historic cases, from the destruction of Napster to the multi-year legal battle Viacom fought against YouTube. More recently, the popular lyric annotation website Rap Genius (now just called Genius) was sued by the National Music Publishers Association for reproducing lyrics of copyrighted songs. Music publishers suing Anthropic acknowledged that other websites like music annotation platform Genius distribute lyrics online, but noted that Genius eventually began to pay a license fee to publish them on its website.

In this latest suit, the music publishers claimed that Anthropic scraped lyrics from the web and intentionally removed watermarks that are placed on lyric websites to help identify where the copyrighted material was published. After Genius began licensing song lyrics from music publishers, it cleverly inserted extra apostrophes into the lyrics so that, in the event the material was inappropriately copied, Genius would know the material it explicitly paid for had been stolen and be able to demand removal.

Anthropic did not concede the claims, but as part of the settlement agreed to better maintain guardrails that prevent its AI models from infringing on copyrighted material. It will also work in good faith with music publishers when it is found that the guardrails are not working. Anthropic defended the act of using song lyrics and other copyrighted material for training AI models, telling The Hollywood Reporter, "Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

This argument has been central to AI companies' defense of copyrighted material showing up in their models. Advocates claim that remixing copyrighted content from websites like the New York Times constitutes fair use so long as it has been materially altered through derivative works. News and music publishers disagree, and the lawsuit against Anthropic is not entirely over yet. The music publishers are still seeking a court injunction preventing Anthropic from training future models on any copyrighted music lyrics whatsoever.

The concern about abuse stems from the potential for Anthropic's models to be used in music generation that causes a musician to lose control of their artistry. It is not an unfounded concern, as it has been widely speculated that OpenAI imitated the voice of Scarlett Johansson after she declined to provide her voice for its AI voice model. Tech companies like OpenAI and Google make their money on platforms and network effects, not by selling copyrighted material, which has always led to this tension between Hollywood and Silicon Valley. Art is merely "content" meant to serve the greater purpose of generating engagement and selling ads. The AI slop that's filling Facebook today is representative of how tech companies see it all as interchangeable.

Publishers like the Times have been fighting high-profile battles against the likes of OpenAI in court to stop them from hoovering up copyrighted material. OpenAI has tried to respond by licensing material from some companies, and another AI player, Perplexity, has begun testing a revenue-sharing model. But publishers want more control and not to be forced into these shaky deals that could end at any time and still drive people away from their websites. Which is all to say, this is far from the end of the story when it comes to disputes over copyrighted material in large language models.
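The apostrophe trick described above is, at its core, a plain-text watermark: glyphs that look identical to a reader can differ at the byte level and so encode a hidden fingerprint that survives copy-and-paste. A minimal, purely hypothetical Python sketch of how such a fingerprint might be detected follows; the exact scheme Genius used is not documented in the articles here, and the glyph-to-bit mapping, fingerprint, and sample text are assumptions for illustration only.

```python
# Illustrative only: a toy apostrophe-fingerprint detector. Assumes a
# hypothetical watermark in which the sequence of straight vs. curly
# apostrophes in licensed lyrics spells out a known bit pattern.

STRAIGHT = "'"      # ASCII apostrophe -> bit 0
CURLY = "\u2019"    # right single quotation mark -> bit 1

def apostrophe_bits(text: str) -> str:
    """Extract a bit string from the order of apostrophe glyphs in the text."""
    return "".join("0" if ch == STRAIGHT else "1"
                   for ch in text if ch in (STRAIGHT, CURLY))

def carries_fingerprint(text: str, fingerprint: str) -> bool:
    """Return True if the text's apostrophe pattern contains the fingerprint."""
    return fingerprint in apostrophe_bits(text)

if __name__ == "__main__":
    FINGERPRINT = "0110"  # hypothetical pattern a publisher might embed
    suspect = "I\u2019m here, don't go, it\u2019s all we\u2019ve got, can't stop"
    print(carries_fingerprint(suspect, FINGERPRINT))  # -> True
```

Because the two glyphs look the same to a human reader, the pattern normally travels along with any copied text unless it is deliberately stripped, which is why the publishers' allegation that watermarks were intentionally removed carries weight.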
[5]
Anthropic reaches deal with music publishers over lyric dispute
Anthropic has made a deal to settle parts of a copyright infringement lawsuit brought against the maker of the Claude AI model for allegedly distributing protected song lyrics. The agreement was signed off by US District Judge Eumi Lee on Thursday, requiring Anthropic to apply existing guardrails in the training of future AI models and to establish a procedure for music publishers to intervene when copyright infringement is suspected.
[6]
Anthropic Confirms Claude Chatbot Won't Use Copyrighted Song Lyrics
A judge has signed off on an agreement that partly resolves the lawsuit between AI firm Anthropic and Universal Music Group, Concord Music Group, and a slew of other record labels and subsidiaries, asserting Anthropic can't recite the publishers' copyrighted song lyrics or use them to make new ones.

The legal stipulation The Hollywood Reporter shared on Thursday states that Anthropic will continue to block its AI products (like its Claude chatbots) from reproducing protected song lyrics and will apply these rules to any future products as well. If the record labels aren't happy with the AI's responses or outputs, they can ask Anthropic to make additional changes to prevent "derivative works" or reproductions of lyrics.

At time of writing, Claude refuses requests to recite lyrics from Universal Music artists like Taylor Swift as well as copyrighted text from well-known book authors outside of the lawsuit, suggesting Anthropic has already applied these copyright-related restrictions to categories beyond just music. Based on the model's responses in a quick test, however, it appears that Claude still uses the copyrighted material internally to offer its own analysis of the songs instead of reproducing them verbatim for a human user.

Anthropic said in a statement that Claude was not "designed" for copyright infringement purposes. "We have numerous processes in place designed to prevent such infringement," it said, adding: "Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

The aforementioned music publishers sued Anthropic in 2023, alleging widespread copyright infringement. "A defendant cannot reproduce, distribute, and display someone else's copyrighted works to build its own business unless it secures permission from the rightsholder," the labels' attorneys wrote in the original complaint.

But the music industry isn't the only one concerned about how AI models may be using their copyrighted content without permission or compensation. Last year, three authors sued Anthropic, alleging copyright infringement because they found their books in a large dataset known as "The Pile" on which AI models, including the Claude models, were reportedly trained. News outlets have also been fighting other AI firms over copyright concerns, like The New York Times' legal battle against Microsoft and OpenAI or Condé Nast's letters to Perplexity AI.

Last month, Anthropic made its Claude 3.5 Haiku available via the Claude.ai website and its mobile app. The AI firm has previously claimed that its 3.5 Sonnet model is more powerful than OpenAI's GPT-4o.
[7]
Music Publishers Reach Deal With AI Giant Anthropic Over Copyrighted Song Lyrics
A trio of major music publishers suing Anthropic over the use of lyrics to train its AI system have reached a deal with the Amazon-backed company to resolve some parts of a pending preliminary injunction.

U.S. District Judge Eumi Lee on Thursday signed off on an agreement between the two sides mandating Anthropic to maintain existing guardrails that prevent its Claude AI chatbot from providing lyrics to songs owned by the publishers or creating new song lyrics based on the copyrighted material.

In a statement, Anthropic said Claude "isn't designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement." It added, "Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

Universal Music Group, Concord Music Group and ABKCO, among other publishers, sued Anthropic in Tennessee federal court in 2023, accusing it of copyright infringement for training its AI system on lyrics from at least 500 songs from artists such as Katy Perry, the Rolling Stones and Beyoncé. One example: when asked for the lyrics to Perry's "Roar," which is owned by Concord, Claude provided a near-identical copy of the words in the song, according to the complaint. At the heart of the lawsuit were allegations that there's already an existing market that's being undercut by Anthropic pilfering lyrics without consent or payment. The publishers pointed to music lyric aggregators and websites that have licensed their works. The lawsuit marked the first legal action taken by a music publisher against an AI firm over the incorporation of lyrics in a large language model.

Under the agreement, Anthropic will apply already-implemented guardrails in the training of new AI systems. The deal also provides an avenue for music publishers to intervene if the guardrails aren't working as intended. "Publishers may notify Anthropic in writing that its Guardrails are not effectively preventing output that reproduces, distributes, or displays, in whole or in part, the lyrics to compositions owned or controlled by Publishers, or creates derivative works based on those compositions," the filing states. "Anthropic will respond to Publishers expeditiously and undertake an investigation into those allegations, with which Publishers will cooperate in good faith."

Anthropic has maintained in court filings that existing guardrails make it unlikely that any future user could prompt Claude to produce any material portion of the works-in-suit. They consist of a "range of technical and other measures -- at all levels in the development lifecycle -- that aim to prevent users from simply prompting Claude to regurgitate training data," said a company spokesperson.

The court is expected to issue a ruling in the coming months on whether to issue a preliminary injunction that would bar Anthropic from training future models on lyrics owned by the publishers.
[8]
Anthropic gives court authority to intervene if chatbot spits out song lyrics
On Thursday, music publishers got a small win in a copyright fight alleging that Anthropic's Claude chatbot regurgitates song lyrics without paying licensing fees to rights holders.

In an order, US district judge Eumi Lee outlined the terms of a deal reached between Anthropic and publisher plaintiffs who license some of the most popular songs on the planet, which she said resolves one aspect of the dispute. Through the deal, Anthropic admitted no wrongdoing and agreed to maintain its current strong guardrails on its AI models and products throughout the litigation. These guardrails, Anthropic has repeatedly claimed in court filings, effectively work to prevent outputs containing actual song lyrics to hits like Beyoncé's "Halo," Spice Girls' "Wannabe," Bob Dylan's "Like a Rolling Stone," or any of the 500 songs at the center of the suit.

Perhaps more importantly, Anthropic also agreed to apply equally strong guardrails to any new products or offerings, granting the court authority to intervene should publishers discover more allegedly infringing outputs. Before seeking such an intervention, publishers may notify Anthropic of any allegedly harmful outputs. That includes any outputs that include partial or complete song lyrics, as well as any derivative works that the chatbot may produce mimicking the lyrical style of famous artists. After an expeditious review, Anthropic will provide a "detailed response" explaining any remedies or "clearly" stating "its intent not to address the issue."

Although the deal does not settle publishers' more substantial complaint alleging that Anthropic training its AI models on their works violates copyright law, it is likely a meaningful concession, as it potentially builds in more accountability.
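Neither the order nor the filings quoted in this coverage spell out how those guardrails are built. Purely as an illustration of the general shape of an output-side filter (not Anthropic's actual implementation; the corpus, n-gram size, overlap threshold, and function names below are assumptions), a candidate response could be compared against a set of protected lyrics before it is returned:

```python
# Illustrative only: a toy output-side "guardrail" that blocks responses
# overlapping too heavily with a protected-lyrics corpus.

def ngrams(text: str, n: int = 8) -> set:
    """Lowercased word n-grams, used as a crude fingerprint of the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def blocked_by_guardrail(candidate: str, protected_corpus: list,
                         n: int = 8, max_shared: int = 2) -> bool:
    """True if the candidate shares more than max_shared n-grams with any protected work."""
    cand = ngrams(candidate, n)
    return any(len(cand & ngrams(work, n)) > max_shared for work in protected_corpus)

def respond(candidate: str, protected_corpus: list) -> str:
    """Return the candidate response, or a refusal if the guardrail trips."""
    if blocked_by_guardrail(candidate, protected_corpus):
        return "I can't reproduce those lyrics, but I can describe the song instead."
    return candidate

if __name__ == "__main__":
    protected = ["placeholder lyrics of a protected song would appear here word by word by word"]
    print(respond("a short original reply about the song's themes and history", protected))
```

Real systems are presumably far more involved (the spokesperson quoted above describes measures "at all levels in the development lifecycle"), but the stipulation's obligations map onto this shape: keep the filter in place, extend it to new products, and tighten it when publishers flag misses.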
Anthropic, the AI company behind Claude, has reached a partial settlement with music publishers over copyright infringement claims, agreeing to implement guardrails against unauthorized use of song lyrics in its AI models.
Anthropic, the AI company behind the Claude chatbot, has reached a partial settlement in a copyright infringement lawsuit filed by major music publishers. The lawsuit, initiated in 2023 by Universal Music Group, Concord Music Group, and ABKCO, accused Anthropic of using lyrics from over 500 copyrighted songs to train its AI model without authorization or compensation [1][2].
On Thursday, US District Judge Eumi K. Lee approved a stipulation requiring Anthropic to:

- maintain its existing guardrails, which prevent Claude from reproducing, distributing, or displaying the publishers' copyrighted lyrics;
- apply equally effective guardrails when training and deploying future AI models and products; and
- respond expeditiously when publishers flag allegedly infringing outputs, investigating the claims and providing a detailed written response or clearly stating its intent not to address the issue.
While this agreement marks progress, the legal battle is not fully resolved. The publishers are still seeking a preliminary injunction to prevent Anthropic from using unauthorized copies of song lyrics in future AI model training [2][3]. This case highlights the growing tension between AI developers and copyright holders across various industries [3].
Anthropic maintains that its use of potentially copyrighted material for AI training constitutes fair use under existing copyright law [2][4]. The company, valued at $60 billion and backed by major investors like Amazon and Alphabet Inc, continues to focus on enterprise applications and real-world use cases for its AI technology [1].
This case underscores the challenges AI companies face in navigating copyright issues:

- defending the use of copyrighted material in training data as fair use before courts that have yet to rule on the question;
- building guardrails that keep models from reproducing protected works verbatim; and
- contending with parallel claims from authors, news outlets, and other rights holders, alongside growing pressure to negotiate licensing deals.
As the AI industry continues to evolve, the resolution of this and similar cases will likely shape the future landscape of AI development and copyright law. The outcome may influence how AI companies approach training data acquisition and potentially lead to new standards for licensing and fair use in the context of machine learning [3][4].