14 Sources
[1]
Authors celebrate "historic" settlement coming soon in Anthropic class action
Authors are celebrating a "historic" settlement expected to be reached soon in a class-action lawsuit over Anthropic's AI training data. On Tuesday, US District Judge William Alsup confirmed that Anthropic and the authors "believe they have a settlement in principle" and will file a motion for preliminary approval of the settlement by September 5. The settlement announcement comes after Alsup certified what AI industry advocates criticized as the largest copyright class action of all time. Although the lawsuit was raised by three authors -- Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber -- Alsup allowed up to 7 million claimants to join based on the large number of books that Anthropic may have illegally downloaded to train its AI models. If every author in the class filed a claim, industry advocates warned, it would "financially ruin" the entire AI industry. It's unclear if the class certification prompted the settlement or what terms authors agreed to, but according to court filings, the settlement terms are binding. A lawyer representing authors, Justin A. Nelson, told Ars that more details would be revealed soon, and he confirmed that the suing authors are claiming a win for possibly millions of class members. "This historic settlement will benefit all class members," Nelson said. "We look forward to announcing details of the settlement in the coming weeks." Ars could not immediately reach Anthropic for comment, but Anthropic had previously argued that the lawsuit could doom the emerging company, which was started by former OpenAI employees in 2021.
[2]
Anthropic settles AI book-training lawsuit with authors
Anthropic has settled a class action lawsuit with a group of fiction and non-fiction authors, as announced in a filing on Tuesday with the Ninth Circuit Court of Appeals. Anthropic had won a partial victory in a lower court ruling, and was in the process of appealing that ruling. No details of the settlement were made public, and Anthropic did not immediately respond to a request for comment. Called Bartz v. Anthropic, the case deals with Anthropic's use of books as training material for its large language models. The court had ruled that Anthropic's use of the books qualified as fair use, but because many of the books were pirated, Anthropic still faced significant financial penalties for its conduct connected to the case. Nonetheless, Anthropic had applauded the earlier ruling, framing it as a victory for generative AI models. "We believe it's clear that we acquired books for one purpose only -- building large language models -- and the court clearly held that use was fair," the company told NPR after the ruling in June.
[3]
Anthropic Settles High-Profile AI Copyright Lawsuit Brought By Book Authors
Anthropic faced the prospect of more than $1 trillion in damages, a sum that could have threatened the company's survival if the case went to trial. Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what may have been a financially devastating outcome in court. The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment. In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic largely siding with Anthropic, finding that the company's usage of the books was "fair use," and thus legal. But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called "shadow libraries," including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action suit for pirating their works; the legal showdown was slated to begin this December. Statutory damages for this kind of piracy start at $750 per infringed work, according to US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately seven million works, the AI company was potentially facing court-imposed penalties amounting to billions of dollars, or even more than $1 trillion. "It's a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case. And the company recently hired a new trial team," says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. "But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in 'doomsday' amounts." Most authors who may have been part of the class action lawsuit were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a "list of affected works" to the court on September 1. This means that many of these writers were not privy to the negotiations that took place. "The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled," says James Grimmelmann, a professor of digital and internet law at Cornell University. "That will be a very important barometer of where copyright owner sentiment stands." Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally.
Settlements don't set legal precedent, but the details of this case will likely still be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.
[4]
Anthropic Will Settle Lawsuit With Authors Over Pirated AI Training Materials
Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its AI models. The parties in the lawsuit filed a motion indicating the agreement with the Ninth Circuit of the US Court of Appeals on Tuesday. We don't yet know the terms of the settlement. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon. "This historic settlement will benefit all class members," he said. "We look forward to announcing details of the settlement in the coming weeks." Anthropic did not respond to a request for comment by the time of publication. This settlement is the latest update in a string of legal moves and rulings between the AI company and authors. Earlier this summer, US senior district judge William Alsup ruled that Anthropic's use of the copyrighted materials was justifiable as fair use. Fair use is a concept in copyright law that allows people to use copyrighted content without the rights holder's permission for specific purposes, like education. The ruling was the first time a court sided with an AI company and said its use of copyrighted material qualified as fair use, though Alsup took care to call out in his ruling that this may not always be true in future cases. Two days after Anthropic's victory, Meta won a similar case under fair use. Alsup's ruling also revealed that Anthropic systematically acquired and destroyed thousands of used books to scan them into a private, digitized library for AI training. It was this claim that was recommended for a secondary, separate trial that Anthropic has decided to settle out of court. Copyright cases like these highlight the tension between creators and AI companies. AI companies have been pushing hard for fair use exceptions as they gobble up huge swaths of data to train their AI models and don't want to pay or wait to license them. Without legislation guiding how AI companies can develop and train AI, court cases like these have become important in shaping the future of the companies and the products that people use daily. Just like how we saw Alsup's Anthropic analysis referenced in Meta's case, each case helps build precedent that guides the legal guardrails and green lights around this technology.
[5]
Anthropic settles AI book piracy lawsuit
The terms of the settlement still aren't clear, but it stems from a copyright lawsuit filed by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson last year, which claimed Anthropic trained its Claude AI models on an open-source dataset filled with pirated materials. Anthropic scored a major victory in June when Judge William Alsup ruled that training AI models on legally purchased books counts as fair use, but he left the door open for further litigation. In July, Judge Alsup approved a class action lawsuit from US authors that accused Anthropic of violating copyright laws "by doing Napster-style downloading of millions of works." Anthropic was set to go to trial over the piracy claims in December, where it could've faced billions or more than $1 trillion in penalties, according to Wired.
[6]
Anthropic settles class action from US authors alleging copyright infringement
Aug 26 (Reuters) - Artificial intelligence company Anthropic said in a court filing on Tuesday it had settled a class action lawsuit from a group of U.S. authors who argued the company's AI training infringed their copyrights. Details of the settlement were not immediately available. A California judge said in a June ruling that Anthropic may have illegally downloaded as many as 7 million books from pirate websites, which could have made it liable for billions of dollars in damages if the authors' case was successful. Reporting by Blake Brittain; Editing by Chris Reese
[7]
Anthropic reaches a settlement over authors' class-action piracy lawsuit
Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The move means the company will avoid a potentially more costly ruling had the case regarding its use of copyrighted materials to train artificial intelligence tools moved forward. In June, Judge William Alsup handed down a ruling in the case, finding that Anthropic's move to train LLMs on copyrighted materials constituted fair use. However, the company's illegal and unpaid acquisition of those copyrighted materials was deemed available for the authors to pursue as a piracy case. With statutory damages for piracy beginning at $750 per infringed work and a library of pirated works estimated to number about 7 million, Anthropic could have been on the hook for billions of dollars. Litigation around AI and copyright is still shaking out, with no clear precedents emerging yet. This also isn't Anthropic's first foray into negotiating with creatives after using their work; it was sued by members of the music industry in 2023 and reached an agreement in that dispute earlier this year. Plus, the details of Anthropic's settlement have yet to be revealed. Depending on the number of authors who make a claim and the amount Anthropic agreed to pay out, either side could wind up feeling like the winner after the dust settles.
[8]
Book authors settle copyright lawsuit with AI company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[9]
Book authors settle copyright lawsuit with AI company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[10]
Book Authors Settle Copyright Lawsuit With AI Company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[11]
Book authors settle copyright lawsuit with AI company Anthropic - The Economic Times
Authors have reached a settlement with AI firm Anthropic after suing for copyright infringement over its chatbot training methods. Though a judge ruled the training was "fair use," the trial over how Anthropic acquired pirated books was pending. The settlement, called "historic", is expected to benefit all class members. A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalised next week. Anthropic declined to comment on Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. US District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under US copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them - but to turn a hard corner and create something different," Alsup wrote. A trio of writers - Andrea Bartz, Charles Graeber and Kirk Wallace Johnson - alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[12]
Anthropic Settles AI Lawsuit From Authors
Anthropic has settled a lawsuit from authors, who accused the Amazon-backed company of illegally downloading and copying their books to teach its AI system, in one of the first deals reached by creators over novel legal issues raised by the technology. "This historic settlement will benefit all class members," says Justin Nelson, a lawyer for the authors. "We look forward to announcing details of the settlement in the coming weeks." The deal was reached on Aug. 19 through a mediation. Details of the agreement are expected to be issued by Sept. 3. The thrust of the case -- and dozens of others involving AI companies -- was set to be decided by one question: Are AI companies covered by fair use, the legal doctrine in intellectual property law that allows creators to build upon copyrighted works without a license? On that issue, a judge found in June that Anthropic is on solid legal ground, at least with respect to training. Authors don't have the right to exclude the company from using their works to train its technology as long as it purchased the books, according to the June ruling. Like any reader who wants to be a writer, the AI tool learns and creates an entirely new work, the court reasoned. The technology is "among the most transformative many of us will see in our lifetimes," wrote U.S. District Judge William Alsup. Still, Anthropic was set to face a trial over illegally downloading seven million books to create a library that was used for training. The court concluded that Anthropic's later purchase of copies of the books it had earlier taken from the internet, seemingly to cover its tracks, did not absolve it of liability. The company faced massive damages stemming from that decision, an outcome that could also pave the way for Disney and Universal to seek a similar payout, depending on what they unearth in discovery over how Midjourney allegedly obtained copies of thousands of films that were repurposed to teach its image generator.
[13]
Anthropic and Authors Settle Copyright Infringement Lawsuit Targeting AI Training | PYMNTS.com
The settlement was disclosed in a Tuesday (Aug. 26) court filing by Anthropic and in a statement by the authors' attorney, Reuters reported Tuesday. Neither source described the terms of the settlement, according to the report. The authors' attorney, Justin Nelson, told Reuters, per the report: "This historic settlement will benefit all class members. We look forward to announcing details of the settlement in the coming weeks." The judge in the case gave the parties a Sept. 5 deadline to file requests for preliminary approval of the settlement, according to the report. The class action lawsuit against Anthropic was filed last year by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson and alleged that Anthropic used pirated books without permission to train its AI assistant, Claude, per the report. The judge ruled in June that the company may have illegally downloaded as many as 7 million books, according to the report. It was reported June 24 that the judge found that Anthropic made "fair use" of the authors' books in training Claude but that Anthropic's copying and storage of the 7 million pirated books in a "central library" violated the authors' copyrights and was not fair use. The judge ordered a trial in December to decide how much Anthropic owes for the infringement. PYMNTS reported in July that this ruling and another one in a copyright battle involving Meta seemed to embolden tech firms. The settlement reported Tuesday is the first settlement in a series of cases that allege that companies in the AI industry infringed on copyrights by using material for AI training, the Reuters report said. It was reported Tuesday that Japanese newspaper publishers Nikkei and Asahi Shimbun filed lawsuits accusing AI search engine Perplexity of copyright infringement. In that case, the companies alleged that Perplexity has, without their consent, "copied and stored article content from the servers of Nikkei and Asahi" and ignored a "technical measure" created to keep this from happening. The companies also claimed that Perplexity attributed inaccurate information to the newspapers' articles, damaging their credibility.
[14]
Anthropic settles class action from US authors alleging copyright infringement
(Reuters) - Artificial intelligence company Anthropic said in a court filing on Tuesday it had settled a class action lawsuit from a group of U.S. authors who argued the company's AI training infringed their copyrights. Details of the settlement were not immediately available. A California judge said in a June ruling that Anthropic may have illegally downloaded as many as 7 million books from pirate websites, which could have made it liable for billions of dollars in damages if the authors' case was successful. (Reporting by Blake Brittain; Editing by Chris Reese)
Anthropic has agreed to settle a class-action lawsuit brought by authors over the alleged use of pirated books to train its AI models, avoiding potentially devastating financial penalties.
In a significant development for the AI industry, Anthropic has agreed to settle a class-action lawsuit brought by authors over the alleged use of pirated books to train its AI models. The settlement, described as "historic" by the plaintiffs' lawyer, was announced in a filing with the Ninth Circuit Court of Appeals on Tuesday 1.
The lawsuit, known as Bartz v. Anthropic, was initially filed by three authors - Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber. They accused Anthropic of illegally using their works and potentially millions of other books to train its large language models 2. The case gained significant attention when U.S. District Judge William Alsup certified it as a class action, potentially allowing up to 7 million claimants to join 1.
In a previous ruling, Judge Alsup had determined that Anthropic's use of legally acquired books for AI training qualified as fair use. However, he also ruled that the company's acquisition of some works through "shadow libraries" like LibGen constituted piracy 3. This decision left Anthropic vulnerable to potentially massive statutory damages, with estimates ranging from billions to over $1 trillion 3.
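To see how those estimates arise, here is a rough back-of-the-envelope sketch rather than a figure from the court filings. It assumes the roughly 7 million allegedly pirated works cited in coverage of the case, the $750 statutory minimum per infringed work mentioned above, and the $150,000 statutory maximum that US copyright law allows for willful infringement:

```python
# Back-of-the-envelope estimate of the potential statutory damages exposure
# described above. The class size and per-work figures are illustrative;
# actual awards are set per work by a jury within the statutory range.

works = 7_000_000               # approximate number of allegedly pirated books
min_per_work = 750              # US statutory minimum per infringed work (USD)
willful_max_per_work = 150_000  # US statutory maximum for willful infringement (USD)

low_end = works * min_per_work
high_end = works * willful_max_per_work

print(f"Low end:  ${low_end:,}")   # $5,250,000,000 -- billions
print(f"High end: ${high_end:,}")  # $1,050,000,000,000 -- over $1 trillion
```

Even at the statutory minimum, a class of this size puts the floor in the billions, which is why observers described the potential award as a "doomsday" figure for the company.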
While the specific terms of the settlement have not been disclosed, the parties are expected to file a motion for preliminary approval by September 5 1. Justin A. Nelson, a lawyer representing the authors, stated that the settlement would "benefit all class members" 1.
The settlement marks a significant turn in one of the most high-profile AI copyright lawsuits to date. It allows Anthropic to avoid what could have been a financially devastating trial outcome 3. However, the company still faces other copyright-related legal challenges, including a dispute with major record labels over the use of song lyrics in AI training 3.
This case highlights the ongoing tension between content creators and AI companies over the use of copyrighted materials in AI training. While settlements don't set legal precedents, the outcome of this case is likely to be closely watched by the industry 4.
The settlement raises questions about how AI companies will approach data acquisition for training in the future. It also underscores the need for clearer legislation guiding AI development and training practices 4.
As the AI industry continues to evolve rapidly, this case serves as a reminder of the complex legal and ethical challenges surrounding the use of copyrighted materials in AI development. The full implications of this settlement for Anthropic and the broader AI sector remain to be seen as more details emerge in the coming weeks.