26 Sources
[1]
Authors celebrate "historic" settlement coming soon in Anthropic class action
Authors are celebrating a "historic" settlement expected to be reached soon in a class-action lawsuit over Anthropic's AI training data. On Tuesday, US District Judge William Alsup confirmed that Anthropic and the authors "believe they have a settlement in principle" and will file a motion for preliminary approval of the settlement by September 5. The settlement announcement comes after Alsup certified what AI industry advocates criticized as the largest copyright class action of all time. Although the lawsuit was filed by three authors -- Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber -- Alsup allowed up to 7 million claimants to join based on the large number of books that Anthropic may have illegally downloaded to train its AI models. If every author in the class filed a claim, industry advocates warned, it would "financially ruin" the entire AI industry. It's unclear if the class certification prompted the settlement or what terms authors agreed to, but according to court filings, the settlement terms are binding. A lawyer representing authors, Justin A. Nelson, told Ars that more details would be revealed soon, and he confirmed that the suing authors are claiming a win for possibly millions of class members. "This historic settlement will benefit all class members," Nelson said. "We look forward to announcing details of the settlement in the coming weeks." Ars could not immediately reach Anthropic for comment, but Anthropic had previously argued that the lawsuit could doom the emerging company, which was started by former OpenAI employees in 2021.
[2]
Anthropic settles AI book-training lawsuit with authors
Anthropic has settled a class action lawsuit with a group of fiction and non-fiction authors, as announced in a filing on Tuesday with the Ninth Circuit Court of Appeals. Anthropic had won a partial victory in a lower court ruling, and was in the process of appealing that ruling. No details of the settlement were made public, and Anthropic did not immediately respond to a request for comment. Called Bartz v. Anthropic, the case deals with Anthropic's use of books as training material for its large language models. The court had ruled that Anthropic's use of the books qualified as fair use, but because many of the books were pirated, Anthropic still faced significant financial penalties for its conduct connected to the case. Nonetheless, Anthropic had applauded the earlier ruling, framing it as a victory for generative AI models. "We believe it's clear that we acquired books for one purpose only -- building large language models -- and the court clearly held that use was fair," the company told NPR after the ruling in June.
[3]
Anthropic Settles High-Profile AI Copyright Lawsuit Brought By Book Authors
Anthropic faced the prospect of more than $1 trillion in damages, a sum that could have threatened the company's survival if the case went to trial. Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what may have been a financially devastating outcome in court. The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately respond to requests for comment. Anthropic declined to comment. In 2024, three book writers, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, sued Anthropic, alleging the startup illegally used their work to train its artificial intelligence models. In June, California district court judge William Alsup issued a summary judgment in Bartz v. Anthropic largely siding with Anthropic, finding that the company's usage of the books was "fair use," and thus legal. But the judge ruled that the manner in which Anthropic had acquired some of the works, by downloading them through so-called "shadow libraries," including a notorious site called LibGen, constituted piracy. Alsup ruled that the book authors could still take Anthropic to trial in a class action suit for pirating their works; the legal showdown was slated to begin this December. Statutory damages for this kind of piracy start at $750 per infringed work, according to US copyright law. Because the library of books amassed by Anthropic was thought to contain approximately seven million works, the AI company was potentially facing court-imposed penalties amounting to billions of dollars, or even more than $1 trillion. "It's a stunning turn of events, given how Anthropic was fighting tooth and nail in two courts in this case.
And the company recently hired a new trial team," says Edward Lee, a law professor at Santa Clara University who closely follows AI copyright litigation. "But they had few defenses at trial, given how Judge Alsup ruled. So Anthropic was staring at the risk of statutory damages in 'doomsday' amounts." Most authors who may have been part of the class action lawsuit were just starting to receive notice that they qualified to participate. The Authors Guild, a trade group representing professional writers, sent out a notice alerting authors that they might be eligible earlier this month, and lawyers for the plaintiffs were scheduled to submit a "list of affected works" to the court on September 1. This means that many of these writers were not privy to the negotiations that took place. "The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled," says James Grimmelmann, a professor of digital and internet law at Cornell University. "That will be a very important barometer of where copyright owner sentiment stands." Anthropic is still facing a number of other copyright-related legal challenges. One of the most high-profile disputes involves a group of major record labels, including Universal Music Group, which allege that the company illegally trained its AI programs on copyrighted lyrics. The plaintiffs recently filed to amend their case to allege that Anthropic had used the peer-to-peer file sharing service BitTorrent to download songs illegally. Settlements don't set legal precedent, but the details of this case will likely still be watched closely as dozens of other high-profile AI copyright cases continue to wind through the courts.
[4]
Anthropic Settles With Authors Over Pirated Material: What Does That Mean for Other AI Lawsuits?
Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its Claude AI models. On Tuesday, the parties in the lawsuit filed a motion indicating their agreement with the 9th US Circuit Court of Appeals. We don't yet know the terms of the settlement, but we could know more as soon as next week. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon. "This historic settlement will benefit all class members," he said. "We look forward to announcing details of the settlement in the coming weeks." Anthropic didn't respond to a request for comment by the time of publication. This settlement is the latest update in a string of legal moves and rulings between the AI company and authors. Earlier this summer, Senior US District Judge William Alsup ruled Anthropic's use of the copyrighted materials was justifiable as fair use -- a concept in copyright law that allows people to use copyrighted content without the rights holder's permission for specific purposes, like education. The ruling was the first time a court sided with an AI company and said its use of copyrighted material qualified as fair use, though Alsup said this may not always be true in future cases. Two days after Anthropic's victory, Meta won a similar case under fair use. Alsup's ruling also revealed that Anthropic systematically acquired and destroyed thousands of used books to scan them into a private, digitized library for AI training. It was this claim that was recommended for a secondary, separate trial that Anthropic has decided to settle out of court. In class action suits, the terms of a settlement need to be reviewed and approved by the court.
The settlement means both groups "avoid the cost, delay and uncertainty associated with further litigating the case," Christian Mammen, an intellectual property lawyer and San Francisco office managing partner at Womble Bond Dickinson, told CNET. "Anthropic can move forward with its business without being the first major AI platform to have one of these copyright cases go to trial," Mammen said. "And the plaintiffs can likely receive the benefit of any financial or non-financial settlement terms sooner. If the case were litigated through trial and appeal, it could last another two years or more." Copyright cases like these highlight the tension between creators and AI developers. AI companies have been pushing hard for fair use exceptions as they gobble up huge swaths of data to train their models and don't want to pay or wait to license them. Without legislation guiding how companies can develop and train AI, court cases like these have become important in shaping the future of the products people use daily. "The terms of this settlement will likely become a data point or benchmark for future negotiations and, possibly, settlements in other AI copyright cases," said Mammen. Every case is different and needs to be weighed on its merits, he added, but it still could be influential. There are still big questions about how copyright law should be applied in the age of AI. Just like how we saw Alsup's Anthropic analysis referenced in Meta's case, each case helps build precedent that guides the legal guardrails and green lights around this technology. The settlement will bring this specific case to an end, but it doesn't give any clarity to the underlying legal dilemmas that AI raises.
"This remaining uncertainty in the law could open the door to a further round of litigation," Mammen said, "involving different plaintiffs and different defendants, with similar legal issues but different facts."
[5]
Anthropic Will Settle Lawsuit With Authors Over Pirated AI Training Materials
Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its AI models. The parties in the lawsuit filed a motion indicating the agreement with the Ninth Circuit of the US Court of Appeals on Tuesday. We don't yet know the terms of the settlement. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon. "This historic settlement will benefit all class members," he said. "We look forward to announcing details of the settlement in the coming weeks." Anthropic did not respond to a request for comment by the time of publication. This settlement is the latest update in a string of legal moves and rulings between the AI company and authors. Earlier this summer, US Senior District Judge William Alsup ruled that Anthropic's use of the copyrighted materials was justifiable as fair use. Fair use is a concept in copyright law that allows people to use copyrighted content without the rights holder's permission for specific purposes, like education. The ruling was the first time a court sided with an AI company and said its use of copyrighted material qualified as fair use, though Alsup took care to call out in his ruling that this may not always be true in future cases. Two days after Anthropic's victory, Meta won a similar case under fair use. Alsup's ruling also revealed that Anthropic systematically acquired and destroyed thousands of used books to scan them into a private, digitized library for AI training. It was this claim that was recommended for a secondary, separate trial that Anthropic has decided to settle out of court. Copyright cases like these highlight the tension between creators and AI companies. AI companies have been pushing hard for fair use exceptions as they gobble up huge swaths of data to train their AI models and don't want to pay or wait to license them.
Without legislation guiding how AI companies can develop and train AI, court cases like these have become important in shaping the future of the companies and the products that people use daily. Just like how we saw Alsup's Anthropic analysis referenced in Meta's case, each case helps build precedent that guides the legal guardrails and green lights around this technology.
[6]
Anthropic agrees to settle copyright infringement class action suit - what it means
The authors claim Anthropic trained AI on their pirated work. AI startup Anthropic has agreed to settle a class action lawsuit brought by three authors over the tech company's misuse of their work to train its Claude chatbot. The writers claimed that Anthropic used the authors' pirated works to train Claude, its family of large language models (LLMs), on prompt generation. The AI startup negotiated a "proposed class settlement," Anthropic announced Tuesday, to forgo a trial determining how much it would owe for the infringement. The preliminary settlement's details are scarce. In June, a judge ruled that Anthropic's legal purchase of books to train its chatbot was fair use -- that is, free to use without payment or permission from the copyright holder. However, some of Anthropic's tactics, like using a website called LibGen, constituted piracy, the judge ruled. Anthropic could have been forced to pay over $1 trillion in damages over piracy claims, Wired reports. The settlement highlights one of the many dilemmas AI companies face as they train their models on material for prompt generation and query responses. To offer up succinct, helpful responses to a user, an AI chatbot must be trained on a multitude of data. GPT-4, for example, is reported to have roughly 1 trillion parameters. Anthropic, on the other hand, is said to have accumulated a library of over 7 million works to train Claude, according to Wired's report. But where does that data come from, who owns it, and do creators still have control over their work in a post-AI world? AI companies like OpenAI, the maker of ChatGPT, have said that they only use publicly available content to train their models, i.e., material scraped from the internet -- everything from social media posts to blogs.
Several authors, artists, and creators, however, have sued AI companies for misusing their work to train the LLMs behind chatbots like ChatGPT, Gemini, and Claude. The settlement, which hints at the future of copyright infringement in the age of AI, is expected to be finalized by Sept. 3, according to court documents. In May, the Trump administration fired the head of the US Copyright Office, Shira Perlmutter, shortly after her office published its latest recommendation on AI training and fair use. In its AI Action Plan, released in July 2025, the Trump administration did not emphasize specific recommendations on AI and copyright law, despite ongoing debate about the topic.
[7]
Anthropic settles AI book piracy lawsuit
The terms of the settlement still aren't clear, but it stems from a copyright lawsuit filed by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson last year, which claimed Anthropic trained its Claude AI models on an open-source dataset filled with pirated materials. Anthropic scored a major victory in June when Judge William Alsup ruled that training AI models on legally purchased books counts as fair use, but he left the door open for further litigation. In July, Judge Alsup approved a class action lawsuit from US authors that accused Anthropic of violating copyright laws "by doing Napster-style downloading of millions of works." Anthropic was set to go to trial over the piracy claims in December, where it could've faced billions or more than $1 trillion in penalties, according to Wired.
[8]
Anthropic Settles Lawsuit With Authors Over Use of Pirated Books for AI Training
Anthropic has settled a class-action lawsuit brought by a group of authors over the AI company's alleged use of pirated books for AI training. The motion was filed in the 9th US Circuit Court of Appeals on Tuesday, but details of the settlement have yet to be made public. "This historic settlement will benefit all class members," Justin Nelson, lawyer for the authors, said in a statement. "We look forward to announcing details of the settlement in the coming weeks." Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic last year. They accused Anthropic of training its AI models on pirated versions of their copyrighted books. In June, Judge William Alsup partly sided with Anthropic, stating that its use of copyrighted books to train AI models qualified as fair use since the output was "exceedingly transformative" and not a substitute for the original works. However, he allowed the authors to pursue a trial on how Anthropic obtained the books. "Anthropic had no entitlement to use pirated copies," he said. Anthropic purchased paperback copies of books, scanned and digitized their contents, and then destroyed the printed originals. But it also downloaded millions of unauthorized copies of books from online troves of pirated works to speed up training Claude and retained those copies. As Wired reports, the trial was set to begin in December. Since Anthropic had downloaded pirated copies of millions of books, losing the trial could have cost it billions or maybe even a trillion dollars. The settlement agreement is now expected to be completed by Sept. 3. Anthropic is just one among the many AI companies accused of copyright violations. News Corp. is suing Perplexity for substituting and repackaging its work.
PCMag's parent, Ziff Davis, has also filed a lawsuit against OpenAI after it found ChatGPT had "relentlessly reproduced exact copies and created derivatives" of its articles.
[9]
Anthropic's surprise settlement adds new wrinkle in AI copyright war
Aug 27 (Reuters) - Anthropic's class action settlement with a group of U.S. authors this week was a first, but legal experts said the case's distinct qualities complicate the deal's potential influence on a wave of ongoing copyright lawsuits against other artificial-intelligence focused companies like OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O). Amazon (AMZN.O)-backed Anthropic was under particular pressure, with a trial looming in December after a judge found it liable for pirating millions of copyrighted books. The terms of the settlement, which require a judge's approval, are not yet public. And U.S. courts have just begun to wrestle with novel copyright questions related to generative AI, which could prompt other defendants to hold out for favorable rulings. Anthropic was in "a unique situation," said Cornell Law School professor James Grimmelmann, with as much as $1 trillion in piracy damages at stake in its worst-case scenario. "It's possible that this settlement could be a model for other cases, but it really depends on the details," he said. The authors in Anthropic's case accused the AI company of using millions of pirated books without permission or compensation to teach its AI assistant Claude to respond to human prompts. Anthropic, like other AI copyright defendants, countered that its actions were legal under the doctrine of fair use, which allows the use of copyrighted works without permission in some circumstances. U.S. District Judge William Alsup in San Francisco ruled in June that Anthropic made fair use of the authors' work to train its AI, but said the company violated copyright law by saving pirated books to a "central library" that would not necessarily be used for AI training. That created potential liabilities of billions of dollars for Anthropic, which faced an upcoming trial in December.
The two sides told the court on Tuesday that they had settled the case in principle, the first accord reached in a copyright lawsuit over generative AI training. Alsup, who must approve the settlement, ordered the parties to submit details to the court by Sept. 5. Chris Buccafusco, a law professor at Duke University, said he was surprised Anthropic chose to settle. Anthropic was "in a position of decent strength" because of Alsup's fair-use determination, Buccafusco said, despite the piracy decision. "Given their willingness to settle, you have to imagine the dollar signs are flashing in the eyes of plaintiffs' lawyers around the country," he said. Anthropic and an attorney for the authors did not immediately respond to requests for comment on Wednesday. Universal Music Group (UMG.AS), which has separately sued Anthropic over its alleged misuse of song lyrics to train Claude, also did not respond to a request for comment. OpenAI, Meta and Microsoft did not respond to questions about how the Anthropic settlement could shape their ongoing AI litigation. FAIR USE QUESTIONS The fate of the pending generative AI lawsuits could hinge on fair use, a still-evolving concept that no court had addressed in the cases until June. Grimmelmann said Anthropic's settlement removes an early opportunity for a federal appeals court to consider fair use and issue a decision that would be binding on other cases and likely tee up the issue for the U.S. Supreme Court. Two days after Alsup's ruling on fair use and piracy, another judge in the same San Francisco court took a different approach in a similar author lawsuit against Meta. The decision by U.S. District Judge Vince Chhabria mostly ignored piracy issues like those Alsup addressed, but found that Meta's conduct may not be protected because its AI could be used to "flood the market" with replacements for the authors' work.
Chhabria ruled for Meta on fair use but said he did so only because "these plaintiffs made the wrong arguments." Alsup, meanwhile, downplayed fears of market displacement in the Anthropic case. An attorney for the authors suing Meta declined to comment on Anthropic's settlement. Reuters News has licensed its content to Meta for AI use, and its parent company Thomson Reuters (TRI.TO) is involved in a lawsuit against Ross Intelligence that argues Ross misused copyrighted material to train an AI-powered legal search engine. Decisions on fair use in dozens of other AI cases are unlikely before next year. Given the stakes, unpredictability surrounding the rulings could provide an impetus to settle, experts said. But it could also encourage them to hold out in hopes of a sweeping win like Google (GOOGL.O) secured in 2016, when the Supreme Court left intact its fair-use victory in the dispute over the Google Books project. "The one thing that was clearly going to help was an across-the-board, as-a-matter-of-law fair use ruling" in the Anthropic case, Buccafusco said. "That would have been the real solution for all of the AI platforms." Reporting by Blake Brittain in Washington; Editing by David Bario
[10]
Anthropic settles class action from US authors alleging copyright infringement
Aug 26 (Reuters) - Artificial intelligence company Anthropic said in a court filing on Tuesday it had settled a class action lawsuit from a group of U.S. authors who argued the company's AI training infringed their copyrights. Details of the settlement were not immediately available. A California judge said in a June ruling that Anthropic may have illegally downloaded as many as 7 million books from pirate websites, which could have made it liable for billions of dollars in damages if the authors' case was successful. Reporting by Blake Brittain; Editing by Chris Reese
[11]
Book authors settle copyright lawsuit with AI company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[12]
Anthropic reaches a settlement over authors' class-action piracy lawsuit
Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The move means the company will avoid a potentially more costly ruling if the case regarding its use of copyrighted materials to train artificial intelligence tools had moved forward. In June, Judge William Alsup handed down a split decision in the case, ruling that Anthropic's move to train LLMs on copyrighted materials constituted fair use. However, the company's illegal and unpaid acquisition of those copyrighted materials was deemed available for the authors to pursue as a piracy case. With statutory damages for piracy beginning at $750 per infringed work and a library of pirated works estimated to number about 7 million, Anthropic could have been on the hook for billions of dollars. Litigation around AI and copyright is still shaking out, with no clear precedents emerging yet. This also isn't Anthropic's first foray into negotiating with creatives after using their work; it was sued by members of the music industry in 2023 and reached an agreement earlier this year. Plus, the details of Anthropic's settlement also have yet to be revealed. Depending on the number of authors who make a claim and the amount Anthropic agreed to pay out, either side could wind up feeling like the winner after the dust settles.
[13]
Anthropic Reaches Settlement in Landmark AI Copyright Case with US Authors
Artificial intelligence company Anthropic has reached a settlement in a high-profile class action lawsuit brought by a group of U.S. authors who alleged the company used pirated books to train its AI systems without permission or compensation. The agreement, disclosed in a court filing on Tuesday, avoids a potentially massive damages trial that had been scheduled for December and has a potential knock-on effect on the debate around whether human creatives should be compensated when their work is used to train AI. In June, U.S. District Judge William Alsup ruled that while Anthropic's use of the materials to train its AI assistant Claude qualified as "fair use" because it was "quintessentially transformative," the company violated authors' rights by creating a central library of pirated books. The judge ordered that the question of damages for this acquisition practice proceed to trial. The lawsuit, filed last year by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, accused Anthropic of unlawfully downloading approximately seven million books from so-called "shadow libraries," including sites like LibGen. The plaintiffs argued that this constituted large-scale copyright infringement, with statutory damages under U.S. law potentially reaching billions of dollars. On Tuesday, both parties informed the court that they had negotiated a proposed class settlement, with requests for preliminary approval due by early September. "This historic settlement will benefit all class members," says Justin Nelson, an attorney for the authors. He added that details of the settlement would be announced "in the coming weeks." Anthropic has declined to comment. Legal experts tell Reuters the resolution could influence the growing number of copyright lawsuits involving AI companies. "The devil is in the details of the settlement and future litigation about the terms of the settlement," says Shubha Ghosh, a professor at Syracuse University College of Law.
The Authors Guild recently began notifying writers who may qualify to participate in the settlement, with a list of affected works expected to be submitted to the court this week. Anthropic, which has faced other copyright-related legal challenges -- including a separate case brought by major record labels alleging illegal use of song lyrics -- did not disclose any financial terms of the agreement. Settlements of this kind do not set legal precedent but may shape the course of ongoing litigation in the fast-evolving field of AI and copyright.
[14]
Book authors hail 'historic settlement' as Anthropic dodges trial on how it actually acquired millions of copyrighted works to ingest
A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[15]
Anthropic landmark copyright settlement with authors may set a precedent for the whole industry
Anthropic has settled a copyright lawsuit that risked exposing the company to billions of dollars' worth of damages. The case, which concerns Anthropic's use of millions of pirated books to train its large language model, Claude, was due to go to trial on December 1. However, in a Tuesday court filing, Anthropic and the authors jointly informed a San Francisco federal court that they had reached a class-wide settlement. They requested that discovery be paused and existing deadlines vacated while the agreement is finalized. The amount of the settlement was not immediately disclosed. It's also still unclear how the settlement will be distributed between various copyright holders, which could include large publishing houses as well as individual authors. The case was the first certified class action against an AI company over the use of copyrighted materials, and the quick settlement, which came just one month after the judge ruled the case could proceed to trial as a class action, is a win for the authors, according to legal experts. "The authors' side will be reasonably happy with this result, because they've essentially forced Anthropic into making a settlement," Luke McDonagh, an associate professor of law at LSE, told Fortune. "This might be the first domino to fall." While the settlement doesn't set a legal precedent, it could serve to legitimize the authors' claims and may set a business precedent for similar cases, he said. Crucially, the case didn't rest on the use of copyrighted material for AI training generally -- and, in fact, the trial judge ruled that training AI models on legally acquired works would likely qualify as "fair use" -- but on how AI companies acquired the works they used for training. The plaintiffs, who include authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, alleged that millions of these works were obtained from piracy websites and shadow libraries in direct and knowing violation of copyright law. 
The judge presiding over the case, William Alsup, had suggested in a ruling made in July that even if the AI training use might be considered fair use, this initial acquisition of works was illegitimate and would need compensation. If the court found that Anthropic willfully violated copyright law, statutory damages could have reached $750 to $150,000 per work, a scenario that Santa Clara law professor Ed Lee said could have "at least the potential for business-ending liability." "Even though they will be paying out a considerable amount of money, from Anthropic's point of view, they'll be seeing it as simply dealing with a legal problem that they weren't careful enough about the way they used copyright works," McDonagh said. He noted that Anthropic had, in addition to allegedly using libraries of pirated books, also purchased some copies of books that it later digitized and fed to its AI models. Alsup had ruled that the use of these books for AI training was "fair use" that didn't require additional licensing from the copyright holders. "If they bought the 7 million books, then they could have actually used them to train their programs, and that would have been a fair use. The judge is objecting to the fact that they didn't pay for the books in the first place," McDonagh said. But Anthropic wasn't the only company to acquire large numbers of books by using pirated book libraries. Meta has also been accused of using the same shadow library, Library Genesis (LibGen), to download books to train its AI models. If the settlement is approved and the risk to Anthropic of further legal action is sufficiently mitigated, it could create a roadmap for how other companies deal with similar cases. AI companies have not previously acknowledged they have any liability when it comes to the use of copyrighted materials, so the settlement could set a precedent for how much copyright owners are compensated for works used for AI training. 
"Some companies are already engaging in licensing content from publishers. So this may push us forward towards a situation in which the AI companies reach settlements that effectively retrospectively pay license fees for what they've already used," McDonagh said. "Perhaps the beginning of a licensing future for the AI industry where authors can be paid for the use of their copyrighted works in the training of AI." Representatives for Anthropic did not immediately respond to a request for comment from Fortune.
[16]
Anthropic settles AI copyright lawsuit with book authors
A settlement is coming next week in a class action copyright lawsuit brought by book authors against artificial intelligence company Anthropic. Artificial intelligence company Anthropic and a group of book authors settled a copyright infringement lawsuit. Justin Nelson, the lawyer representing the book authors, said it is a "historic settlement [that] will benefit all class members". The terms of the proposed class settlement will be finalised next week, according to a federal appeals court filing on Tuesday. Anthropic declined to comment on the deal. US District Judge William Alsup ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. However, the company was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. Alsup said in the June ruling that the AI system's distilling of thousands of written works to be able to produce its own passages of text qualified as "fair use" under US copyright law because it was "quintessentially transformative". "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. Anthropic is facing other copyright-related legal challenges, including one from Universal Music Group, which alleges that Anthropic illegally trained its AI programs on copyrighted lyrics.
[17]
Anthropic settles high-profile class-action lawsuit alleging copyright infringement - SiliconANGLE
Artificial intelligence startup Anthropic PBC said in a court filing on Tuesday that it reached a resolution in a class action lawsuit with a group of prominent United States authors, marking a turning point in one of the most significant AI copyright lawsuits in history. The move will allow the company to avoid being potentially crushed by a poor outcome in the case where the authors alleged the company infringed on their copyrights by downloading as many as 7 million books from pirate websites. According to Bloomberg Law, the settlement comes after Anthropic informed both the district court and the appeals court that the pursuit of billions of dollars in damages in the class action lawsuit posed a "death knell" for the company, compelling it to agree to a potentially unfair settlement. Santa Clara Law Professor Edward Lee estimated the potential damages Anthropic would have faced could reach upwards of $900 billion if a jury found the company's infringement willful. Compared to the company's own estimated $5 billion in revenue this year, while operating at a loss of billions of dollars, such a judgment would be devastating. Anthropic was last reported seeking $10 billion in a new funding round at a $170 billion valuation earlier this month. The settlement follows a win scored for Anthropic in June when U.S. District Judge William Alsup of the District Court for the Northern District of California ruled in favor of the company that it had not broken the law by using legally purchased books that were later digitized without the authors' permission. However, in the same ruling, Alsup noted: "Anthropic had no entitlement to use pirated copies for its central library." The terms of the settlement were not described in the court filing. "This historic settlement will benefit all class members," the authors' attorney Justin Nelson said in a statement. 
"We look forward to announcing details of the settlement in the coming weeks." Alsup ordered the parties to file for preliminary approval of the settlement by Sep. 5. Although the settlement resolves this class action, it does not clear the company of other pending copyright suits it is embroiled in. Reddit recently brought suit against Anthropic, accusing it of unauthorized scraping to train its AI model, and Universal Music Group, among other music labels, also filed suit over song lyric infringement. "Recent revelations about the massive and deliberate exploitation of pirated content for LLM training by largest AI vendors -- are just the tip of the iceberg of the unexpectedly nasty and painful surprises, more are looming on the horizon," Dr. Ilia Kolochenko, chief executive of ImmuniWeb SA, a fellow at the British Computer Society and a practicing lawyer specializing in AI, data privacy and protection, told SiliconANGLE. Anthropic is not the only AI company in legal crosshairs. Dozens of similar lawsuits have peppered the AI copyright infringement landscape with filings against the company's competitors, including OpenAI, as well as Meta Platforms Inc. and the AI search engine Perplexity Inc. Infringement claims continue to roll in against AI developers from publishers, authors, media companies and music labels. "The current business model of many AI companies (i.e., grab everyone's intellectual property without paying, claim that you do this for the sustainable innovation and everyone's well-being, and then make billions for founders and shareholders) may pretty soon become economically unviable," added Kolochenko. The response to widespread scraping by AI companies from rights-holders has been lawsuits, while platforms have opted to protect their content with technical barriers. For instance, Reddit blocked its content from the Internet Archive, companies use Cloudflare to stop AI bots and some artists use image poisoning techniques such as Nightshade.
[18]
Anthropic settles with authors over class-action copyright lawsuit
The AI giant behind the Claude chatbot was potentially up for billions in damages. Generative AI giant Anthropic has decided to settle with authors over a copyright lawsuit they filed against the company last year. This comes a month after Anthropic told courts that the class action could kill the company, claiming that the plaintiffs' demands created a "death knell" situation, regardless of the case's legal merit. In a joint filing yesterday (26 August) - a year after the saga began - Anthropic and the authors have decided to stay the discovery process, and said that they expect to finalise and execute a settlement agreement before 3 September. In their lawsuit filed in August 2024, the three plaintiffs - authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson - said Anthropic used pirated versions of their copyrighted material to train Claude. They claimed that "largescale theft of copyrighted works" is a key component of the company's business model. "Anthropic has not even attempted to compensate plaintiffs for the use of their material. In fact, Anthropic has taken multiple steps to hide the full extent of its copyright theft," they claimed at the time. The plaintiffs sought to permanently stop Anthropic from using their copyrighted work. The case was certified as a class action in July, the first in a copyright litigation against AI companies. The class included copyright owners whose works were in LibGen and PiLiMi - shadow libraries of pirated materials - which were downloaded by Anthropic. It was estimated that damages could easily run into the hundreds of billions, depending on the number of works allegedly infringed by Anthropic in the class of copyright holders and the amount which could be awarded to each work potentially infringed. This comes as a reported fresh $10bn funding raise was set to more than triple the company's valuation. 
After the class action certification, Anthropic made repeated attempts to avoid a trial whose outcome could potentially sink the company. However, that was rejected by a US district judge, who said in an August ruling, "If Anthropic loses big it will be because what it did wrong was also big." However, Anthropic has won elsewhere in a copyright lawsuit. Earlier this year, a California judge sided with the AI company by denying a motion for an injunction filed by Universal Music Group, Concord and Capitol CMG, among several other large music publishers, that would have stopped the start-up from using their song lyrics to train its AI models. Copyright holders have been battling AI giants ever since GenAI became mainstream. In June, a group of authors - including Pulitzer Prize winner Kai Bird - sued Microsoft, accusing the tech giant of using copyrighted works to train its large language model. This comes in addition to suits from several news publications battling AI companies such as OpenAI and Cohere over the alleged theft of their work.
[19]
Book authors settle copyright lawsuit with AI company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[20]
Book authors settle copyright lawsuit with AI company Anthropic
SAN FRANCISCO -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[21]
Book Authors Settle Copyright Lawsuit With AI Company Anthropic
SAN FRANCISCO (AP) -- A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalized next week. Anthropic declined comment Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. U.S. District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote. A trio of writers -- Andrea Bartz, Charles Graeber and Kirk Wallace Johnson -- alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[22]
Book authors settle copyright lawsuit with AI company Anthropic - The Economic Times
Authors have reached a settlement with AI firm Anthropic after suing for copyright infringement over its chatbot training methods. Though a judge ruled the training was "fair use," the trial over how Anthropic acquired pirated books was pending. The settlement, called "historic", is expected to benefit all class members. A group of book authors has reached a settlement agreement with artificial intelligence company Anthropic after suing the chatbot maker for copyright infringement. Both sides of the case have "negotiated a proposed class settlement," according to a federal appeals court filing Tuesday that said the terms will be finalised next week. Anthropic declined to comment on Tuesday. A lawyer for the authors, Justin Nelson, said the "historic settlement will benefit all class members." In a major test case for the AI industry, a federal judge ruled in June that Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company was still on the hook and was scheduled to go to trial over how it acquired those books by downloading them from online "shadow libraries" of pirated copies. US District Judge William Alsup of San Francisco said in his June ruling that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as "fair use" under US copyright law because it was "quintessentially transformative." "Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them - but to turn a hard corner and create something different," Alsup wrote. A trio of writers - Andrea Bartz, Charles Graeber and Kirk Wallace Johnson - alleged in their lawsuit last year that Anthropic's practices amounted to "large-scale theft," and that the San Francisco-based company "seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
[23]
Anthropic Settles AI Lawsuit From Authors
Anthropic has settled a lawsuit from authors, who accused the Amazon-backed company of illegally downloading and copying their books to teach its AI system, in one of the first deals reached by creators over novel legal issues raised by the technology. "This historic settlement will benefit all class members," says Justin Nelson, a lawyer for the authors. "We look forward to announcing details of the settlement in the coming weeks." The deal was reached on Aug. 19 through a mediation. Details of the agreement are expected to be issued by Sept. 3. The thrust of the case -- and dozens of others involving AI companies -- was set to be decided by one question: Are AI companies covered by fair use, the legal doctrine in intellectual property law that allows creators to build upon copyrighted works without a license? On that issue, a judge found in June that Anthropic is on solid legal ground, at least with respect to training. Authors don't have the right to exclude the company from using their works to train its technology as long as they purchased the books, according to the June ruling. Like any reader who wants to be a writer, the AI tool learns and creates an entirely new work, the court reasoned. The technology is "among the most transformative many of us will see in our lifetimes," wrote U.S. District Judge William Alsup. Still, Anthropic was set to face a trial over illegally downloading seven million books to create a library that was used for training. That it later purchased copies of books it had earlier stolen off the internet doesn't absolve it of liability, the court concluded. The company faced massive damages stemming from the decision that could lead to Disney and Universal getting a similar payout depending on what they unearth in discovery over how Midjourney allegedly obtained copies of thousands of films that were repurposed to teach its image generator.
[24]
Anthropic and Authors Settle Copyright Infringement Lawsuit Targeting AI Training | PYMNTS.com
The settlement was disclosed in a Tuesday (Aug. 26) court filing by Anthropic and in a statement by the authors' attorney, Reuters reported Tuesday. Neither source described the terms of the settlement, according to the report. The authors' attorney, Justin Nelson, told Reuters, per the report: "This historic settlement will benefit all class members. We look forward to announcing details of the settlement in the coming weeks." The judge in the case gave the parties a Sept. 5 deadline to file requests for preliminary approval of the settlement, according to the report. The class action lawsuit against Anthropic was filed last year by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson and alleged that Anthropic used pirated books without permission to train its AI assistant, Claude, per the report. The judge ruled in June that the company may have illegally downloaded as many as 7 million books, according to the report. It was reported June 24 that the judge found that Anthropic made "fair use" of the authors' books in training Claude but that Anthropic's copying and storage of the 7 million pirated books in a "central library" violated the authors' copyrights and was not fair use. The judge ordered a trial in December to decide how much Anthropic owes for the infringement. PYMNTS reported in July that this ruling and another one in a copyright battle involving Meta seemed to embolden tech firms. The settlement reported Tuesday is the first settlement in a series of cases that allege that companies in the AI industry infringed on copyrights by using material for AI training, the Reuters report said. 
It was reported Tuesday that Japanese newspaper publishers Nikkei and Asahi Shimbun filed lawsuits accusing AI search engine Perplexity of copyright infringement. In that case, the companies alleged that Perplexity has, without their consent, "copies and stored article content from the servers of Nikkei and Asahi" and ignored a "technical measure" created to keep this from happening. The companies also claimed that Perplexity attributed inaccurate information to the newspapers' articles, damaging their credibility.
[25]
Anthropic Settles US Lawsuit on Pirated Books' Use for AI Training
MediaNama's Take: Access to vast data for training large language models remains a polarising issue in the AI world. Under the doctrine of "fair use", the AI companies may argue that they need to access vast databases to train their LLMs. This may not only be limited to "publicly available" datasets but also datasets obtained through illegal means, such as Books3, a database of over 196,640 books, including pirated books. California's court findings show that Anthropic co-founder Ben Mann knew the database included pirated content, raising major ethical concerns for the industry. Therefore, prior consent and fair compensation must be non-negotiable in the AI model training process. Without these ethical guardrails, AI companies are not only violating the law but eroding the trust of creators, authors, artists, publishers, and researchers whose work stands as the backbone of these AI models. This calls for an ethical framework that respects intellectual property while supporting AI innovation. Anthropic has reached a settlement in the class action suit filed against it by a group of authors alleging that it used their copyrighted works to train its AI models. The case, which is known as Bartz vs Anthropic, is expected to be finalised on September 3, 2025, and the hearing is likely to happen during the week of September 8. Two months ago, in June, a U.S. District Court in Northern California ruled that using purchased copyrighted works to train AI models qualifies as fair use under US copyright law. The court also found that Anthropic used copyrighted books downloaded from piracy websites and ordered a separate trial on the illegal use of pirated copies. The current settlement stems from that trial. In 2024, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging the AI company illegally used their work to train its large language models (LLMs). 
In June 2025, US District Judge William Alsup of the Northern District of California issued a judgement that largely favoured Anthropic, ruling that its use of purchased copyrighted books qualified as fair use under US copyright law. The judge also noted that Anthropic committed piracy when it downloaded some of the works from shadow libraries, including the site Library Genesis, a.k.a. LibGen, and Pirate Library Mirror. Notably, Anthropic changed its approach to acquiring copyrighted works only in February 2024, when it began purchasing books in bulk from retailers and distributors to build its central research library. As part of the training process, the company removed repetitive information from the copyrighted books, such as headers, footers, page numbers, and duplicate copies of the same book, before tokenising the text. In natural language processing (NLP), tokenisation splits input text into tokens, such as words, subwords, or characters. Section 107 of the U.S. Copyright Act sets out the fair use exemptions and limitations on exclusive rights to copyrighted material, and courts weigh four statutory factors to determine whether a given use qualifies as fair use. Under this section, reproduction of copyrighted works for criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research is not an infringement of copyright. This is akin to India's copyright exemptions under the fair use doctrine. The authors argued, however, that the mere act of transforming physical copies of the books into digital formats was itself a violation of copyright. The court found that converting the physical copies into digital copies qualified as transformative work. 
The court also noted that the company has the right to store digital versions in a central library for ordinary use, and it did not create new copies but destroyed the print originals and replaced them with digital versions. No evidence shows that Anthropic shared these copies outside the organisation. [Read the court order here.] The Class Counsel appointed by the California court is supposed to publish the list of illegally downloaded works on September 1, 2025. The aggrieved authors can submit their information on Lieff Cabraser Heimann & Bernstein's website to get notified and be part of the class action suit in order to receive compensation from Anthropic. The Authors Guild said that authors whose books Anthropic had illegally downloaded from pirate sites qualify as potential members of the class action suit if they meet certain criteria. The Guild claimed that if the authors succeed in the trial, then they might be entitled to at least $750 per work. According to Civil and Criminal Penalties for Violation of Federal Copyright Laws, courts may order a convicted infringer to pay actual damages or statutory damages ranging from $750 to $30,000 per infringed work. Similarly, if a copyright owner proves willful infringement, a court may award statutory damages of as much as $150,000 per infringed work. Additionally, willful copyright infringement carries criminal penalties, including up to five years in prison and fines of as much as $250,000 per offence.
[26]
Anthropic settles class action from US authors alleging copyright infringement
(Reuters) - Artificial intelligence company Anthropic said in a court filing on Tuesday it had settled a class action lawsuit from a group of U.S. authors who argued the company's AI training infringed their copyrights. Details of the settlement were not immediately available. A California judge said in a June ruling that Anthropic may have illegally downloaded as many as 7 million books from pirate websites, which could have made it liable for billions of dollars in damages if the authors' case was successful. (Reporting by Blake Brittain; Editing by Chris Reese)
Anthropic has agreed to settle a class-action lawsuit with authors over the use of copyrighted books in AI training, potentially setting a precedent for future AI copyright cases.
In a significant development for the AI industry, Anthropic has agreed to settle a class-action lawsuit brought by authors over the use of copyrighted books in training its AI models. The settlement, described as "historic" by the plaintiffs' lawyer, is expected to be finalized by September 5, 2025 [1][2].
The lawsuit, known as Bartz v. Anthropic, was initiated by three authors: Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber. They alleged that Anthropic had illegally used their works, along with potentially millions of other books, to train its AI models [1]. The case gained prominence when U.S. District Judge William Alsup certified it as a class action, potentially allowing up to 7 million claimants to join [1].

In a previous ruling, Judge Alsup had determined that Anthropic's use of the books qualified as "fair use" under copyright law. However, he also found that the company had acquired some works through "shadow libraries" like LibGen, which constituted piracy [3]. This ruling left Anthropic vulnerable to significant financial penalties, with potential damages estimated in the billions or even over a trillion dollars [3].
While the specific terms of the settlement remain undisclosed, Justin A. Nelson, a lawyer representing the authors, stated, "This historic settlement will benefit all class members" [1][4]. The settlement is binding, according to court filings, and more details are expected to be revealed in the coming weeks [1][4].

The settlement marks a crucial moment in the ongoing debate over AI and copyright law. It could potentially serve as a benchmark for future negotiations and settlements in similar cases [4]. The AI industry had previously warned that if every author in the class filed a claim, it could "financially ruin" the entire sector [1].
This case is part of a larger trend of legal challenges facing AI companies over copyright issues. Anthropic is still involved in other lawsuits, including one with major record labels over the use of copyrighted lyrics [3]. The outcome of these cases is shaping the legal landscape for AI development and the use of copyrighted material [5].
While settlements don't set legal precedents, the resolution of this case will likely be closely watched by both the AI industry and content creators. It highlights the ongoing tension between technological innovation and intellectual property rights [4][5]. The settlement may provide some clarity, but many legal questions remain about how copyright law should be applied in the age of AI [4].

As the AI industry continues to evolve, these legal battles underscore the need for clearer legislation and guidelines governing the development and training of AI models using copyrighted materials [5].
Summarized by Navi