60 Sources
[1]
Judge: Anthropic's $1.5B settlement is being shoved "down the throat of authors"
At a hearing Monday, US district judge William Alsup blasted a proposed $1.5 billion settlement over Anthropic's rampant piracy of books to train AI. The proposed settlement comes in a case where Anthropic could have owed more than $1 trillion in damages after Alsup certified a class that included up to 7 million claimants whose works were illegally downloaded by the AI company. Instead, critics fear Anthropic will get off cheaply, striking a deal with the suing authors that covers fewer than 500,000 works and paying a small fraction of its total valuation (currently $183 billion) to get away with the massive theft. Defector noted that the settlement doesn't even require Anthropic to admit wrongdoing, while the company continues raising billions based on models trained on authors' works. Most recently, Anthropic raised $13 billion in a funding round after announcing the deal, roughly 10 times the proposed settlement amount. Alsup expressed grave concerns that lawyers rushed the deal, which he said now risks being shoved "down the throat of authors," Bloomberg Law reported. In an order, Alsup clarified why he thought the proposed settlement was a chaotic mess. The judge said he was "disappointed that counsel have left important questions to be answered in the future," seeking approval for the settlement even though the Works List, the Class List, the Claim Form, and the processes for notification, allocation, and dispute resolution all remain unresolved. Denying preliminary approval of the settlement, Alsup suggested that the agreement is "nowhere close to complete," forcing Anthropic and the authors' lawyers to "recalibrate" the largest publicly reported copyright class-action settlement ever inked, Bloomberg reported. Of particular concern, the settlement failed to outline how disbursements would be managed for works with multiple claimants, Alsup noted. Until all these details are ironed out, Alsup intends to withhold approval, the order said.
One big change the judge wants to see is the addition of instructions requiring "anyone with copyright ownership" to opt in, with the consequence that a work won't be covered if even one rights holder opts out, Bloomberg reported. There should also be an instruction that any disputes over ownership or submitted claims be settled in state court, Alsup said. To Alsup, the settlement seemingly risks setting up a future where courts are bogged down for years over disputes linked to the class action. That's perhaps a bigger concern if many authors and publishers miss out on filing claims or receiving payments, since the judge noted that class members frequently "get the shaft" in class actions where attorneys stop caring after monetary relief is granted, Bloomberg reported. Further, Alsup is worried that an improper notification scheme could leave Anthropic in a vulnerable position, facing future claimants "coming out of the woodwork later," Bloomberg reported, despite doling out more than $1 billion. "When they pay that kind of money, they're going to get the relief in the form of a clean bill of health going forward," Alsup said at the hearing, suggesting that the settlement must get Anthropic completely off the hook for future legal claims over the AI training piracy. Warning class counsel that he felt "misled," the judge asked for more information about the claims process, noting, "I have an uneasy feeling about hangers-on with all this money on the table." Following the hearing, the judge set a schedule to ensure that lists of covered works and class members would be finalized by September 15, with the claims process to be finalized by September 25. That schedule would position the court to potentially grant preliminary approval of the settlement by October 10, Alsup suggested.
Why the deal only covers about 500,000 works
As of this writing, the list of covered works spans about 465,000, Alsup said.
That's a far cry from the 7 million works that he initially certified as covered in the class. A breakdown from the Authors Guild -- which consulted on the case and is part of a Working Group helping to allocate claims of $3,000 per work to authors and publishers -- explained that "after accounting for the many duplicates," foreign editions, unregistered works, and books missing class criteria, "only approximately 500,000 titles meet the definition required to be part of the class." But duplicate downloads and other missing criteria don't explain why the payout per work seems so small, and that's a problem for authors who want higher payouts since this settlement could become a template in other cases where AI companies are accused of pirating works for AI training. According to the Authors Guild, "the Copyright Act gives courts discretion to award statutory damages of at least $750 and no more than $150,000 per infringed work when the infringement is willful, as is the case here." However, "when there are a large number of works at issue," it's rare that courts award maximum damages, the group said. Hoping to avoid a dragged-out legal battle that "could tie up the case for years," authors suing Anthropic settled on "a strong payout without the risks of trial," the Authors Guild said. Going that route, they supposedly "avoided years of delay through appeals" and "achieved a certain, immediate result that sends a powerful signal to the industry that piracy will cost you a lot," the Authors Guild suggested. The settlement will also likely serve to push more AI companies to avoid piracy and actually pay to license content for training, the group said. The Authors Guild confirmed that once the list is finalized, likely by October 10, a searchable database will be created for authors to confirm if their works are covered. Until then, authors can submit contact information through a website set up to manage the settlement process. 
That will ensure that authors are notified when the claims process begins, which, if the settlement is ultimately approved, will likely happen this fall, the Authors Guild said. "If your book is included in the class list, you will receive a formal notice by mail or email from the settlement administrator," the group said. "The notice will explain the terms of the settlement, your rights, and next steps. The Authors Guild will also share information to help authors understand the process."
[2]
Screw the money -- Anthropic's $1.5B copyright settlement sucks for writers | TechCrunch
Around half a million writers will be eligible for a payday of at least $3,000, thanks to a historic $1.5 billion settlement in a class action lawsuit that a group of authors brought against Anthropic. This landmark settlement marks the largest payout in the history of U.S. copyright law, but this isn't a victory for authors -- it's yet another win for tech companies. Tech giants are racing to amass as much written material as possible to train their LLMs, which power groundbreaking AI chat products like ChatGPT and Claude -- the same products that are endangering the creative industries, even if their outputs are milquetoast. These AIs can become more sophisticated when they ingest more data, but after scraping basically the entire internet, these companies are literally running out of new information. That's why Anthropic, the company behind Claude, pirated millions of books from "shadow libraries" and fed them into its AI. This particular lawsuit, Bartz v. Anthropic, is one of dozens filed against companies like Meta, Google, OpenAI, and Midjourney over the legality of training AI on copyrighted works. But writers aren't getting this settlement because their work was fed to an AI -- this is just a costly slap on the wrist for Anthropic, a company that just raised another $13B, because it illegally downloaded books instead of buying them. In June, federal judge William Alsup sided with Anthropic and ruled that it is, indeed, legal to train AI on copyrighted material. The judge argues that this use case is "transformative" enough to be protected by the fair use doctrine, a carve-out of copyright law that hasn't been updated since 1976. "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," the judge said. 
It was the piracy -- not the AI training -- that moved Judge Alsup to bring the case to trial, but with Anthropic's settlement, a trial is no longer necessary. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Aparna Sridhar, deputy general counsel at Anthropic, in a statement. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." As dozens more cases over the relationship between AI and copyrighted works go to court, judges now have Bartz v. Anthropic to reference as a precedent. But given the ramifications of these decisions, maybe another judge will arrive at a different conclusion.
[3]
"First of its kind" AI settlement: Anthropic to pay authors $1.5 billion
Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models. In a press release provided to Ars, the authors confirmed that the settlement is "believed to be the largest publicly reported recovery in the history of US copyright litigation." If a court approves the settlement, which covers 500,000 works that Anthropic pirated for AI training, each author will receive $3,000 per work that Anthropic stole. "Depending on the number of claims submitted, the final figure per work could be higher," the press release noted. Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted. Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action -- Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber -- confirmed that if the "first of its kind" settlement "in the AI era" is approved, the payouts will "far" surpass "any other known copyright recovery." "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," Nelson said. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." Groups representing authors celebrated the settlement on Friday. The CEO of the Authors Guild, Mary Rasenberger, said it was "an excellent result for authors, publishers, and rightsholders generally." Perhaps most critically, the settlement shows "there are serious consequences when" companies "pirate authors' works to train their AI, robbing those least able to afford it," Rasenberger said.
[4]
I Wasn't Sure I Wanted Anthropic to Pay Me for My Books -- I Do Now
A billion dollars isn't what it used to be -- but it still focuses the mind. At least it did for me when I heard that the AI company Anthropic agreed to a settlement of at least $1.5 billion for authors and publishers whose books were used to train an early version of its large language model, Claude. This came after a judge issued a summary judgment that it had pirated the books it used. The proposed agreement -- which is still under scrutiny by the wary judge -- would reportedly grant authors a minimum $3,000 per book. I've written eight and my wife has notched five. We are talking bathroom-renovation dollars here! Since the settlement is based on pirated books, it doesn't really address the big issue of whether it's OK for AI companies to train their models on copyrighted works. But it's significant that real money is involved. Previously the argument over AI copyright was based on legal, moral, and even political hypotheticals. Now that things are getting real, it's time to tackle the fundamental issue: Since elite AI depends on book content, is it fair for companies to build trillion-dollar businesses without paying authors? Legalities aside, I have been struggling with the issue. But now that we're moving from the courthouse to the checkbook, the film has fallen from my eyes. I deserve those dollars! Paying authors feels like the right thing to do. Despite the powerful forces (including US president Donald Trump) arguing otherwise. Before I go further, let me drop a whopper of a disclaimer. As I mentioned, I'm an author myself, and stand to gain or lose from the outcome of this argument. I'm also on the council of the Authors Guild, which is a strong advocate for authors and is suing OpenAI and Microsoft for including authors' works in their training runs. (Because I cover tech companies, I abstain on votes involving litigation with those firms.) Obviously, I'm speaking for myself today.
In the past, I've been a secret outlier on the council, genuinely torn on the issue of whether companies have the right to train their models on legally purchased books. The argument that humanity is building a vast compendium of human knowledge genuinely resonates with me. When I interviewed the artist Grimes in 2023, she expressed enthusiasm over being a contributor to this experiment: "Oh, sick, I might get to live forever!" she said. That vibed with me, too. Spreading my consciousness widely is a big reason I love what I do. But embedding a book inside a large language model built by a giant corporation is something different. Keep in mind that books are arguably the most valuable corpus that an AI model can ingest. Their length and coherency are unique tutors of human thought. The subjects they cover are vast and comprehensive. They are much more reliable than social media and provide a deeper understanding than news articles. I would venture to say that without books, large language models would be immeasurably weaker. So one might argue that OpenAI, Google, Meta, Anthropic and the rest should pay handsomely for access to books. Late last month, at that shameful White House tech dinner, CEOs took turns impressing Donald Trump with the insane sums they were allegedly investing in US-based data centers to meet AI's computation demands. Apple promised $600 billion, and Meta said it would match that amount. OpenAI is part of a $500 billion joint venture called Stargate. Compared to those numbers, the $1.5 billion that Anthropic agreed, as part of the settlement, to distribute to authors and publishers in the infringement case doesn't sound so impressive. Nonetheless, it could well be that the law is on the side of those companies.
Copyright law allows for something called "fair use," which permits the uncompensated exploitation of books and articles based on several criteria, one of which is whether the use is "transformational" -- meaning that it builds on the book's content in an innovative manner that doesn't compete with the original product. The judge in charge of the Anthropic infringement case has ruled that using legally obtained books in training is indeed protected by fair use. Determining this is an awkward exercise, since we are dealing with legal yardsticks drawn before the internet -- let alone AI. Obviously, there needs to be a solution based on contemporary circumstances. The White House's AI Action Plan announced this May didn't offer one. But in his remarks about the plan, Trump weighed in on the issue. In his view, authors shouldn't be paid -- because it's too hard to set up a system that would pay them fairly. "You can't be expected to have a successful AI program when every single article, book, or anything else that you've read or studied, you're supposed to pay for," Trump said. "We appreciate that, but just can't do it -- because it's not doable." (An administration source told me this week that the statement "sets the tone" for official policy.)
[5]
Judge in Anthropic AI Piracy Suit Worried Authors May 'Get the Shaft' in $1.5B Settlement
A federal judge on Monday put the brakes on a proposed $1.5 billion settlement to authors whose copyrighted works Anthropic pirated to train its Claude AI models. Judge William Alsup, of the US District Court for the Northern District of California, said the deal is "nowhere close to complete," and he will hold off on approving it until more questions are answered. Alsup's concerns seem to center on making sure authors have enough notice to join the suit, according to Bloomberg. In class action settlements, members can "get the shaft" once the terms are announced, Alsup said, which is why he wants more information from the parties before approval. Alsup also called out the authors' attorneys for adding additional lawyers (and their subsequent legal fees) to the case, which they said in court was to deal with settlement claim submissions. Alsup set a deadline of Sept. 15 to submit a final list of works covered by the settlement. If you think your works may qualify as part of the lawsuit, you can learn more on the Bartz settlement website. The settlement terms were made public last week, but the agreement has to be approved by the court before any payments can be made. Lawyers for the plaintiffs told CNET at the time that they expected about 500,000 books or works to be included, with an estimated payout of $3,000 per work. This latest move may result in different settlement terms, but it will certainly drag out what has already been a long court case. The case originated from copyright concerns, an important legal issue for AI companies and creators. Alsup ruled in June that Anthropic's use of copyrighted material was fair use, meaning it wasn't illegal, but the way the company obtained the books warranted further scrutiny.
In the ruling, it was revealed that Anthropic used shadow libraries like LibGen and then systematically acquired and destroyed thousands of used books to scan into its own digital library. The proposed settlement stems from those piracy claims. Without significant legislation and regulation, court cases like these have become very important checks on AI companies' power. Each case influences the next. Two days after Anthropic's fair use victory, Meta won a similar case. While many AI copyright cases are still winding their way through the courts, Anthropic's rulings and settlement terms will become an important benchmark for future cases.
[6]
Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement
Anthropic will pay at least $3,000 for each copyrighted work that it pirated. The company downloaded unauthorized copies of books in early efforts to gather training data for its AI tools. Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. The amount is well below what Anthropic may have had to pay if it had lost the case at trial. Experts said damages at trial could have run into the billions of dollars, with some estimates placing the total figure over $1 trillion. This is the first class action legal settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," says co-lead plaintiffs' counsel Justin Nelson of Susman Godfrey LLP. Anthropic is not admitting any wrongdoing or liability. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Anthropic deputy general counsel Aparna Sridhar said in a statement. The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs.
Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law. This June, senior district judge William Alsup ruled that Anthropic's AI training was shielded by the "fair use" doctrine, which allows unauthorized use of copyrighted works under certain conditions. It was a win for the tech company, but came with a major caveat. Anthropic had relied on a corpus of books pirated from so-called "shadow libraries," including the notorious site LibGen, and Alsup determined that the authors should still be able to bring Anthropic to trial in a class action over pirating their work. "Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). Authors argue Anthropic should have paid for these pirated library copies. This order agrees," Alsup wrote in his summary judgment. It's unclear how the literary world will respond to the terms of the settlement. Since this was an "opt-out" class action, authors who are eligible but dissatisfied with the terms will be able to request exclusion to file their own lawsuits. Notably, the plaintiffs filed a motion today to keep the "opt-out threshold" confidential, which means that the public will not have access to the exact number of class members who would need to opt out for the settlement to be terminated. This is not the end of Anthropic's copyright legal challenges. The company is also facing a lawsuit from a group of major record labels, including Universal Music Group, which alleges that the company used copyrighted lyrics to train its Claude chatbot.
The plaintiffs are now attempting to amend their case to include allegations that Anthropic used the peer-to-peer file sharing service BitTorrent to illegally download songs, and their lawyers recently stated in court filings that they may file a new lawsuit about piracy if they are not permitted to amend the current complaint.
[7]
Judge puts Anthropic's $1.5 billion book piracy settlement on hold
Anthropic's $1.5 billion book piracy settlement has been put on pause after the federal judge overseeing the class action case raised concerns about the terms of the agreement. During a hearing this week, Judge William Alsup rejected the settlement over concerns that class action lawyers will create a deal behind closed doors that they will force "down the throats of authors," according to reports from Bloomberg Law and the Associated Press. Anthropic agreed to pay the landmark settlement last week, putting to rest a class action lawsuit from US authors that accused the AI company of training its models on hundreds of thousands of copyrighted books. Judge Alsup let the class action lawsuit move forward after ruling that Anthropic training its AI models on purchased books counts as fair use, but that it could be liable for training on illegally downloaded work. In addition to his concerns over authors being strong-armed into signing a deal, Alsup says he needs to review more information about the claims process outlined in the settlement. "I have an uneasy feeling about hangers on with all this money on the table," Alsup said, as reported by Bloomberg Law. Under the settlement, authors and publishers would receive about $3,000 for covered works. As noted by AP, an attorney for the authors said there are around 465,000 books that would be covered by the settlement, but Judge Alsup asked for a solid number to ensure Anthropic doesn't get hit with other lawsuits "coming out of the woodwork." He added that class members will need to be given "very good notice" to make sure they're aware of the case. Maria Pallante, CEO of the Association of American Publishers, an industry group backing the authors' lawsuit, told AP that Alsup "demonstrated a lack of understanding of how the publishing industry works." Pallante said that "class actions are supposed to resolve cases, not create new disputes, and certainly not between the class members who were harmed in the first place." 
The authors' attorney, Justin Nelson, said in a statement to Bloomberg Law that the lawyers "care deeply that every single proper claim gets compensation." Judge Alsup will revisit the settlement during another hearing on September 25th. "We'll see if I can hold my nose and approve it," Alsup said, according to the AP.
[8]
Judge Halts Anthropic's $1.5 Billion Author Deal Over Transparency Concerns
Anthropic's $1.5 billion offer to settle a copyright lawsuit brought by authors has yet to be finalized. Judge William Alsup has denied the motion to approve the deal after finding some important claim details missing. "The district judge is disappointed that counsel have left important questions to be answered in the future, including respecting the Works List, the Class List, the Claim Form, and, particularly for works with multiple claimants, the processes for notification... allocation, and dispute resolution," Alsup wrote in an order filed before Monday's hearing. As Bloomberg Law reports, Alsup is concerned that the attorneys will stop caring once the monetary relief is approved. Therefore, he has asked the authors' counsel to provide "very good notice" to the writers, give them the option to opt in or out, and ensure they know they can't go after Anthropic over the same issue again. In the class action filed last year, authors claimed Anthropic used pirated versions of their books, without permission or compensation, to train the company's AI models. Anthropic decided to settle the case last month, with the final amount revealed last week. According to Bloomberg Law, Alsup has given the involved parties until Sept. 15 to submit a full list of works eligible for the $1.5 billion payout. That could be around 465,000 works, with each yielding close to $3,000. In addition to the works list, attorneys have been asked to get the class list and claim form for authors approved by the court by Oct. 10. Only after that will preliminary approval of the settlement be granted. At the hearing, Alsup also criticized the authors' counsel for assigning an "army" of lawyers to the case. The add-on lawyers won't get a piece of the settlement, and the attorneys' total payout will be decided based on the payout to class members, he added. Maria A. Pallante, president and CEO of the Association of American Publishers, says the court "demonstrated a lack of understanding of how the publishing industry works" and "seems to be envisioning a claims process that would be unworkable, and sees a world with collateral litigation between authors and publishers for years to come." A similar lawsuit alleging the use of pirated books for AI training was filed against Apple last week.
[9]
Anthropic to pay $1.5B over pirated books in Claude AI training -- payouts of roughly $3,000 per infringed work
A landmark settlement highlights the legal risks of using copyrighted materials without permission. Anthropic, the company behind Claude AI, has agreed to pay at least $1.5 billion to settle a class-action lawsuit brought by authors over the use of pirated books in training its large language models. The proposed settlement, filed September 5, comes after months of litigation that could change how AI companies acquire and manage data for model training. The class action was led by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, and accused Anthropic of downloading hundreds of thousands of copyrighted books from torrent-based sources like Library Genesis, Pirate Library Mirror, and Books3. They claim that doing so allowed the company to build Claude's underlying dataset. In June, a federal judge had allowed the case to proceed on the narrow issue of unauthorized digital copying, setting the stage for a trial in December. Instead, Anthropic has agreed to a non-reversionary settlement fund starting at $1.5 billion, with payouts of approximately $3,000 per infringed work. This figure may increase as more titles are identified. Anthropic will also be required to delete the infringing data, though there is currently no indication that the court will force the company to delete or retrain its models -- a process known as model disgorgement -- under the current agreement. This marks the largest publicly disclosed AI copyright settlement to date. OpenAI has also settled with publishers in a separate matter, but the specific details of these deals are confidential. While Anthropic's settlement is not an admission that they've done anything wrong, the sheer scale of the payout sets a new benchmark for data liability in generative AI development.
It's worth noting that this case doesn't challenge the broader legality of training AI on public or lawfully obtained content -- a separate issue is still working its way through the courts -- but it does highlight the legal risk and potential financial cost of using pirated material even if the intent is research and the content is later purchased. As Judge Alsup put it in his June ruling, "That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft," adding that it might affect the extent of statutory damages owed to rights holders. If models trained on pirated data face lawsuits or potential forced retraining, developers may need to start over using clean, licensed datasets. That means redoing training runs that already consumed millions of GPU hours, lending a huge boost to compute demand. Nvidia's H100 and upcoming Blackwell GPUs, as well as AMD's MI300X and HBM3e providers, could all benefit as courts force labs to scramble and revalidate their models. That is just speculation for the moment, but it will be interesting to see how courts decide to rule on related matters in the future.
[10]
An AI startup has agreed to a $2.2 billion copyright settlement. But will Australian writers benefit?
University of Sydney provides funding as a member of The Conversation AU. Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson. If the settlement is approved by the presiding judge, the company will pay authors about US$3,000 for each of the estimated 500,000 books included in the agreement. It will destroy illegally downloaded books and refrain from using pirated books to train chatbots in the future. This is the largest copyright settlement in US history, establishing a crucial legal precedent for the evolving relationship between AI companies and content creators. It will have implications for numerous other copyright cases currently underway against AI companies OpenAI, Microsoft, Google, and most recently Apple. In June, Meta prevailed in a copyright case brought against it, though the ruling left open the possibility of other lawsuits. The settlement follows a landmark US ruling on AI development and copyright law, issued in June 2025, that separated legal AI training from illegal acquisition of content. Anthropic allegedly pirated over seven million books from two online "shadow libraries" in June 2021 and July 2022. The plaintiffs and Anthropic are expected to finalise a list of works to be compensated by September 15.
Cautious optimism
In Australia, the response to news of a potential settlement has been cautiously optimistic. Stuart Glover, head of policy at the Australian Publishers Association, told me: We welcome these court-enforced steps towards accountability, but this settlement shows why AI companies must respect copyright and pay creators -- not just see what they can get away with.
And for the sake of Australian authors and publishers whose works have been unlawfully scraped without compensation under this action, it's a clear call for the Australian government to maintain copyright and mandate that AI companies pay for what they use. Lucy Hayward, CEO of the Australian Society of Authors, told me: While all of the details are yet to be revealed, this settlement could represent a very welcome acknowledgement that AI companies cannot just steal authors' and artists' work to build their large language models. However, in the Anthropic case, authors will only be compensated if their publishers have registered their work with the US Copyright Office within a certain timeframe. Hayward expressed concern about this, as the seven million works that are alleged to have been pirated were written by authors from around the world and "we suspect many international authors may miss out on settlement money." She has called on the Australian government to introduce new laws requiring tech companies to "pay ongoing compensation to creators where Australian books have been used to train models offshore".

Legal risks

In June, US judge William Alsup ruled that using books to train AI was not a violation of US copyright law. But he ruled Anthropic would still have to stand trial over its use of pirated copies to build its library of material. Judge Alsup has since criticised the settlement for its loopholes. He has scheduled another hearing for September 25. "We'll see if I can hold my nose and approve it," he said. If the settlement is not approved, Anthropic risks significantly greater financial repercussions. The trial is scheduled for December. If the company loses the case, US copyright law allows for statutory damages of up to $150,000 per infringed work in cases of wilful copyright infringement. 
William Long, a legal analyst at Wolters Kluwer, suggests potential damages in a trial could reach multiple billions of dollars, potentially jeopardising or even bankrupting the company. Anthropic recently secured new funding worth US$13 billion, bringing its total value to $183 billion. Keith Kupferschmid, president and CEO of the US-based Copyright Alliance, has argued that this is evidence "AI companies can afford to compensate copyright owners for their works without it undermining their ability to continue to innovate and compete". For Mary Rasenberger, CEO of the Authors Guild, the historic settlement is "an excellent result for authors, publishers, and rightsholders". Rasenberger expects "the settlement will lead to more licensing that gives author[s] both compensation and control over the use of their work by AI companies, as should be the case in a functioning free market society."

A step forward

While this particular settlement may offer little help to Australian writers and publishers whose works are not registered with the US Copyright Office, overall it is at least potentially good news for creators globally. It represents a step towards the establishment of a legitimate licensing scheme. Australian copyright law does not include a US-style "fair use" exception, which AI companies claim protects their training practices. There have been calls to change the law, with major AI players, including Google and Microsoft, lobbying the Australian government for copyright exemptions. The recent Productivity Commission interim report proposed a text and data mining exception to the Australian Copyright Act, which would allow AI training on copyrighted Australian work. The proposal faced strong opposition from the Australian Society of Authors and the publishing industry. As Arts Minister Tony Burke stated in August 2025, the government has "no plans, no intention, no appetite to be weakening" our copyright laws. 
The Australian publishing industry is not entirely opposed to AI, but significant legal and ethical challenges remain. The Australian Publishers Association has advocated for government policies on AI that prioritise a clear ethical framework, transparency, appropriate incentives and protections for creators, and a balanced policy approach, so that "both AI development and cultural industries can flourish".
[11]
Anthropic to Pay $1.5 Billion to Settle Author Copyright Claims
By Shirin Ghaffary, Annelise Levy (Bloomberg Law) and Aruni Soni (Bloomberg Law) Anthropic PBC will pay $1.5 billion to resolve an authors' copyright lawsuit over the AI company's downloading of millions of pirated books, one of the largest settlements over artificial intelligence and intellectual property to date. A request for preliminary approval of the accord, involving one of the fastest-growing AI startups, was filed Friday with a San Francisco federal judge who had set the closely watched case for trial in December.
[12]
Anthropic coughs up $1.5B to authors whose work it stole
AI upstart Anthropic has agreed to create a $1.5 billion fund it will use to compensate authors whose works it used to train its models without seeking or securing permission. News of the settlement emerged late last week in a filing [PDF] in the case filed by three authors - Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson - who claimed that Anthropic illegally used their works. Anthropic admitted to having bought millions of physical books and then digitizing them. The company also downloaded millions of pirated books from the notorious Library Genesis and Pirate Library Mirror troves of stolen material. The company nonetheless won part of the case, on grounds that scanning books is fair use and using them to create "transformative works" - the output of an LLM that doesn't necessarily include excerpts from the books - was also OK. But the decision also found Anthropic broke the law by knowingly ingesting pirated books. Plaintiffs intended to pursue court action over those pirated works, but the filing details a proposed settlement under which Anthropic will create a $1.5 billion fund that values each pirated book it used for training at $3,000. Anthropic also agreed to destroy the pirated works. In the filing, counsel observes that this is the largest copyright recovery ever achieved in the USA and suggests it "will set a precedent of AI companies paying for their use of alleged pirated websites." This settlement is indeed significant given that several other major AI companies - among them Perplexity AI and OpenAI - face similar suits. It may also set a precedent that matters in Anthropic's dispute with Reddit over having scraped the forum site's content to feed into its training corpus. The filing asks the court to approve the settlement, a request judges rarely deny. 
"We're going to see a lot more of this," according to Daryl Plummer, a distinguished VP analyst at Gartner. Plummer predicted the AI industry will have to build "slush funds" to handle copyright and other legal claims. "Death by AI claims will rise one thousand percent," he told The Register, suggesting that as people turn to AI for advice and counsel, sometimes with disastrous outcomes, their loved ones will seek recompense. Anthropic appears not to have commented on the settlement. For what it's worth, it can cover the cost of the settlement with some of the $13 billion in funding it announced last week. ®
[13]
Anthropic Will Pay $1.5 Billion to Authors in Landmark AI Piracy Lawsuit
Anthropic will pay $1.5 billion to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its Claude AI models. The settlement was announced Aug. 29, as the parties in the lawsuit filed a motion with the 9th US Circuit Court of Appeals indicating they had reached an agreement. "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era," Justin Nelson, lawyer for the authors, told CNET. "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." The settlement still needs to be approved by the court, which it could do at a hearing on Monday, Sept. 8. Authors in the class could receive approximately $3,000 per pirated work, according to their attorneys' estimates. They expect the case will include at least 500,000 works, with Anthropic paying an additional $3,000 for any materials added to the case. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," Aparna Sridhar, Anthropic's deputy general counsel, told CNET. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." This settlement is the latest update in a string of legal moves and rulings between the AI company and authors. 
Earlier this summer, US Senior District Court Judge William Alsup ruled Anthropic's use of the copyrighted materials was justifiable as fair use -- a concept in copyright law that allows people to use copyrighted content without the rights holder's permission for specific purposes, like education. The ruling was the first time a court sided with an AI company and said its use of copyrighted material qualified as fair use, though Alsup said this may not always be true in future cases. Two days after Anthropic's victory, Meta won a similar case under fair use. Alsup's ruling also revealed that Anthropic systematically acquired and destroyed thousands of used books to scan them into a private, digitized library for AI training. It was this claim that was recommended for a secondary, separate trial that Anthropic has decided to settle out of court. In class action suits, the terms of a settlement need to be reviewed and approved by the court. The settlement means both groups "avoid the cost, delay and uncertainty associated with further litigating the case," Christian Mammen, an intellectual property lawyer and San Francisco office managing partner at Womble Bond Dickinson, told CNET. "Anthropic can move forward with its business without being the first major AI platform to have one of these copyright cases go to trial," Mammen said. "And the plaintiffs can likely receive the benefit of any financial or non-financial settlement terms sooner. If the case were litigated through trial and appeal, it could last another two years or more." Copyright cases like these highlight the tension between creators and AI developers. AI companies have been pushing hard for fair use exceptions as they gobble up huge swaths of data to train their models and don't want to pay or wait to license them. 
Without legislation guiding how companies can develop and train AI, court cases like these have become important in shaping the future of the products people use daily. "The terms of this settlement will likely become a data point or benchmark for future negotiations and, possibly, settlements in other AI copyright cases," said Mammen. Every case is different and needs to be weighed on its merits, he added, but it still could be influential. There are still big questions about how copyright law should be applied in the age of AI. Just like how we saw Alsup's Anthropic analysis referenced in Meta's case, each case helps build precedent that guides the legal guardrails and green lights around this technology. The settlement will bring this specific case to an end, but it doesn't give any clarity to the underlying legal dilemmas that AI raises. "This remaining uncertainty in the law could open the door to a further round of litigation," Mammen said, "involving different plaintiffs and different defendants, with similar legal issues but different facts."
[14]
Anthropic's $1.5 billion copyright settlement faces judge's scrutiny
Sept 9 (Reuters) - A federal judge in San Francisco has declined, for now, to approve a landmark $1.5 billion settlement announced Friday between artificial intelligence company Anthropic and a class of authors suing it for copyright infringement. U.S. District Judge William Alsup during a hearing on Monday ordered both sides to provide more details, a court filing said. The judge in a Sunday order had said he was "disappointed" the parties had left important questions about the settlement unanswered. The settlement fund amounts to $3,000 for each of roughly 500,000 downloaded books. The parties said this could grow if more works are identified. Alsup ordered the parties on Monday to provide fuller lists of the works and authors affected by the settlement by September 15 and other clarifications by September 22. Leaders of the Association of American Publishers and the Authors Guild attended the Monday hearing. AAP's president, Maria Pallante, said in a statement that Alsup had "demonstrated a lack of understanding of how the publishing industry works." "It's critical that the number of works included in the settlement is complete, and the Court's reluctance to give the parties time to do that -- without any explanation -- is troubling," Pallante said. "Similarly, the Court seems to be envisioning an administratively challenging claims process that would be unworkable for the class members, and lead authors and publishers into collateral litigation for years to come." Authors Guild CEO Mary Rasenberger said in a separate statement that she was "shocked by the court's offhand suggestion" during the hearing that the groups were "working behind the scenes in ways that could pressure authors, when that is precisely the opposite of our proposed role as informational advisors" representing author and publisher interests in the settlement. 
Author Andi Bartz, one of the writers suing Anthropic, said in a statement on Tuesday that the settlement was "an initial, corrective step in a critical battle" against AI companies. Another plaintiff, Kirk Johnson, said the settlement marked "the beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Spokespeople for Anthropic did not immediately respond to a request for comment on Tuesday. Bartz, Johnson and writer Charles Graeber filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The proposed deal announced Friday marked the first settlement in a string of lawsuits targeting tech companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their alleged misuse of copyrighted material to train generative AI systems. Reporting by Blake Brittain in Washington. Blake Brittain reports on intellectual property law, including patents, trademarks, copyrights and trade secrets, for Reuters Legal. He has previously written for Bloomberg Law and Thomson Reuters Practical Law and practiced as an attorney.
[15]
AI start-up Anthropic settles landmark copyright suit for $1.5bn
AI start-up Anthropic has agreed to pay $1.5bn to settle a copyright lawsuit over its use of pirated texts, setting a precedent for tech companies facing intellectual property cases from authors and publishers. The settlement, which must be approved by the San Francisco federal judge overseeing the case, would be "the largest publicly reported copyright recovery in history", according to a court filing on Friday. The class action suit was brought by authors who claimed Anthropic had downloaded 465,000 books and other texts from "pirated websites" including Library Genesis and Pirate Library Mirror, which it then used to train its large language models. Failure to reach an agreement would have led to a trial, with the prospect of damages of up to $1tn. That would have bankrupted Anthropic, a four-year-old start-up backed by Amazon and Google that was recently valued at $170bn. The case is among several facing artificial intelligence companies, including OpenAI and Meta, alleging they have improperly used copyrighted works to train their models. The results will help determine how authors are compensated for the use of their works and could have significant ramifications for how AI companies train their models and the costs of developing them. Mary Rasenberger, chief executive of the Authors Guild, said Friday's settlement sent a "strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it". Anthropic and other AI companies have claimed that training models on copyrighted books is fair use, arguing that their models transform the original work into something with a new meaning. In June, the California district court ruled that Anthropic's use of some copyrighted works in such a way was fair. But it determined that storing pirated works was "inherently, irredeemably infringing", teeing up Friday's settlement. 
Friday's settlement also means Anthropic will have to destroy the datasets it had downloaded from Library Genesis and Pirate Library Mirror. "In June, the district court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Anthropic's deputy general counsel Aparna Sridhar in a statement. "We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery and solve complex problems," she added.
[16]
Claude Maker Anthropic Agrees to Pay $1.5 Billion to Authors
Anthropic, the company behind chatbot Claude, has agreed to pay "at least" $1.5 billion to settle a class-action lawsuit by authors that accused the AI company of downloading millions of pirated books to train its AI. Amid numerous lawsuits accusing the world's largest AI firms of improperly using copyrighted data for training -- including ChatGPT-maker OpenAI and AI search engine Perplexity -- this represents one of the largest AI copyright settlements yet. The settlement works out to close to $3,000 for each of the 500,000 works mentioned in the class action. According to the filing, the settlement, if approved, "will be the largest publicly reported copyright recovery in history, larger than any other copyright class-action settlement or any individual copyright case litigated to final judgment." In addition to providing monetary compensation, the Amazon-backed AI startup will be required to destroy all remaining copies of the pirated books within 30 days of the judgment. Though Anthropic has agreed to the settlement terms, the court still needs to approve them. The final decision may take until 2026, according to the filing. In the case filed last year, the plaintiffs had alleged that Anthropic committed large-scale copyright infringement by downloading and commercially exploiting books that it obtained from pirated datasets online, including an online library called Books3. Mary Rasenberger, CEO of the Authors Guild and Authors Guild Foundation, praised the results for "authors, publishers, and rightsholders generally," stating that the settlement is a "strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." OpenAI is currently facing its own class-action lawsuit from authors and novelists, including Game of Thrones author George R.R. 
Martin, who accuse it of "systematic theft on a mass scale" for using their written works to train its systems. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[17]
Judge skewers $1.5 billion Anthropic settlement with authors in pirated books case over AI training
A federal judge on Monday skewered a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots, raising the specter that the case could still end up going to trial. After spending nearly an hour mostly lambasting a settlement that he believes is full of pitfalls, U.S. District Judge William Alsup scheduled another hearing in San Francisco on September 25 to review whether his concerns had been addressed. "We'll see if I can hold my nose and approve it" then, Alsup said before adjourning Monday's hearing. The judge's misgivings emerged just a few days after Anthropic and attorneys who filed the class-action lawsuit announced a $1.5 billion settlement that is designed to resolve the pirating claims and avert a trial that had been scheduled to begin in December. Alsup had dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites to help improve its Claude chatbot. The proposed settlement would pay authors and publishers about $3,000 for each of the books covered by the agreement. Justin Nelson, an attorney for the authors, told Alsup that about 465,000 books are on the list of works pirated by Anthropic. The judge said he needed more ironclad assurances that number won't swell, to ensure the company doesn't get blindsided by more lawsuits "coming out of the woodwork." The judge set a September 15 deadline for a "drop-dead list" of all the books that were pirated. Alsup's main concern centered on how the claims process will be handled, in an effort to ensure everyone eligible knows about it so the authors don't "get the shaft." He set a September 22 deadline for submitting a claims form for him to review ahead of the Sept. 25 hearing. 
The judge also raised worries about two big groups connected to the case -- the Authors Guild and Association of American Publishers -- working "behind the scenes" in ways that could pressure some authors to accept the settlement without fully understanding it. Both Authors Guild CEO Mary Rasenberger and Association of American Publishers CEO Maria Pallante attended Monday's hearing, but didn't speak. The trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- who sued last year also sat in the front row of the court gallery, but didn't address Alsup. Before the hearing, Johnson, author of "The Feather Thief" and other books, described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Nelson, the lawyer for the authors, sought to assure Alsup that he and other lawyers in the case were confident the money will be fairly distributed because the case has been widely covered by the media, with some stories landing on the front pages of major newspapers. "This is not an under-the-radar warranty case," Nelson said. Alsup made it clear, though, that he was leery about the settlement and warned he may decide to let the case go to trial. "I have an uneasy feeling about all the hangers-on in the shadows," the judge said.
[18]
Judge reviews $1.5B Anthropic settlement proposal with authors over pirated books for AI training
SAN FRANCISCO (AP) -- A federal judge has begun reviewing a landmark class-action settlement agreement between the artificial intelligence company Anthropic and book authors who say the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors and publishers $1.5 billion, amounting to about $3,000 for each of an estimated 500,000 books covered by the settlement. But U.S. District Judge William Alsup has raised some questions about the details of the agreement and asked representatives of author and publisher groups to appear in court Monday to discuss. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. Johnson, author of "The Feather Thief" and other books, said he planned to attend the hearing on Monday and described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Alsup dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. Had Anthropic and the authors not agreed to settle, the case would have gone to trial in December.
[19]
Anthropic will pay a record-breaking $1.5 billion to settle copyright lawsuit with authors
Writers involved in the case will reportedly receive $3,000 per work. Anthropic will pay a record-breaking $1.5 billion to settle a class-action piracy lawsuit brought by authors and publishers. The settlement is the largest-ever payout for a copyright case in the United States. The AI company behind the Claude chatbot reached a settlement in the case last week, but terms of the agreement weren't disclosed at the time. Now, The New York Times reports that authors in the case will get $3,000 for each of the roughly 500,000 works involved. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," Anthropic's Deputy General Counsel Aparna Sridhar said in a statement. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems."
[20]
AI firm Anthropic agrees to pay authors $1.5bn for pirating work
Anthropic said on Friday that the settlement would "resolve the plaintiffs' remaining legacy claims." The settlement comes as other big tech companies including ChatGPT-maker OpenAI, Microsoft, and Instagram-parent Meta face lawsuits over similar alleged copyright violations. Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative among its competitors. "We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, Deputy General Counsel at Anthropic which is backed by both Amazon and Google-parent Alphabet. The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson. They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion dollar business. The company holds more than seven million pirated books in a central library, according to Judge Alsup's June decision, and faced up to $150,000 in damages per copyrighted work. His ruling was among the first to weigh in on how Large Language Models (LLMs) can legitimately learn from existing material. It found that Anthropic's use of the authors' books was "exceedingly transformative" and therefore allowed under US law. But he rejected Anthropic's request to dismiss the case. Anthropic was set to stand trial in December over its use of pirated copies to build its library of material. Plaintiffs lawyers called the settlement announced Friday "the first of its kind in the AI era." "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," said lawyer Justin Nelson representing the authors. 
"This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." The settlement could encourage more cooperation between AI developers and creators, according to Alex Yang, Professor of Management Science and Operations at London Business School. "You need that fresh training data from human beings," Mr Yang said. "If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions."
[21]
Anthropic Agrees to $1.5 Billion Settlement for Downloading Pirated Books to Train AI
Authors sued after it was revealed Anthropic downloaded the books from Library Genesis. Anthropic has agreed to pay $1.5 billion to settle a lawsuit brought by authors and publishers over its use of millions of copyrighted books to train the models for its AI chatbot Claude, according to a legal filing posted online. A federal judge found in June that Anthropic's use of 7 million pirated books was protected under fair use but that holding the digital works in a "central library" violated copyright law. The judge ruled that executives at the company knew they were downloading pirated works, and a trial was scheduled for December. The settlement, which was presented to a federal judge on Friday, still needs final approval but would pay $3,000 per book to hundreds of thousands of authors, according to the New York Times. The $1.5 billion settlement would be the largest payout in the history of U.S. copyright law, though the amount paid per work has often been higher. For example, in 2012, a woman in Minnesota paid about $9,000 per song downloaded, a figure brought down after she was initially ordered to pay over $60,000 per song. In a statement to Gizmodo on Friday, Anthropic touted the earlier ruling from June that it was engaging in fair use by training models with millions of books. “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," Aparna Sridhar, deputy general counsel at Anthropic, said in a statement by email. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Sridhar continued. According to the legal filing, Anthropic says the payments will go out in four tranches tied to court-approved milestones. 
The first payment would be $300 million within five days after the court's preliminary approval of the settlement, and another $300 million within five days of the final approval order. Then $450 million would be due, with interest, within 12 months of the preliminary order. And finally $450 million within the year after that. Anthropic, which was recently valued at $183 billion, is still facing lawsuits from companies like Reddit, which struck a deal in early 2024 to let Google train its AI models on the platform's content. And authors still have active lawsuits against the other big tech firms like OpenAI, Microsoft, and Meta. The ruling from June explained that Anthropic's training of AI models with copyrighted books would be considered fair use under U.S. copyright law because theoretically someone could read "all the modern-day classics" and emulate them, which would be protected: "...not reproduced to the public a given work's creative elements, nor even one author's identifiable expressive style... Yes, Claude has outputted grammar, composition, and style that the underlying LLM distilled from thousands of works. But if someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not." "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," the ruling said. Under this legal theory, all the company needed to do was buy every book it pirated to lawfully train its models, something that certainly costs less than $3,000 per book. But as the New York Times notes, this settlement won't set any legal precedent that could determine future cases because it isn't going to trial.
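As a back-of-the-envelope check on the reported figures, the four tranches described above do sum to the $1.5 billion headline number, and that total divided by the estimated class size matches the widely reported ~$3,000 per work (a sketch using only the amounts in the filing as reported here; interest on the later tranches is ignored):

```python
# Rough arithmetic check of the reported settlement terms.
# All figures come from the coverage above; interest is ignored.
tranches = {
    "within 5 days of preliminary approval": 300_000_000,
    "within 5 days of final approval": 300_000_000,
    "within 12 months of preliminary order": 450_000_000,
    "within the following year": 450_000_000,
}

total = sum(tranches.values())
estimated_works = 500_000  # class size estimated in the filings

print(total)                     # 1500000000 -- the $1.5 billion headline figure
print(total // estimated_works)  # 3000 -- the reported ~$3,000 per work
```

Note that the per-work figure is only an average; the final list of works (due September 15) and the claims process will determine actual disbursements.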
[22]
Anthropic agrees to pay $1.5 billion to settle author class action
Sept 5 (Reuters) - Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts.
The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances." Reporting by Blake Brittain and Mike Scarcella in Washington; Editing by David Bario, Lisa Shumaker and Matthew Lewis. Blake Brittain reports on intellectual property law, including patents, trademarks, copyrights and trade secrets, for Reuters Legal. He has previously written for Bloomberg Law and Thomson Reuters Practical Law and practiced as an attorney.
[23]
Anthropic agrees to pay $1.5 billion to settle authors' copyright lawsuit
If Anthropic's settlement is approved, it will be the largest publicly reported copyright recovery in history, the filing said. Anthropic has agreed to pay at least $1.5 billion to settle a class action lawsuit with a group of authors, who claimed the artificial intelligence startup had illegally accessed their books. The company will pay roughly $3,000 per book plus interest, and agreed to destroy the datasets containing the allegedly pirated material, according to a filing on Friday. The lawsuit against Anthropic has been closely watched by AI startups and media companies that have been trying to determine what copyright infringement means in the AI era. If Anthropic's settlement is approved, it will be the largest publicly reported copyright recovery in history, according to the filing. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," Justin Nelson, the attorney for the plaintiffs, told CNBC in a statement. Anthropic didn't immediately respond to CNBC's request for comment. The lawsuit, filed in the U.S. District Court for the Northern District of California, was brought last year by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson. The suit alleged that Anthropic had carried out "largescale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets," the filing said. In June, a judge ruled that Anthropic's use of books to train its AI models was "fair use," but ordered a trial to assess whether the company infringed on copyright by obtaining works from the databases Library Genesis and Pirate Library Mirror. The case was slated to proceed to trial in December, according to Friday's filing. Earlier this week, Anthropic said it closed a $13 billion funding round that valued the company at $183 billion. The financing was led by Iconiq, Fidelity Management and Lightspeed Venture Partners.
[24]
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material
NEW YORK (AP) -- Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. 
Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. 
"It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[25]
Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit
In one of the largest copyright settlements involving generative artificial intelligence, Anthropic AI, a leading company in the generative AI space, has agreed to pay $1.5 billion to settle a copyright infringement lawsuit brought by a group of authors. If the court approves the settlement, Anthropic will compensate authors around $3,000 for each of the estimated 500,000 books covered by the settlement. The settlement, which U.S. Senior District Judge William Alsup in San Francisco will consider approving next week, is in a case that involved the first substantive decision on how fair use applies to generative AI systems. It also suggests an inflection point in the ongoing legal fights between the creative industries and the AI companies accused of illegally using artistic works to train the large language models that underpin their widely-used AI systems. The fair use doctrine enables copyrighted works to be used by third parties without the copyright holder's consent in some circumstances, such as when illustrating a point in a news article. AI companies trying to make the case for the use of copyrighted works to train their generative AI models commonly invoke fair use. But authors and other creative industry plaintiffs have been pushing back. "This landmark settlement will be the largest publicly reported copyright recovery in history," the settlement motion states, arguing that it will "provide meaningful compensation" to authors and "set a precedent of AI companies paying for their use of pirated websites." "This settlement marks the beginning of a necessary evolution toward a legitimate, market-based licensing scheme for training data," said Cecilia Ziniti, a tech industry lawyer and former Ninth Circuit clerk who is not involved in this specific case but has been following it closely.
"It's not the end of AI, but the start of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution." Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed their complaint against Anthropic for copyright infringement in 2024. The class action lawsuit alleged Anthropic AI used the contents of millions of digitized copyrighted books to train the large language models behind its chatbot, Claude, including at least two works by each plaintiff. The company also bought some hard copy books and scanned them before ingesting them into its model. The company has admitted to doing as much, a fact that the plaintiffs raise in their complaint. "Anthropic has admitted to using The Pile to train Claude," the complaint states. (The Pile is a big, open-source dataset created for large language model training.) "Rather than obtaining permission and paying a fair price for the creations it exploits, Anthropic pirated them," the authors' complaint states. In his June ruling, Judge Alsup agreed with Anthropic's argument, stating the company's use of books by the plaintiffs to train its AI model was acceptable. "The training use was a fair use," he wrote. "The use of the books at issue to train Claude and its precursors was exceedingly transformative." However, the judge ruled that Anthropic's use of millions of pirated books to build its models - books that websites such as Library Genesis (LibGen) and Pirate Library Mirror (PiLiMi) copied without getting the authors' consent or giving them compensation - was not. He ordered this part of the case to go to trial. "We will have a trial on the pirated copies used to create Anthropic's central library and the resulting damages, actual or statutory (including for willfulness)," the judge wrote in the conclusion to his ruling. Last week, the parties announced they had reached a settlement. U.S.
copyright law states that willful copyright infringement can lead to statutory damages of up to $150,000 per infringed work. The judge's order asserts that Anthropic pirated more than 7 million copies of books. So the damages resulting from a trial, if it had gone ahead, could have been enormous. However, Ziniti said that regardless of the settlement, the judge's ruling effectively means that at least in Northern California, AI companies now have the legal right to train their large language models on copyrighted works -- as long as they obtain copies of those works legally. In statements to NPR, both sides appear satisfied with the outcome of the case. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Anthropic Deputy General Counsel Aparna Sridhar. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." "This landmark settlement is the first of its kind in the AI era," said Justin Nelson, an attorney on the team representing the authors. "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." The settlement also met with approval from the creative community. "This historic settlement is a vital step in acknowledging that AI companies cannot simply steal authors' creative work to build their AI just because they need books to develop quality large language models," said Authors Guild CEO Mary Rasenberger. "We expect that the settlement will lead to more licensing that gives authors both compensation and control over the use of their work by AI companies, as should be the case in a functioning free market society." 
"While the settlement amount is very significant and represents a clear victory for the publishers and authors in the class, it also proves what we have been saying all along -- that AI companies can afford to compensate copyright owners for their works without it undermining their ability to continue to innovate and compete," added Keith Kupferschmid, president and CEO of the Copyright Alliance. Anthropic is in a good position to handle the sizable compensation. On Tuesday, the company announced the completion of a new funding round worth $13 billion, bringing its total value to $183 billion. Meanwhile, the literary world and other parts of the creative sector continue to fight against AI companies. There have been a slew of literary AI copyright infringement lawsuits launched by prominent authors, including Ta-Nehisi Coates and the comedian Sarah Silverman in recent years. In June, U.S. District Judge Vince Chhabria granted Meta's request for a summary judgment in Coates and Silverman's case against the tech corporation, which effectively put an end to that lawsuit. Other cases are ongoing. And in the latest in a string of legal actions involving major entertainment corporations, on Friday, Warner Bros. Discovery filed a lawsuit in California federal court against AI image generator Midjourney for copyright infringement. NPR has reached out to Midjourney for comment.
[26]
Anthropic to pay $1.5 billion in copyright settlement
Why it matters: The settlement marks a turning point in the clash between AI companies and content owners, which could alter how training data is sourced and inspire more licensing deals.
Zoom in: The judge ruled that Anthropic's approach of buying physical books and making digital copies for training its large language models was fair use, but found that the company had illegally acquired millions of copyrighted books.
* Anthropic will pay an estimated $3,000 per work across roughly 500,000 works.
* The company will also delete the pirated works it downloaded from shadow libraries like Library Genesis and Pirate Library Mirror.
* "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Anthropic deputy general counsel Aparna Sridhar said in a statement.
Between the lines: The case spotlights a tension in the AI era: courts have ruled that training on copyrighted material can qualify as fair use, but how companies obtain that data still carries legal ramifications.
Zoom out: Since the lawsuit was settled instead of going to trial, it will not set a legal precedent.
* But it raises the stakes for dozens of similar lawsuits and could push more AI companies toward licensing, much like the music industry's shift to licensed distribution after its battles with Napster.
What's next: The judge must approve the settlement.
[27]
AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit
Settlement could be pivotal after authors claimed company took pirated copies of their work to train chatbots Artificial intelligence company Anthropic has agreed to pay $1.5bn to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through piracy websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. US district judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data - in essence, billions of words carefully strung together - that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. 
Alsup's June ruling found that Anthropic had downloaded more than 7m digitized books that it "knew had been pirated". It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel The Lost Night by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5m copies from the pirate website Library Genesis, or LibGen, and at least 2m copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award - approximately $3,000 per work - likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, the CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it".
[28]
Anthropic agrees to settle authors' AI lawsuit for $1.5 billion
AI company Anthropic has agreed to settle a lawsuit from authors. The cost: $1.5 billion. A judge still needs to approve the settlement, but lawyers representing the group of authors celebrated the major update in the case. "As best as we can tell, it's the largest copyright recovery ever," Justin Nelson, a lawyer for the authors, told the Associated Press. "It is the first of its kind in the AI era." The authors' class-action lawsuit argued that Anthropic took pirated copies of their books to train its AI chatbot, Claude. The lawsuit covered about 500,000 works, meaning the total payout could come in around $3,000 per work, should the settlement be approved. Aparna Sridhar, Anthropic's deputy general counsel, emphasized to Ars Technica in a statement that the court found "Anthropic's approach to training AI models constitutes fair use." "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," Sridhar told Ars. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems." Should the settlement ultimately be approved, it could prove to be an important landmark in the fight against AI companies. Many artists, publishers, and creatives have sued AI companies, including famous authors George RR Martin and John Grisham, who, among others, sued OpenAI, claiming it infringed their copyrights to train its model.
[29]
Anthropic tells US judge it will pay $1.5 billion to settle author class action
Sept 5 (Reuters) - Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their use of copyrighted material to train generative AI systems. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose.
A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances." Reporting by Blake Brittain and Mike Scarcella in Washington; Editing by David Bario and Lisa Shumaker.
[30]
The Anthropic AI settlement doesn't mean I'm getting money for my book
"Success! Your registration form has been submitted successfully. If you are indeed a member of the class, you will receive formal notice later this year." That's the message you get when you register to receive your personal piece of the $1.5 billion that Anthropic, maker of the popular AI chatbot Claude, has proposed to pay authors after the company was found to have illegally acquired pirated books from notorious online "shadow libraries." The deal, announced last week, would cover roughly 500,000 pirated titles, with rights holders set to be compensated $3,000 per work -- hence that eye-watering $1.5 billion overall price tag. The terms, if finalized, would make for the largest publicly reported copyright payout in U.S. history and a benchmark for other AI cases pending against OpenAI, Meta, Microsoft, Apple, and others. But for authors -- at least for me -- the news feels a bit like buying a scratch ticket, getting excited as the cherries seem to be lining up, and then finding out that you won $60. Maybe. A moment of glee followed by a long, sobering look at the fine print. At a hearing on Monday, Judge William Alsup called the deal proposal "nowhere close to complete," postponing preliminary approval and insisting on basic mechanics before any checks go out to the likes of me: a definitive list of covered works, a feasible notice and claims process, plus clear allocation rules for when multiple parties share ownership. In other words, the court wants a real, workable, detailed process for distributing the funds -- and this within an industry, traditional publishing, in which it's often the case that several parties have claims to rights. Authors retain copyrights to their books, but get publishing deals by licensing those works to companies like Macmillan, Penguin, and Hachette (my publisher). Beyond that, you may also have co-authors, work-for-hire agreements, estates, illustrators, and translators involved. The bottom line here? 
The checks are very much not in the mail, and you shouldn't hold your breath. Then there are the complex details of the judge's legal findings and the mixed feelings they're inspiring across the industry. Alsup's original ruling, which landed this summer, effectively split the legal baby. He held that training on lawfully obtained books is fair use -- a major win for AI developers -- while leaving authors' claims regarding illegally acquired datasets (those from "shadow libraries," etc.) very much alive. This paints the problem as dirty sourcing, not the unauthorized use of the underlying material. The proposed settlement is meant to resolve the piracy angle: Anthropic would delete and pay damages for the unlawful copies, while the deal preserves future claims about model outputs if Claude reproduces protected text. But even if this settlement goes forward, other courts could still reach different conclusions about AI training itself, remaining open to the argument that transformative training on lawfully acquired materials is fair use. So for authors who object to unauthorized use of their books for AI training, the issue is far from resolved, and if anything, looks set back, even as dozens of other cases wind their way through the legal system. Nor is every author happy with the idea of receiving $3,000 per work as compensation. "Authors tend to say [the proposed settlement] is not punitive enough and no amount of money will ever be sufficient," Jane Friedman, a widely respected publishing-industry analyst, wrote in her newsletter, The Bottom Line. "Others, especially those who argue AI training is fair use, say authors and publishers have hit the jackpot and that little or no market harm has been done." For now, eligibility turns on whether Anthropic (allegedly!) downloaded a given book and whether that book was registered with the U.S. Copyright Office within certain relevant timeframes. From what I can tell, I'm eligible. 
My book was published and registered for copyright by my publisher in 2021, one of the years covered, and my book has also turned up in similar cases, such as the one against Facebook parent company Meta. But as yet, there seems to be no firm way of knowing. Many authors have no idea. Worse, some authors are discovering an even more painful wrinkle. One thing the Anthropic case has made clear is that some publishers never registered for copyright of some of their titles -- or if they did, did so too late. The plaintiffs have been ordered to deliver a final list of works and a fleshed-out plan in mid-September, with another hearing shortly after. If the judge is satisfied, preliminary approval could still happen this fall. If not, a trial is set for December. If you're still following me here, and you don't have a law degree, you're probably feeling dizzy, the details whizzing, gnat-like, around your brain. Understandably, that's how a lot of authors and observers feel, too. In other words, the $1.5 billion proposed settlement figure looks substantial. Heck, it may even be substantial. But to get a slice, a lot of pieces need to click together first, and the net number may be a lot closer to your winnings from PTA bingo night. Believe me, I'm not complaining. As I've written before, realizing one's lifelong author dreams tends to be a long, ongoing lesson in humility -- less the stuff of A Star Is Born, more like a combo of Office Space and The Wrestler. For the publishing industry, however, the overall figure is consequential. Until recently, it looked as if large-scale book piracy by AI companies would go unchecked. Now there looks to be some legal template for protecting rights holders. And that's not nothing.
[31]
'We'll see if I can hold my nose and approve it': Judge hates $1.5b AI settlement with book authors so much he's taking 2 weeks to think it over | Fortune
A federal judge on Monday skewered a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots, raising the specter that the case could still end up going to trial. After spending nearly an hour mostly lambasting a settlement that he believes is full of pitfalls, U.S. District Judge William Alsup scheduled another hearing in San Francisco on September 25 to review whether his concerns had been addressed. "We'll see if I can hold my nose and approve it" then, Alsup said before adjourning Monday's hearing. Afterwards, the leader of a publishers group involved in the settlement called some of the judge's revised timetable for approving the deal "troubling," in an acknowledgement that the proposed resolution could unravel. Alsup "demonstrated a lack of understanding of how the publishing industry works," said Maria Pallante, CEO of Association of American Publishers, who attended Monday's hearing but was not asked to speak. The judge's misgivings emerged just a few days after Anthropic and attorneys who filed the class-action lawsuit announced a $1.5 billion settlement that is designed to resolve the pirating claims and avert a trial that had been scheduled to begin in December. Alsup had dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites to help improve its Claude chatbot. The proposed settlement would pay authors and publishers about $3,000 for each of the books covered by the agreement. Justin Nelson, an attorney for the authors, told Alsup that about 465,000 books are on the list of works pirated by Anthropic. The judge said he needed more ironclad assurances that number won't swell to ensure the company doesn't get blindsided by more lawsuits "coming out of the woodwork." 
The judge set a September 15 deadline for a "drop-dead list" of the total books that were pirated. Alsup's main concern centered on how the claims process will be handled in an effort to ensure everyone eligible knows about it so the authors don't "get the shaft." He set a September 22 deadline for submitting a claims form for him to review before the Sept. 25 hearing to review the settlement again. The judge also raised worries about two big groups connected to the case -- the Authors Guild and the Association of American Publishers -- working "behind the scenes" in ways that could pressure some authors to accept the settlement without fully understanding it. Authors Guild CEO Mary Rasenberger sat alongside Pallante during Monday's hearing, and also was not asked to discuss the settlement. The trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- who sued last year also sat in the front row of the court gallery, but didn't address Alsup. In a statement issued after the hearing, the Authors Guild said it was "confused" about Alsup's concern that it might be secretly trying to undermine some of the writers represented in the settlement. The Authors Guild said its work on the settlement is designed "to ensure that authors' interests are fully represented" while contributing its expertise to "the discussions with complete transparency." Before the hearing, Johnson, author of "The Feather Thief" and other books, described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Nelson, the lawyer for the authors, sought to assure Alsup that he and other lawyers in the case were confident the money will be fairly distributed because the case has been widely covered by the media, with some stories landing on the front pages of major newspapers. "This is not an under-the-radar warranty case," Nelson said. 
Alsup made it clear, though, that he was leery about the settlement and warned he may decide to let the case go to trial. "I have an uneasy feeling about all the hangers-on in the shadows," the judge said. In her statement, Pallante said she hopes Alsup will remain flexible as he learns more about how the publishing industry works so the settlement can be preserved. "The court seems to be envisioning a claims process that would be unworkable, and sees a world with collateral litigation between authors and publishers for years to come," Pallante said. "Class actions are supposed to resolve cases, not create new disputes, and certainly not between the class members who were harmed in the first place."
[32]
Attention Writers: Anthropic Might Owe You $3000 (or More!) If It Was Trained Using Your Work
Writing is a wonderful profession... in writers' dreams! In reality, it's a grind that's comically unprofitable for the vast majority, to say nothing of the tortured ennui that comes with having to deal with actually writing, or the thought of actually writing, or the thought of what you aren't right now actually writing. And the economics are more harrowing than ever, as the once halfway-decent living one could make from publishing a book is now going the way of the dinosaur as people read less, scroll more, and AIs hoover up intellectual property and regurgitate it with little to no remuneration. And yet, for once, there's some good-ish news for the writerly set. A landmark lawsuit against Anthropic, the company behind the chatbot Claude -- and whose CEO recently said that it's okay for their product to benefit dictators -- has resulted in a settlement! While it's not quite the trillion-dollar lawsuit our AI overlords had hysterically claimed could bring about the end of their industry, it's no chump change, either, especially for the writers who can now get paid out per work that Claude devoured on its way to becoming a best-in-class product. Per a press release from the plaintiffs, Anthropic is supposed to fork over $3,000 for each work covered in the settlement, of which there are something like 500,000. If owners of more than 500,000 works file claims, Anthropic will pay for those, too; if fewer than 500,000 works are claimed, their authors could get more for each. All in, the settlement is about $1.5 billion, plus interest. Also of note: This doesn't completely shield Anthropic from further lawsuits if someone thinks their work is being used by Anthropic illegally. Even more, Anthropic agreed to delete the pirated works it downloaded. All told, it's big news. The New York Times quoted one lawyer as calling the lawsuit "the AI industry's Napster moment," referring to the file-sharing network that was brought down by a storm of lawsuits from the likes of Metallica and Dr. Dre. 
If any authors think they had their books stolen, they can go to the settlement page here, sign up, and pray they get some cash in the mail. If they want to check whether they stand any chance of getting the money, they can search the LibGen database, from which Anthropic illegally pirated the books in question. Sadly, as the author of this blog post hasn't written any books, he will not be getting any Anthropic lawsuit money. But we know someone who might! Kyle Chayka, a staff writer at The New Yorker whose work zeroes in on the intersection between technology, art, and culture, is the author of not one but two books that popped up in LibGen: 2024's "Filterworld: How Algorithms Flattened Culture" and 2020's "The Longing For Less: Living With Minimalism." Also found in LibGen was the Italian translation of Filterworld. All in, he could stand to make upwards of $12K! We asked Kyle: How does the sum of "$3,000 per class work" feel as a number given that his intellectual property was used to train an AI? Low, high, not worth it on principle, or about right? "It should be a license, really," he replied. "Because the training never goes away. So it could be $5,000 every 5 years, or $1,000 / year as long as they exist. But the price seems about right, honestly -- a decent percentage of most book advances, and about the price of an institutional speaking gig." Fair! But does it make him feel any Type Of Way that his writing will be used to power machines hoovering up natural resources that will one day -- per all the people making it -- crush us like the ants we humans are to them? "Even if AI couldn't be trained on any book ever produced, tech would still find ways around it; the real licensing money will be 'new data' aka fresh journalism," he wrote back. "So outside of my preexisting enmity and hatred for the way that AI is destroying civilization and the planet, I don't think the books make it much worse." Again, tough, but fair. 
Finally -- and perhaps most importantly -- what will he put his potentially $12K settlement payment towards? "A down payment for a stone townhouse in a small Italian village driving distance from the coast where I can preserve what remains of my lifestyle," he said.
[33]
Judge reviews $1.5B Anthropic settlement proposal with authors over pirated books for AI training
A federal judge has begun reviewing a landmark class-action settlement agreement between the artificial intelligence company Anthropic and book authors who say the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors and publishers $1.5 billion, amounting to about $3,000 for each of an estimated 500,000 books covered by the settlement. But U.S. District Judge William Alsup has raised some questions about the details of the agreement and asked representatives of author and publisher groups to appear in court Monday to discuss. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. Johnson, author of "The Feather Thief" and other books, said he planned to attend the hearing on Monday and described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Alsup dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. Had Anthropic and the authors not agreed to settle, the case would have gone to trial in December.
[34]
Anthropic to pay authors $1.5B to settle lawsuit over pirated chatbot training material
NEW YORK -- Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims." 
"We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel. As part of the settlement, the company has also agreed to destroy the original book files it downloaded. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel The Lost Night by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." 
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[35]
AI giant Anthropic to pay $1.5 bn over pirated books
San Francisco (United States) (AFP) - Anthropic will pay at least $1.5 billion to settle a US class action lawsuit over pirated books allegedly used to train its artificial intelligence (AI) models, according to court documents filed Friday. "This settlement sends a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it," said Mary Rasenberger, CEO of the Authors Guild, in a statement supporting the deal. The settlement stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. In a partial victory for Anthropic, US District Court Judge William Alsup ruled in June that the company's training of its Claude AI models with books -- whether bought or pirated -- so transformed the works that it constituted "fair use" under the law. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup wrote in his 32-page decision, comparing AI training to how humans learn by reading books. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections. According to the legal filing, the settlement covers approximately 500,000 books, translating to roughly $3,000 per work -- four times the minimum statutory damages under US copyright law. Under the agreement, Anthropic will destroy the original pirated files and any copies derived from them, though the company retains rights to books it legally purchased and scanned. Anthropic did not immediately respond to requests for comment. The settlement, which requires judicial approval, comes as AI companies face growing legal pressure over their training practices. 
Multiple lawsuits against firms including OpenAI, Meta, and others remain pending, with rightsholders arguing that scraping copyrighted content without permission violates intellectual property law. San Francisco-based Anthropic, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development. The company announced this week that it raised $13 billion in a funding round valuing the AI startup at $183 billion. It will use the capital to expand capacity, deepen safety research, and support international expansion. Anthropic competes with generative AI offerings from Google, OpenAI, Meta, and Microsoft in a race that is expected to attract hundreds of billions of dollars in investment over the next few years. Heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives and has grown rapidly since Claude's initial release in early 2023, with its annual revenue rate quintupling to $5 billion since early this year.
[36]
Anthropic to pay $1.5 billion to settle authors' copyright lawsuit
Anthropic, which operates the Claude artificial intelligence app, has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who alleged the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year, and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. The landmark settlement could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. A judge could approve the settlement as soon as Monday. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." In a statement to CBS News, Anthropic deputy general counsel Aparna Sridhar said the settlement "will resolve the plaintiffs' remaining legacy claims." Sridhar added that the settlement comes after the U.S. District Court for the Northern District of California in June ruled that Anthropic's use of legally purchased books to train Claude did not violate U.S. copyright law. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems," Sridhar said. Anthropic, which was founded by former executives with ChatGPT developer OpenAI, introduced Claude in 2023. Like other generative AI bots, the tool lets users ask natural language questions and then provides summarized answers using AI trained on millions of books, articles and other material. 
If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. 
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it."
[37]
AI company Anthropic agrees to pay $1.5B to settle lawsuit with authors
Anthropic, a major artificial intelligence company, has agreed to pay at least $1.5 billion to settle a copyright infringement lawsuit filed by a group of authors who alleged the platform had illegally used pirated copies of their books to train large language models, according to court documents. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," said Justin Nelson, a lawyer for the authors. The lawsuit, filed in federal court in California last year, centered on roughly 500,000 published works. The proposed settlement amounts to a gross recovery of $3,000 per work, Nelson said in a memorandum to the judge in the case. "This result is nothing short of remarkable," Nelson added. In the lawsuit, the plaintiffs alleged that Anthropic had "committed large-scale copyright infringement" by downloading and "commercially exploiting" books that it had allegedly gotten from pirating websites such as Library Genesis and Pirate Library Mirror. Anthropic had argued that what it was doing fell under "fair use" under U.S. copyright law. In late June, the federal judge assigned to the case ruled that Anthropic's actions constituted fair use because the end result was "exceedingly transformative." But that ruling from Judge William Alsup came with crucial asterisks. He declared that downloading pirated copies of books did not constitute fair use. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," said Aparna Sridhar, the deputy general counsel of Anthropic. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. 
We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Sridhar added. The lawsuit was originally filed by three writers: Andrea Bartz, Charles Graeber and Kirk Wallace Johnson. Bartz is a journalist and novelist; Graeber and Johnson are journalists who have published nonfiction books. Bartz, Graeber and Johnson did not immediately respond to requests for comment. The settlement could shape the trajectory of other pending litigation between AI platforms and published authors. John Grisham, "Game of Thrones" author George R.R. Martin and Jodi Picoult are part of a group of nearly 20 bestselling authors who have sued OpenAI, alleging "systematic theft on a mass scale" for using their works to train ChatGPT and other tools. Anthropic agreed to make four payments into the settlement fund, starting with a $300 million payout due within five business days of the court's sign-off on the terms, according to Nelson. Nelson's memo to Alsup said the proposed $1.5 billion payout is the "minimum size" of the settlement. "If the Works List ultimately exceeds 500,000 works," he said, "then Anthropic will pay an additional $3,000 per work that Anthropic adds to the Works List above 500,000 works."
[38]
Anthropic agrees to pay $1.5 billion to authors whose work trained AI in priciest copyright settlement in U.S. history
Anthropic agreed to pay 500,000 authors $3,000 each for copyright infringement. As reported by the New York Times, AI company Anthropic has agreed to a $1.5 billion settlement in a groundbreaking copyright lawsuit involving some 500,000 authors. Anthropic illegally downloaded the authors' books and used them to train its AI model. The total settlement for this case is the largest for any copyright case in U.S. history, although the payout to each affected author is only $3,000. The lawsuit, filed in August 2024, accused Anthropic of benefiting from pirated copyrighted books, stating, "An essential component of Anthropic's business model -- and its flagship 'Claude' family of large language models (or "LLMs") -- is the largescale theft of copyrighted works." It goes on to highlight the harm being done to authors, which goes beyond the theft of their work: "Anthropic's Claude LLMs compromise authors' ability to make a living, in that the LLMs allow anyone to generate -- automatically and freely (or very cheaply) -- texts that writers would otherwise be paid to create and sell. Anthropic's LLMs, which dilute the commercial market for Plaintiffs' and the Class's works, were created without paying writers a cent. Anthropic's immense success is a direct result of its copyright infringement." As a result of that copyright infringement, Anthropic has offered to pay $1.5 billion to settle the class action lawsuit before it goes to trial. This case sets a standard for the growing wave of copyright lawsuits against AI companies, but it isn't as clear-cut as it might look. Judge William Alsup of the Northern District of California ruled that Anthropic is allowed to use copyrighted books to train its AI models if it obtains those books legally. The settlement is the result of pirating the books, not feeding them to an AI, which has been ruled "fair use." 
Additionally, the settlement Anthropic offered is a historically high sum, but it's a minuscule fraction of the company's overall value, which sits at $183 billion at the time of writing. Earlier this week, Anthropic raised more money in a single round of funding than the entire settlement in this copyright case. Meanwhile, the $3,000 for each author impacted by the class action lawsuit is less than a typical book's advance. It's also worth noting that $1.5 billion is actually far less than Anthropic could have potentially been ordered to pay if it hadn't settled. Willful copyright infringement can result in fines of up to $150,000 per copyrighted work. The pirated data sets Anthropic used contained 7 million books. If Anthropic had been forced to pay the maximum amount for each count of copyright infringement, it could have meant financial ruin for the AI company. Of course, the maximum possible fine would have been unlikely, but Anthropic still might have had to pay much more than it settled for. This lawsuit against Anthropic is just one of several like it. Authors also have ongoing lawsuits with other AI companies, including Microsoft and OpenAI. Back in June, authors lost a similar lawsuit against Meta, but only because the judge ruled that they hadn't offered enough evidence, stating, "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful."
[39]
'Disappointed' judge postpones $1.5bn Anthropic settlement
The judge said lawyers have left ironing out important information for the future. The federal judge overseeing Anthropic's $1.5bn settlement with book authors over a year-long copyright lawsuit has postponed the deal's approval pending the submission of further clarifying information. First reported by Bloomberg Law, district judge William Alsup said that he is concerned class lawyers are striking a deal that will be forced "down the throat of authors". Alsup said he felt "misled" and needs more information about the claim process for class members, adding that "I have an uneasy feeling about hangers on with all this money on the table". The settlement is touted to be the largest payout in the history of US copyright cases and would see a class of 465,000 authors receiving $3,000 per work. The class includes copyright owners whose works were in shadow libraries of pirated materials which Anthropic downloaded. However, as per an order filed 7 September, the district judge said he is "disappointed that counsel have left important questions to be answered in the future", elaborating that information, including the list of works, the list of the class of authors, as well as the claim form, is yet to be ironed out. "Those critical choices will need to be confirmed well before 10 October before preliminary approval can be granted," Alsup ruled. At the hearing, the judge said that the agreement is "nowhere close to complete", and wants the final list of all works by 15 September and a revised settlement process, reportedly by 22 September. The next hearing is scheduled for 25 September. The authors' lawyer, however, told the judge that they "care deeply that every single proper claim gets compensation". Maria Pallante, the president and CEO of the Association of American Publishers, said in a statement that the court "demonstrated a lack of understanding of how the publishing industry works." 
The court "seems to be envisioning a claims process that would be unworkable, and sees a world with collateral litigation between authors and publishers for years to come," she added. According to Bloomberg Law, the judge said that lawyers stop caring once monetary relief is established for claimants in a class action lawsuit. He told the parties to design a claim form with a mechanism for copyright holders to opt in to the settlement. If an owner opts out, the work won't be covered by the settlement. In their legal battle against Anthropic, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson claimed that "largescale theft of copyrighted works" is a key component of the company's business model. While the court found that Anthropic illegally acquired millions of books through shadow libraries, it ruled that the company was protected by fair use when using books to train its AI models. This settlement deals with Anthropic's illegal downloading of pirated material.
[40]
Anthropic to pay $1.5 billion to book authors to settle AI copyright suit
Artificial intelligence startup Anthropic has agreed to pay a record-setting $1.5 billion to a group of book authors and publishers in order to settle a class action lawsuit. The payout is thought to be the largest in the history of U.S. copyright suits and could influence other cases where an AI company has been sued for copyright violations. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," Justin Nelson, an attorney for the plaintiffs, said in a statement. The suit, filed last year, was brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson over copyright infringement. They alleged that Anthropic used the authors' copyrighted books to train its chatbot, Claude. In June, a judge ruled that while Anthropic's training of its AI model on legally acquired books was protected under fair use rules, the startup had illegally acquired books via online libraries that contained bootleg copies of books. The judge concluded that the authors had cause for the case to proceed to a trial, which was slated to start in December.
[41]
Anthropic reaches $1.5 Billion settlement with authors in landmark copyright case
The AI startup agreed to pay authors around $3,000 per book for roughly 500,000 works, after it was accused of downloading millions of pirated texts from shadow libraries to train its large language model, Claude. As part of the deal, Anthropic will also destroy data it was accused of illegally acquiring. The fast-growing AI startup announced earlier this week that it had just raised an additional $13 billion in new venture capital funding in a deal that valued the company at $183 billion. It has also said that it is currently on pace to generate at least $5 billion in revenues over the next 12 months. The settlement amounts to nearly a third of that figure, or more than a tenth of the new funding it just received. While the settlement does not establish a legal precedent, experts said it will likely serve as an anchor figure for the amount other major AI companies will need to pay if they hope to settle similar copyright infringement lawsuits. For instance, a number of authors are suing Meta for using their books without permission. As part of that lawsuit, Meta was forced to disclose internal company emails that suggest it knowingly used a library of pirated books called LibGen -- which is one of the same libraries that Anthropic used. OpenAI and its partner Microsoft are also facing a number of copyright infringement cases, including one filed by the Authors Guild. Aparna Sridhar, deputy general counsel at Anthropic, told Fortune in a statement: "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." 
A lawyer for the authors who sued Anthropic said the settlement would have far-reaching impacts. "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," Justin Nelson, partner with Susman Godfrey LLP and co-lead plaintiffs' counsel on Bartz et al. v. Anthropic PBC, said in a statement. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." The case, which was originally set to go to trial in December, could have exposed Anthropic to damages of up to $1 trillion if the court found that the company willfully violated copyright law. Santa Clara law professor Ed Lee said that if Anthropic lost the trial, it could have faced "at least the potential for business-ending liability." Anthropic essentially concurred with Lee's conclusion, writing in a court filing that it felt "inordinate pressure" to settle the case given the size of the potential damages. The jeopardy Anthropic faced hinged on the means it had used to obtain the copyrighted books, rather than the fact that it had used the books to train AI without the explicit permission of the copyright holders. In June, U.S. District Court Judge William Alsup ruled that using copyrighted books to create an AI model constituted "fair use" for which no specific license was required. But Alsup then focused on the allegation that Anthropic had used digital libraries of pirated books for at least some of the data it fed its AI models, rather than purchasing copies of the books legally. The judge suggested in a decision allowing the case to go to trial that he was inclined to view this as copyright infringement no matter what Anthropic did with the pirated libraries. 
By settling the case, Anthropic has sidestepped an existential risk to its business. However, the settlement is significantly higher than some legal experts were predicting. The motion is now seeking preliminary approval of what's claimed to be "the largest publicly reported copyright recovery in history." James Grimmelmann, a law professor at Cornell Law School and Cornell Tech, called it a "modest settlement." "It doesn't try to resolve all of the copyright issues around generative AI. Instead, it's focused on what Judge Alsup thought was the one egregiously wrongful thing that Anthropic did: download books in bulk from shadow libraries rather than buying copies and scanning them itself. The payment is substantial, but not so big as to threaten Anthropic's viability or competitive position," he told Fortune. He said that the settlement helps establish that AI companies need to acquire their training data legitimately, but does not answer other copyright questions facing AI companies, such as what they need to do to prevent their generative AI models from producing outputs that infringe copyright. In several cases still pending against AI companies -- including a case The New York Times has filed against OpenAI and a case that movie studio Warner Brothers filed just this week against Midjourney, a firm that makes AI that can generate images and videos -- the copyright holders allege the AI models produced outputs that were identical or substantially similar to copyrighted works. "The recent Warner Bros. suit against Midjourney, for example, focuses on how Midjourney can be used to produce images of DC superheroes and other copyrighted characters," Grimmelmann said. While legal experts say the amount is manageable for a firm the size of Anthropic, Luke McDonagh, an associate professor of law at LSE, said the case may have a downstream impact on smaller AI companies if it does set a business precedent for similar claims. 
"The figure of $1.5 billion, as the overall amount of the settlement, indicates the kind of level that could resolve some of the other AI copyright cases. It could also point the way forward for licensing of copyright works for AI training," he told Fortune. "This kind of sum -- $3,000 per work -- is manageable for a firm valued as highly as Anthropic and the other large AI firms. It may be less so for smaller firms." Cecilia Ziniti, a lawyer and founder of legal AI company GC AI, said the settlement was a "Napster to iTunes" moment for AI. "This settlement marks the beginning of a necessary evolution toward a legitimate, market-based licensing scheme for training data," she said. She added the settlement could mark the "start of a more mature, sustainable ecosystem where creators are compensated, much like how the music industry adapted to digital distribution." Ziniti also noted the size of the settlement may force the rest of the industry to get more serious about licensing copyrighted works. "The argument that it's too difficult to track and pay for training data is a red herring because we have enough deals at this point to show it can be done," she said, pointing to deals that news publications, including Axel Springer and Vox, have entered into with OpenAI. "This settlement will push other AI companies to the negotiating table and accelerate the creation of a true marketplace for data, likely involving API authentications and revenue-sharing models."
[42]
AI giant Anthropic to pay $1.5 bn over pirated books
Anthropic will pay at least $1.5 billion to settle a US class action lawsuit over allegedly using pirated books to train its artificial intelligence models, according to court documents filed Friday. "This landmark settlement far surpasses any other known copyright recovery," said plaintiffs' attorney Justin Nelson. "It is the first of its kind in the AI era." The settlement stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. In a partial victory for Anthropic, US District Court Judge William Alsup ruled in June that the company's training of its Claude AI models with books -- whether bought or pirated -- so transformed the works that it constituted "fair use" under the law. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup wrote in his decision, comparing AI training to how humans learn by reading books. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Anthropic deputy general counsel Aparna Sridhar said in response to an AFP inquiry. San Francisco-based Anthropic announced this week that it raised $13 billion in a funding round valuing the AI startup at $183 billion. Anthropic competes with generative artificial intelligence offerings from Google, OpenAI, Meta, and Microsoft in a race that is expected to attract hundreds of billions of dollars in investment over the next few years. 
Thousands of books

According to the legal filing, the settlement covers approximately 500,000 books, translating to roughly $3,000 per work -- four times the minimum statutory damages under US copyright law. Under the agreement, Anthropic will destroy the original pirated files and any copies made, though the company retains rights to books it legally purchased and scanned. "This settlement sends a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it," said Mary Rasenberger, CEO of the Authors Guild, in a statement supporting the deal. The settlement, which requires judicial approval, comes as AI companies face growing legal pressure over their training practices. A US judge in June handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama AI on their creations without permission. District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law.

Apple Intelligence

Meanwhile, Apple on Friday was targeted with a lawsuit by a pair of US authors accusing the iPhone maker of using pirated books to train generative AI built into its lineup of devices. The tech titan's suite of capabilities called "Apple Intelligence" is part of a move to show it is not being left behind in the AI race. "To train the generative-AI models that are part of Apple Intelligence, Apple first amassed an enormous library of data," read the suit. "Part of Apple's data library includes copyrighted works -- including books created by plaintiffs -- that were copied without author consent, credit, or compensation." Apple "scraped" works from sources including "shadow libraries" stocked with pirated books, the suit contends. Apple did not immediately reply to a request for comment. 
The suit filed against Apple by Grady Hendrix, author of "My Best Friend's Exorcism," and Jennifer Roberson of Arizona, whose books include "Sword-Bound," seeks class action status.
[43]
Settlement doubts loom over Anthropic's pirated book case
A $1.5 billion settlement between Anthropic and a group of authors is at risk after U.S. District Judge William Alsup signaled he may reject the deal. The case centers on allegations that nearly 465,000 books were illegally pirated to train Anthropic's Claude chatbot. At a San Francisco hearing, Alsup said he was reluctant to approve the agreement, remarking, "We'll see if I can hold my nose and approve it." He set a September 25 follow-up hearing, with deadlines on September 15 for a finalized book list and September 22 for a claims form. The proposed settlement, announced just days earlier, would pay about $3,000 per pirated book. It was meant to avoid a December trial. Alsup's June ruling found that using copyrighted books for AI training is not automatically illegal, but that Anthropic had pulled millions of titles from pirate sites. Authors' attorney Justin Nelson confirmed the list now includes about 465,000 works. Alsup demanded guarantees that the number will not grow. Alsup raised concerns about whether all eligible authors would be notified and whether trade groups like the Authors Guild and Association of American Publishers were pressuring writers to accept the deal. "I have an uneasy feeling about all the hangers on in the shadows," he said. Authors Guild CEO Mary Rasenberger and AAP CEO Maria Pallante both attended the hearing. Pallante later called the judge's timetable "troubling" and warned his approach could spark new conflicts between authors and publishers instead of resolving them. Lead plaintiffs Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson attended but did not speak in court. Before the hearing, Johnson called the settlement "the beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Nelson argued the high-profile case has already attracted broad attention, ensuring fairness in distributing funds. He stressed this is not "an under-the-radar warranty case." 
With Alsup openly skeptical, the $1.5B deal could collapse, sending Anthropic and the authors toward a December trial that would directly test whether large-scale AI training on pirated books constitutes copyright infringement.
[44]
Judge skewers $1.5B Anthropic settlement with authors in pirated books case over AI training
SAN FRANCISCO -- A federal judge on Monday skewered a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots, raising the specter that the case could still end up going to trial. After spending nearly an hour mostly lambasting a settlement that he believes is full of pitfalls, U.S. District Judge William Alsup scheduled another hearing in San Francisco on September 25 to review whether his concerns had been addressed. "We'll see if I can hold my nose and approve it" then, Alsup said before adjourning Monday's hearing. Afterwards, the leader of a publishers group involved in the settlement called parts of the judge's revised timetable for approving the deal "troubling," in an acknowledgement that the proposed resolution could unravel. Alsup "demonstrated a lack of understanding of how the publishing industry works," said Maria Pallante, CEO of the Association of American Publishers, who attended Monday's hearing but was not asked to speak. The judge's misgivings emerged just a few days after Anthropic and attorneys who filed the class-action lawsuit announced a $1.5 billion settlement that is designed to resolve the pirating claims and avert a trial that had been scheduled to begin in December. Alsup had dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites to help improve its Claude chatbot. The proposed settlement would pay authors and publishers about $3,000 for each of the books covered by the agreement. Justin Nelson, an attorney for the authors, told Alsup that about 465,000 books are on the list of works pirated by Anthropic. The judge said he needed more ironclad assurances that number won't swell to ensure the company doesn't get blindsided by more lawsuits "coming out of the woodwork." 
The judge set a September 15 deadline for a "drop-dead list" of the total books that were pirated. Alsup's main concern centered on how the claims process will be handled in an effort to ensure everyone eligible knows about it so the authors don't "get the shaft." He set a September 22 deadline for submitting a claims form for him to review before the Sept. 25 hearing to review the settlement again. The judge also raised worries about two big groups connected to the case -- the Authors Guild in addition to the Association of American Publishers -- working "behind the scenes" in ways that could pressure some authors to accept the settlement without fully understanding it. Authors Guild CEO Mary Rasenberger sat alongside Pallante during Monday's hearing, and also was not asked to discuss the settlement. The trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- who sued last year also sat in the front row of the court gallery, but didn't address Alsup. In a statement issued after the hearing the Authors Guild said it was "confused" about Alsup's concern that it might be secretly trying to undermine some of the writers represented in the settlement. The Authors Guild said its work on the settlement is designed "to ensure that authors' interests are fully represented" while contributing its expertise to "the discussions with complete transparency." Before the hearing Johnson, author of "The Feather Thief" and other books, described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Nelson, the lawyer for the authors, sought to assure Alsup that he and other lawyers in the case were confident the money will be fairly distributed because the case has been widely covered by the media, with some stories landing on the front pages of major newspapers. "This is not an under-the-radar warranty case," Nelson said. 
Alsup made it clear, though, that he was leery about the settlement and warned he may decide to let the case go to trial. "I have an uneasy feeling about all the hangers on in the shadows," the judge said. In her statement, Pallante said she hopes Alsup will remain flexible as he learns more about how the publishing industry works so the settlement can be preserved. "The court seems to be envisioning a claims process that would be unworkable, and sees a world with collateral litigation between authors and publishers for years to come," Pallante said. "Class actions are supposed to resolve cases, not create new disputes, and certainly not between the class members who were harmed in the first place."
[45]
Judge reviews $1.5B Anthropic settlement proposal with authors over pirated books for AI training
SAN FRANCISCO (AP) -- A federal judge has begun reviewing a landmark class-action settlement agreement between the artificial intelligence company Anthropic and book authors who say the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors and publishers $1.5 billion, amounting to about $3,000 for each of an estimated 500,000 books covered by the settlement. But U.S. District Judge William Alsup has raised some questions about the details of the agreement and asked representatives of author and publisher groups to appear in court Monday to discuss. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. Johnson, author of "The Feather Thief" and other books, said he planned to attend the hearing on Monday and described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Alsup dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. Had Anthropic and the authors not agreed to settle, the case would have gone to trial in December.
[46]
Anthropic Will Pay Out $1.5 Billion Over Pirated AI Training Content
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. 
Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it."
[47]
Judge Reviews $1.5B Anthropic Settlement Proposal With Authors Over Pirated Books for AI Training
SAN FRANCISCO (AP) -- A federal judge has begun reviewing a landmark class-action settlement agreement between the artificial intelligence company Anthropic and book authors who say the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors and publishers $1.5 billion, amounting to about $3,000 for each of an estimated 500,000 books covered by the settlement. But U.S. District Judge William Alsup has raised some questions about the details of the agreement and asked representatives of author and publisher groups to appear in court Monday to discuss. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. Johnson, author of "The Feather Thief" and other books, said he planned to attend the hearing on Monday and described the settlement as the "beginning of a fight on behalf of humans that don't believe we have to sacrifice everything on the altar of AI." Alsup dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. Had Anthropic and the authors not agreed to settle, the case would have gone to trial in December.
[48]
Anthropic to pay $1.5bn to settle copyright lawsuit with authors
The settlement will see Anthropic paying out $3,000 to around 500,000 authors. In what looks to be the largest payout in the history of US copyright cases, Anthropic will pay $1.5bn to settle with book authors whose copyrighted materials were illegally downloaded to train the AI-chatbot Claude. A few weeks ago, the generative AI (GenAI) giant decided to settle with a trio of authors who, in their lawsuit, claimed that "largescale theft of copyrighted works" is a key component of Anthropic's business model. They said that the company took multiple steps to hide the full extent of its copyright theft. The case was certified as a class action in July, the first in a copyright litigation against AI companies. The class included copyright owners whose works were in LibGen and PiLiMi - shadow libraries of pirated materials - which were downloaded by Anthropic. The presiding judge found that Anthropic illegally acquired millions of books through shadow libraries, but ruled that the company was protected by fair use when using books to train its AI models. At the time, Anthropic told courts that a settlement could kill the company, claiming that the plaintiffs' demands created a "death knell" situation, regardless of the case's legal merit. It was estimated that damages could easily run into the hundreds of billions. This settlement, however, deals with materials the company downloaded through the pirated websites and will see Anthropic paying $3,000 to approximately 500,000 authors. The company says, however, that it did not use any of the pirated material to build AI tech that was publicly released. Moreover, authors will retain all rights and legal claims regarding any books not on the settlement list and can still sue the company if they believe the company is reproducing their works without proper approval. Anthropic has agreed to destroy the files of all works it downloaded from the shadow libraries. 
In a statement to news outlets, Anthropic's deputy general counsel Aparna Sridhar reiterated the court's judgement that the company's approach to training AI was fair use. "Today's (5 September) settlement, if approved, will resolve the plaintiffs' remaining legacy claims," Sridhar added. The plaintiffs' co-lead counsel, Justin Nelson, said: "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners." Maria Pallante, the president and CEO of the Association of American Publishers, said that the settlement "provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models." The settlement comes just as Anthropic raised $13bn in a funding round that now values the company at more than $180bn. However, Anthropic has had success in other copyright disputes recently. Earlier this year, a California judge sided with the AI company by denying a motion for injunction filed by Universal Music Group, Concord and Capitol CMG, among several other large music publishers, that would have stopped the start-up from using their song lyrics to train its AI models.
[49]
'Uneasy Feeling With All of This Money on the Table': Judge Blasts Anthropic's $1.5 Billion Book Copyright Settlement
A federal judge has cast doubt on the proposed $1.5 billion copyright settlement between AI company Anthropic and book authors represented in a class action lawsuit, delaying its approval. Judge William Alsup declined to approve the settlement on Monday, citing concerns that authors might be excluded from meaningful input as negotiations unfolded behind closed doors. "I have an uneasy feeling about hangers-on with all this money on the table," Alsup said, per Bloomberg. Alsup noted the settlement was "nowhere close to complete" and required further clarification on vital aspects, including how claims would be filed, how class members would be notified, and which works were covered. Without these, Alsup argued, the deal could unfairly disadvantage authors and lead to future litigation. The lawsuit originates from Anthropic's alleged downloading of millions of copyrighted books to train its AI models -- a claim echoing similar legal efforts against major tech firms like OpenAI and Meta. Anthropic proposed paying about $3,000 per book to 500,000 authors in the suit. Alsup said there were many "important questions" that need answering before approving the settlement, including a complete list of books and a clearly defined process for notifying potential class members, adding that class members typically "get the shaft" once deals are made and "attorneys stop caring." He wants clear and early guidance provided to authors, giving them proper time to opt in or out of the suit. All legal eyes are on what happens next as this suit is thought to provide a template for future AI copyright litigation.
[50]
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated books used to train AI chatbots
NEW YORK -- Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims." 
"We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel. As part of the settlement, the company has also agreed to destroy the original book files it downloaded. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." 
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, earlier this week put its value at $183 billion after raising another $13 billion in investments. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs. The settlement could influence other disputes, including an ongoing lawsuit by authors and newspapers against OpenAI and its business partner Microsoft, and cases against Meta and Midjourney. And just as the Anthropic settlement terms were filed, another group of authors sued Apple on Friday in the same San Francisco federal court. "This indicates that maybe for other cases, it's possible for creators and AI companies to reach settlements without having to essentially go for broke in court," said Long, the legal analyst.
The industry, including Anthropic, had largely praised Alsup's June ruling because he found that training AI systems on copyrighted works so chatbots can produce their own passages of text qualified as "fair use" under U.S. copyright law because it was "quintessentially transformative." Comparing the AI model to "any reader aspiring to be a writer," Alsup wrote that Anthropic "trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different." But documents disclosed in court showed Anthropic employees' internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles. With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn't undo the earlier piracy, according to the judge.
[51]
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material - The Economic Times
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims." 
"We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel. As part of the settlement, the company has also agreed to destroy the original book files it downloaded. Books are known to be important sources of data - in essence, billions of words carefully strung together - that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitised books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award - approximately $3,000 per work - likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." 
The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[52]
Jeff Bezos-Backed Anthropic To Pay $1.5 Billion In AI Copyright Settlement With Authors After $13 Billion Funding Round - Alphabet (NASDAQ:GOOG), Amazon.com (NASDAQ:AMZN)
Jeff Bezos' Amazon.com Inc. AMZN and Alphabet Inc. GOOG GOOGL-backed Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit from authors who accused the artificial intelligence company of using their books without permission to train its Claude chatbot. Historic Copyright Settlement Sets AI Precedent The proposed settlement marks the first resolution in a wave of copyright lawsuits targeting tech companies including OpenAI, Microsoft Corp. MSFT and Meta Platforms Inc. META over their use of copyrighted material for AI training, Reuters reported. U.S. District Judge William Alsup in San Francisco must approve the deal announced in August. The settlement fund provides $3,000 per book for approximately 500,000 downloaded titles, with potential growth if additional works are identified. Authors Claim Victory in 'Piracy' Battle The plaintiffs' lawyers said the settlement clearly shows that using copyrighted material from pirate sites is unacceptable, calling it the biggest copyright payout ever and the first major case of its kind in the age of AI. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action last year, alleging Anthropic unlawfully used millions of pirated books to train Claude. Anthropic denied wrongdoing and said it made fair use of the material. Company Valuation Soars Amid Legal Resolution The settlement comes as Anthropic's valuation surged to $183 billion following Tuesday's $13 billion funding round co-led by Fidelity Management & Research and Lightspeed Venture Partners. The company's run-rate revenue jumped from $1 billion in early 2025 to over $5 billion by August. Under the agreement, Anthropic will destroy downloaded book copies but could still face claims related to AI-generated content, according to the report.
The company said it remains "committed to developing safe AI systems" without admitting liability.
[53]
Anthropic To Pay $1.5 Billion In Record Copyright Settlement After Authors Sue Over Pirated Books Used To Train AI Chatbot Claude
As artificial intelligence models grow more capable, questions keep surfacing about how large language models are trained, whether the data was collected ethically, and what accountability AI firms should bear. Around this time last year, Anthropic was sued by a group of authors for allegedly using pirated copies of their work to train the AI chatbot Claude. That copyright case is now taking shape, as Anthropic has agreed to pay $1.5 billion in settlement. In what is being called a historic development in copyright settlements, Anthropic has agreed to pay $1.5 billion to resolve a major copyright case filed by authors claiming the company used pirated copies of their books to train its large language models. The settlement is pending the judge's approval, expected at a scheduled hearing on September 8, 2025. This would make it not only the largest copyright settlement to date in the U.S., but also the first resolution of its kind in the context of rapidly evolving artificial intelligence. The class-action lawsuit asserts that Anthropic used hundreds of thousands of copyrighted works accessed through illicit downloads rather than licensed sources. According to a report by The New York Times, about 500,000 works are covered by the case, and the plaintiffs are expected to receive compensation of $3,000 per work. In addition to the financial settlement, the company has agreed to destroy all illegally obtained materials in its training datasets so those resources cannot be used again. The case helps draw a legal distinction in the field of artificial intelligence, especially regarding how AI models are trained and what material qualifies as fair use in that process. While relying on legitimately purchased books is not considered illegal, companies cannot use pirated copies for their models.
This clarity should also help set the tone for similar lawsuits filed against AI companies. From the perspective of authors and publishers, the outcome is considered a landmark victory, given how AI firms have profited from the work of others without permission. By settling the case rather than litigating it for years, Anthropic also put itself on firmer legal footing and set a precedent for how companies might handle such cases in the future.
[54]
Anthropic Proposes $1.5 Billion Settlement Over 'Stealing' Books, the Largest Copyright Payout Ever
The settlement "sends a powerful message" to other AI startups and creators, Justin Nelson, the plaintiffs' attorney, stated. Leading AI startup Anthropic, which just raised $13 billion earlier this month at a $183 billion valuation, has agreed to settle a class action lawsuit with a group of authors and publishers for at least $1.5 billion. On Friday, the startup proposed paying about $3,000 per book to 500,000 authors. It would be the largest copyright payout in history, if approved, per The New York Times. The case was filed last year by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who alleged in the class action suit Bartz v. Anthropic that Anthropic had illegally used their work to train its AI models, downloading copyrighted books for free from pirated datasets. The ruling could also influence the more than 40 other copyright lawsuits filed against AI companies across the country. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," Justin Nelson, the plaintiffs' attorney, told CNBC. The settlement follows a ruling in June on the case from Judge William Alsup of the U.S. District Court for the Northern District of California. The judge ruled that Anthropic's AI training with copyrighted books was "fair use" because it was "transformative" and turned the books into something new. "Like any reader aspiring to be a writer, Anthropic's [AI] trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," Alsup wrote in the ruling. However, Alsup also determined that Anthropic illegally downloaded countless books from sites like Pirate Library Mirror and Library Genesis to train its AI models.
The judge ruled that Anthropic's executives were aware that these online libraries contained pirated material. Anthropic decided to "steal" the books instead of buying them from reputable sellers, he determined. "Anthropic had no entitlement to use pirated copies for its central library," the ruling reads. The authors and Anthropic chose to settle after the judge's ruling. As part of the proposed settlement, Anthropic agreed to remove pirated works from its database and stated that it did not use pirated books to develop AI that it has now made available to the public. If approved, the settlement "will resolve the plaintiffs' remaining legacy claims," Anthropic's Deputy General Counsel, Aparna Sridhar, said in a statement, per NYT. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," she added. Though the settlement amount is record-breaking, Anthropic can likely afford it. The startup has raised more than $27 billion since its inception in 2021. According to the June ruling, Anthropic made over one billion dollars in annual revenue last year from its AI chatbot Claude, which asks for $20 to $100 per month for paid subscriptions and also offers a free tier. Anthropic faces another legal battle against Reddit. In June, Reddit sued Anthropic for using the site for training data without permission, marking the first time a big tech company has filed a complaint against an AI startup over the material it uses to train AI models.
[55]
AI Giant Anthropic to Pay $1.5 Billion to Authors in Settlement
Anthropic will pay $1.5 billion to settle a lawsuit from authors, who accused the Amazon-backed company of illegally downloading and copying their books to teach its AI system, in one of the first deals reached by creators over novel legal issues raised by the technology. The settlement was reached on Aug. 26. Lawyers for authors on Friday notified the court of the terms of the deal. Authors who opt into the agreement will be eligible to share in the $1.5 billion settlement fund, with payments of $3,000 per book allegedly used by Anthropic for training.
[56]
Anthropic Agrees to $1.5 Billion Settlement in AI Copyright Case | PYMNTS.com
The artificial intelligence (AI) startup was sued last year by a group of authors who accused the company of illegally accessing their books. Under the terms of this settlement, Anthropic will pay around $3,000 per book as well as interest, and will also destroy the datasets that contain the allegedly pirated material, CNBC reported Friday (Sept. 5), citing a court filing. The report noted that the case had caught the attention of AI startups and media companies trying to get a sense of the copyright infringement atmosphere in the AI age. Assuming the settlement is approved, it will be the largest publicly reported copyright recovery on record, the court filing said. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," Justin Nelson, the attorney for the plaintiffs, said in a statement to CNBC. PYMNTS has contacted Anthropic for comment but has not yet gotten a reply. The suit, brought by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, had accused Anthropic of "largescale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets." A judge had ruled in June that Anthropic's use of books to train its models fell under the "fair use" umbrella, but ordered a trial to determine if the company had infringed on copyright by using works from the databases Library Genesis and Pirate Library Mirror. The news follows a report from late last month that Anthropic and the authors had agreed to a settlement ahead of that trial. As PYMNTS wrote in July, the ruling in Anthropic's case, and a similar decision involving Meta, appeared to be emboldening tech firms.
"Copyright has long protected the right not just to profit from a work, but to decide how and when it is used," she said. "Now, that control is slipping away."
[57]
Anthropic to Pay $1.5B in Landmark Copyright Case
"If approved [by the court], this landmark settlement will be the largest publicly reported copyright recovery in history," notes the motion filed by the authors as part of preliminary approval of the class settlement in the closely watched Bartz v. Anthropic case. Anthropic agreed to pay at least $1.5 billion plus interest to class members to settle the lawsuit, as proposed in the motion submitted to the court. However, on September 9, Judge William Alsup criticized counsel, saying he was "disappointed" that key issues had been left unresolved, including the Works List, the Class List, and the Claim Form, which identify the books in the settlement class, the members eligible for compensation, and the notices sent to claimants, respectively. In 2024, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a lawsuit against Anthropic, alleging that the company unlawfully used their work to train its large language models (LLMs). The lawsuit later acquired class action status. During court proceedings, it was revealed that in 2021, Anthropic co-founder Ben Mann downloaded Books3, a database of more than 196,000 books, which contained pirated works he knew were unauthorized. Similarly, he also downloaded at least 5 million pirated books from LibGen, a popular shadow library banned in India. Later, in 2022, the company downloaded 2 million pirated books from Pirate Library Mirror. Both databases contained at least two titles from each author in the infringement lawsuit, which later turned into a class action case. A class action lawsuit allows one or more people to sue on behalf of a larger group with similar claims. In June 2025, the U.S. District Court for the Northern District of California ruled that using copyrighted works to train Anthropic's AI models qualifies as fair use under U.S. copyright law, citing the "purpose and character" of the use as transformative. Section 107 of the U.S. Copyright Act defines the limits of fair use.
Courts consider four factors in deciding whether a use qualifies: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use on the potential market for the work. Although the court ruled that using copyrighted books could qualify as fair use, it ordered a separate trial on the illegal use of pirated books. In August 2025, Anthropic agreed to settle the case; the $1.5 billion settlement resolves the claims that would have gone to that trial. "Plaintiffs' core allegation is that Anthropic committed large-scale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets," reads the court filing. To address this, the complainants proposed a set of key principal terms. Referring to the $1.5 billion compensation to authors, the court filing noted: "On a per-work basis, the settlement amount is 4 times larger than the $750 statutory damages amount that a jury could award and 15 times larger than the $200 amount if Anthropic were to prevail on its defense of innocent infringement." The class action reached this point after several submissions to the court, including "Attorneys'-Eyes-Only" source code documents reviewed in a secure environment and the inspection of hundreds of gigabytes of Slack exports, Notion wikis, and Google Vault data. Authors Guild CEO Mary Rasenberger said, "This settlement is a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." Similarly, Association of American Publishers CEO and former Register of Copyrights Maria Pallante said the settlement benefits everyone: "It provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources to use as the building blocks for their businesses."
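The per-work multiples quoted in the filing follow directly from the fund size and class size reported above. A minimal sketch of that arithmetic, assuming the roughly 500,000-work estimate cited elsewhere in these reports:

```python
# Per-work arithmetic behind the multiples cited in the filing.
fund = 1_500_000_000          # settlement fund in USD
works = 500_000               # estimated works covered by the settlement
per_work = fund / works       # dollars per work

statutory_minimum = 750       # statutory damages amount a jury could award
innocent_infringement = 200   # amount if innocent infringement were proven

print(per_work)                          # 3000.0 -> ~$3,000 per work
print(per_work / statutory_minimum)      # 4.0  -> "4 times larger"
print(per_work / innocent_infringement)  # 15.0 -> "15 times larger"
```

Note that the filing's own draft Works List is smaller (about 465,000 works), which would push the per-work figure slightly above $3,000.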
Under the settlement, Anthropic will pay at least $1.5 billion into an interest-bearing escrow account insured by the Federal Deposit Insurance Corporation (FDIC), which will distribute payments to class members on a per-work basis; the company will make the payment in four installments. In exchange for settlement payments, class members agree not to sue Anthropic for past use, downloading, or copying of the books on the Works List, including their use to train AI models. For each work, the list will include the title, author, publisher, an ISBN and/or ASIN, and a U.S. copyright registration number. If a class member owns additional works, the settlement covers only those on the Works List. The complainants already provided Anthropic with a draft Works List of about 465,000 works on September 1. Anthropic must propose any changes by September 15, 2025. Both sides will then have two weeks to make further revisions and must submit a final joint Works List, or any disputes over it, by October 10, 2025. The settlement excludes any use, copying, or distribution of works after August 25, 2025, as well as any claims related to outputs from Anthropic's AI, whether past or future. The complainants propose that JND Legal, a national class-action administrator, serve as the settlement administrator to notify the class, carry out the settlement terms, and open an interest-bearing escrow account at an FDIC-insured bank to hold Anthropic's payments. The filing also lays out a notice plan for reaching class members. Additionally, the administrator must establish the Author-Publisher Working Group to advise and propose ways to efficiently address intra-work distributions. The group will be responsible for designing a claims form with the information needed for allocation, recommending adjustments to the notice, and proposing procedures to reduce the administrative burden of claims administration.
[58]
Amazon-backed Anthropic agrees to pay authors $1.5B to settle...
Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked US District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. 
The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
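The "hundreds of billions" figure is easy to reconstruct: U.S. statutory damages for copyright infringement run from $750 to $30,000 per work, and up to $150,000 per work for willful infringement (17 U.S.C. § 504(c)). A rough sketch of the exposure across the roughly 7 million pirated books cited in Alsup's ruling, with the simplifying assumption that a single per-work award applies uniformly across the class:

```python
# Statutory damages per work under 17 U.S.C. § 504(c).
STATUTORY_MIN = 750        # minimum award per infringed work
WILLFUL_MAX = 150_000      # maximum per work for willful infringement

pirated_works = 7_000_000  # books Alsup found Anthropic had downloaded

floor = pirated_works * STATUTORY_MIN  # $5.25 billion at the statutory floor
ceiling = pirated_works * WILLFUL_MAX  # $1.05 trillion at the willful maximum

print(f"${floor:,} to ${ceiling:,}")
```

Even awards well below the willful maximum land in the hundreds of billions, which is why analysts described a loss at trial as potentially company-ending.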
[59]
Anthropic to pay US$1.5 billion to settle lawsuit over pirated chatbot training material
NEW YORK -- Artificial intelligence company Anthropic has agreed to pay US$1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about US$3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
[60]
Anthropic agrees to pay $1.5 billion to settle author class action
(Reuters) -Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. 
The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances." (Reporting by Blake Brittain and Mike Scarcella in Washington; Editing by David Bario, Lisa Shumaker and Matthew Lewis)
Anthropic agrees to pay $1.5 billion to authors for pirating books to train AI, marking the largest copyright settlement in US history. The case raises questions about AI training practices and fair compensation for creators.

Anthropic, the company behind the AI chatbot Claude, has agreed to a $1.5 billion settlement in a class-action lawsuit over pirating books to train its AI models [1][3]. This settlement, believed to be the largest in U.S. copyright litigation history, covers approximately 500,000 works and promises a minimum payout of $3,000 per work to affected authors and publishers [3].

U.S. District Judge William Alsup, overseeing the case, has expressed significant concerns about the proposed settlement [1][5]. He criticized the rushed nature of the deal and its potential to be shoved "down the throat of authors." Alsup highlighted several unresolved issues, including the finalization of the Works List, Class List, Claim Form, and processes for notification, allocation, and dispute resolution [1].

While this settlement addresses the piracy aspect, it doesn't resolve the broader question of whether training AI on copyrighted works is legal [2][4]. In a separate ruling, Judge Alsup determined that using legally obtained books for AI training falls under fair use, setting a precedent that could influence future cases [2].

The settlement has sparked mixed reactions within the literary and tech communities. The Authors Guild hailed it as "an excellent result for authors, publishers, and rightsholders," emphasizing the consequences for companies pirating works for AI training [3]. However, some critics argue that the settlement amount is relatively small compared to Anthropic's recent $13 billion funding round and overall valuation [1][2].

This case highlights the ongoing tension between AI development and copyright protection. As AI companies invest billions in infrastructure and development, questions arise about fair compensation for the creators whose works contribute to these systems [4]. The outcome of this settlement and similar cases could shape the future landscape of AI training practices and copyright law in the digital age.
[2]
27 Aug 2025•Policy and Regulation