25 Sources
[1]
"First of its kind" AI settlement: Anthropic to pay authors $1.5 billion
Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models. In a press release provided to Ars, the authors confirmed that the settlement is "believed to be the largest publicly reported recovery in the history of US copyright litigation." The settlement covers 500,000 works that Anthropic pirated for AI training; if a court approves it, each author will receive $3,000 per work that Anthropic stole. "Depending on the number of claims submitted, the final figure per work could be higher," the press release noted. Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted. Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action -- Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber -- confirmed that if the "first of its kind" settlement "in the AI era" is approved, the payouts will "far" surpass "any other known copyright recovery." "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," Nelson said. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." Groups representing authors celebrated the settlement on Friday. The CEO of the Authors Guild, Mary Rasenberger, said it was "an excellent result for authors, publishers, and rightsholders generally." Perhaps most critically, the settlement shows "there are serious consequences when" companies "pirate authors' works to train their AI, robbing those least able to afford it," Rasenberger said.
[2]
Screw the money -- Anthropic's $1.5B copyright settlement sucks for writers | TechCrunch
Around half a million writers will be eligible for a payday of at least $3,000, thanks to a historic $1.5 billion settlement in a class action lawsuit that a group of authors brought against Anthropic. This landmark settlement marks the largest payout in the history of U.S. copyright law, but this isn't a victory for authors -- it's yet another win for tech companies. Tech giants are racing to amass as much written material as possible to train their LLMs, which power groundbreaking AI chat products like ChatGPT and Claude -- the same products that are endangering the creative industries, even if their outputs are milquetoast. These AIs can become more sophisticated when they ingest more data, but after scraping basically the entire internet, these companies are literally running out of new information. That's why Anthropic, the company behind Claude, pirated millions of books from "shadow libraries" and fed them into its AI. This particular lawsuit, Bartz v. Anthropic, is one of dozens filed against companies like Meta, Google, OpenAI, and Midjourney over the legality of training AI on copyrighted works. But writers aren't getting this settlement because their work was fed to an AI -- this is just a costly slap on the wrist for Anthropic, a company that just raised another $13B, because it illegally downloaded books instead of buying them. In June, federal judge William Alsup sided with Anthropic and ruled that it is, indeed, legal to train AI on copyrighted material. The judge argues that this use case is "transformative" enough to be protected by the fair use doctrine, a carve-out of copyright law that hasn't been updated since 1976. "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," the judge said. It was the piracy -- not the AI training -- that moved Judge Alsup to bring the case to trial, but with Anthropic's settlement, a trial is no longer necessary. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Aparna Sridhar, deputy general counsel at Anthropic, in a statement. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." As dozens more cases over the relationship between AI and copyrighted works go to court, judges now have Bartz v. Anthropic to reference as a precedent. But given the ramifications of these decisions, maybe another judge will arrive at a different conclusion.
[3]
Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement
Anthropic will pay at least $3,000 for each copyrighted work that it pirated. The company downloaded unauthorized copies of books in early efforts to gather training data for its AI tools. Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. The amount is well below what Anthropic may have had to pay if it had lost the case at trial. Experts said the plaintiffs may have been awarded at least billions of dollars in damages, with some estimates placing the total figure over $1 trillion. This is the first class action legal settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," says co-lead plaintiffs' counsel Justin Nelson of Susman Godfrey LLP. Anthropic is not admitting any wrongdoing or liability. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Anthropic deputy general counsel Aparna Sridhar said in a statement. The lawsuit, which was originally filed in 2024 in the US District Court for the Northern District of California, was part of a larger ongoing wave of copyright litigation brought against tech companies over the data they used to train artificial intelligence programs. Authors Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber alleged that Anthropic trained its large language models on their work without permission, violating copyright law. This June, senior district judge William Alsup ruled that Anthropic's AI training was shielded by the "fair use" doctrine, which allows unauthorized use of copyrighted works under certain conditions. It was a win for the tech company, but came with a major caveat. Anthropic had relied on a corpus of books pirated from so-called "shadow libraries," including the notorious site LibGen, and Alsup determined that the authors should still be able to bring Anthropic to trial in a class action over pirating their work. "Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). Authors argue Anthropic should have paid for these pirated library copies. This order agrees," Alsup wrote in his summary judgement. It's unclear how the literary world will respond to the terms of the settlement. Since this was an "opt-out" class action, authors who are eligible but dissatisfied with the terms will be able to request exclusion to file their own lawsuits. Notably, the plaintiffs filed a motion today to keep the "opt-out threshold" confidential, which means that the public will not have access to the exact number of class members who would need to opt out for the settlement to be terminated. This is not the end of Anthropic's copyright legal challenges. 
The company is also facing a lawsuit from a group of major record labels, including Universal Music Group, which alleges that the company used copyrighted lyrics to train its Claude chatbot. The plaintiffs are now attempting to amend their case to include allegations that Anthropic used the peer-to-peer file sharing service BitTorrent to illegally download songs, and their lawyers recently stated in court filings that they may file a new lawsuit about piracy if they are not permitted to amend the current complaint.
[4]
Anthropic Will Pay $1.5 Billion to Authors in Landmark AI Piracy Lawsuit
Anthropic will pay $1.5 billion to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its Claude AI models. The settlement was announced Aug. 29, as the parties in the lawsuit filed a motion with the 9th US Circuit Court of Appeals indicating they had reached an agreement. "This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era," Justin Nelson, lawyer for the authors, told CNET. "It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners. This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong." The settlement still needs to be approved by the court, which could happen at a hearing on Monday, Sept. 8. Authors in the class could receive approximately $3,000 per pirated work, according to their attorneys' estimates. They expect the case will include at least 500,000 works, with Anthropic paying an additional $3,000 for any materials added to the case. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," Aparna Sridhar, Anthropic's deputy general counsel, told CNET. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." This settlement is the latest update in a string of legal moves and rulings between the AI company and authors. Earlier this summer, US Senior District Court Judge William Alsup ruled Anthropic's use of the copyrighted materials was justifiable as fair use -- a concept in copyright law that allows people to use copyrighted content without the rights holder's permission for specific purposes, like education. The ruling was the first time a court sided with an AI company and said its use of copyrighted material qualified as fair use, though Alsup said this may not always be true in future cases. Two days after Anthropic's victory, Meta won a similar case under fair use. Alsup's ruling also revealed that Anthropic systematically acquired and destroyed thousands of used books to scan them into a private, digitized library for AI training, a practice he found to be fair use. It was the separate claim over the pirated downloads that was set for a second trial, which Anthropic has now decided to settle out of court. In class action suits, the terms of a settlement need to be reviewed and approved by the court. The settlement means both groups "avoid the cost, delay and uncertainty associated with further litigating the case," Christian Mammen, an intellectual property lawyer and San Francisco office managing partner at Womble Bond Dickinson, told CNET. "Anthropic can move forward with its business without being the first major AI platform to have one of these copyright cases go to trial," Mammen said. "And the plaintiffs can likely receive the benefit of any financial or non-financial settlement terms sooner. If the case were litigated through trial and appeal, it could last another two years or more."
Copyright cases like these highlight the tension between creators and AI developers. AI companies have been pushing hard for fair use exceptions as they gobble up huge swaths of data to train their models and don't want to pay or wait to license them. Without legislation guiding how companies can develop and train AI, court cases like these have become important in shaping the future of the products people use daily. "The terms of this settlement will likely become a data point or benchmark for future negotiations and, possibly, settlements in other AI copyright cases," said Mammen. Every case is different and needs to be weighed on its merits, he added, but it still could be influential. There are still big questions about how copyright law should be applied in the age of AI. Just like how we saw Alsup's Anthropic analysis referenced in Meta's case, each case helps build precedent that guides the legal guardrails and green lights around this technology. The settlement will bring this specific case to an end, but it doesn't give any clarity to the underlying legal dilemmas that AI raises. "This remaining uncertainty in the law could open the door to a further round of litigation," Mammen said, "involving different plaintiffs and different defendants, with similar legal issues but different facts."
[5]
Anthropic to Pay $1.5 Billion to Settle Author Copyright Claims
By Shirin Ghaffary, Annelise Levy (Bloomberg Law) and Aruni Soni (Bloomberg Law) Anthropic PBC will pay $1.5 billion to resolve an authors' copyright lawsuit over the AI company's downloading of millions of pirated books, one of the largest settlements over artificial intelligence and intellectual property to date. A request for preliminary approval of the accord, involving one of the fastest-growing AI startups, was filed Friday with a San Francisco federal judge who had set the closely watched case for trial in December.
[6]
AI start-up Anthropic settles landmark copyright suit for $1.5bn
AI start-up Anthropic has agreed to pay $1.5bn to settle a copyright lawsuit over its use of pirated texts, setting a precedent for tech companies facing intellectual property cases from authors and publishers. The settlement, which must be approved by the San Francisco federal judge overseeing the case, would be "the largest publicly reported copyright recovery in history", according to a court filing on Friday. The class action suit was brought by authors who claimed Anthropic had downloaded 465,000 books and other texts from "pirated websites" including Library Genesis and Pirate Library Mirror, which it then used to train its large language models. Failure to reach an agreement would have led to a trial, with the prospect of damages of up to $1tn. That would have bankrupted Anthropic, a four-year-old start-up backed by Amazon and Google that was recently valued at $170bn. The case is among several facing artificial intelligence companies, including OpenAI and Meta, alleging they have improperly used copyrighted works to train their models. The results will help determine how authors are compensated for the use of their works and could have significant ramifications for how AI companies train their models and the costs of developing them. Mary Rasenberger, chief executive of the Authors Guild, said Friday's settlement sent a "strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it". Anthropic and other AI companies have claimed that training models on copyrighted books is fair use, arguing that their models transform the original work into something with a new meaning. In June, the California district court ruled that Anthropic's use of some copyrighted works in such a way was fair. But it determined that storing pirated works was "inherently, irredeemably infringing", teeing up Friday's settlement. Friday's settlement also means Anthropic will have to destroy the datasets it had downloaded from Library Genesis and Pirate Library Mirror. "In June, the district court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Anthropic's deputy general counsel Aparna Sridhar in a statement. "We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery and solve complex problems," she added.
[7]
Anthropic agrees to pay $1.5 billion to settle author class action
Sept 5 (Reuters) - Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
[8]
Anthropic tells US judge it will pay $1.5 billion to settle author class action
Sept 5 (Reuters) - Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft (MSFT.O) and Meta Platforms (META.O) over their use of copyrighted material to train generative AI systems. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon (AMZN.O) and Alphabet (GOOGL.O), unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
[9]
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material
NEW YORK (AP) -- Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. 
"On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[10]
Anthropic will pay a record-breaking $1.5 billion to settle copyright lawsuit with authors
Writers involved in the case will reportedly receive $3,000 per work. Anthropic will pay a record-breaking $1.5 billion to settle a class action piracy lawsuit brought by authors and publishers. The settlement is the largest-ever payout for a copyright case in the United States. The AI company behind the Claude chatbot reached a settlement in the case last week, but terms of the agreement weren't disclosed at the time. Now, The New York Times reports that the 500,000 authors involved in the case will get $3,000 per work. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," Anthropic's Deputy General Counsel Aparna Sridhar said in a statement. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems."
[11]
Anthropic agrees to pay $1.5 billion to settle authors' copyright lawsuit
If Anthropic's settlement is approved, it will be the largest publicly reported copyright recovery in history, the filing said. Anthropic has agreed to pay at least $1.5 billion to settle a class action lawsuit with a group of authors, who claimed the artificial intelligence startup had illegally accessed their books. The company will pay roughly $3,000 per book plus interest, and agreed to destroy the datasets containing the allegedly pirated material, according to a filing on Friday. The lawsuit against Anthropic has been closely watched by AI startups and media companies that have been trying to determine what copyright infringement means in the AI era. If Anthropic's settlement is approved, it will be the largest publicly reported copyright recovery in history, according to the filing. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong," Justin Nelson, the attorney for the plaintiffs, told CNBC in a statement. Anthropic didn't immediately respond to CNBC's request for comment. The lawsuit, filed in the U.S. District Court for the Northern District of California, was brought last year by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson. The suit alleged that Anthropic had carried out "largescale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets," the filing said. In June, a judge ruled that Anthropic's use of books to train its AI models was "fair use," but ordered a trial to assess whether the company infringed on copyright by obtaining works from the databases Library Genesis and Pirate Library Mirror. The case was slated to proceed to trial in December, according to Friday's filing. Earlier this week, Anthropic said it closed a $13 billion funding round that valued the company at $183 billion. The financing was led by Iconiq, Fidelity Management and Lightspeed Venture Partners.
[12]
Anthropic Agrees to $1.5 Billion Settlement for Downloading Pirated Books to Train AI
Authors sued after it was revealed Anthropic downloaded the books from Library Genesis. Anthropic has agreed to pay $1.5 billion to settle a lawsuit brought by authors and publishers over its use of millions of copyrighted books to train the models for its AI chatbot Claude, according to a legal filing posted online. A federal judge found in June that Anthropic's use of 7 million pirated books was protected under fair use but that holding the digital works in a "central library" violated copyright law. The judge ruled that executives at the company knew they were downloading pirated works, and a trial was scheduled for December. The settlement, which was presented to a federal judge on Friday, still needs final approval but would pay $3,000 per book to hundreds of thousands of authors, according to the New York Times. The $1.5 billion settlement would be the largest payout in the history of U.S. copyright law, though the amount paid per work has often been higher. For example, in 2012, a woman in Minnesota paid about $9,000 per song downloaded, a figure brought down after she was initially ordered to pay over $60,000 per song. In a statement to Gizmodo on Friday, Anthropic touted the earlier ruling from June that it was engaging in fair use by training models with millions of books. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," Aparna Sridhar, deputy general counsel at Anthropic, said in a statement by email. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Sridhar continued. According to the legal filing, Anthropic says the payments will go out in four tranches tied to court-approved milestones. The first payment would be $300 million within five days after the court's preliminary approval of the settlement, and another $300 million within five days of the final approval order. Then $450 million would be due, with interest, within 12 months of the preliminary order. And finally $450 million within the year after that. Anthropic, which was recently valued at $183 billion, is still facing lawsuits from companies like Reddit, which struck a deal in early 2024 to let Google train its AI models on the platform's content. And authors still have active lawsuits against the other big tech firms like OpenAI, Microsoft, and Meta. The ruling from June explained that Anthropic's training of AI models with copyrighted books would be considered fair use under U.S. copyright law because theoretically someone could read "all the modern-day classics" and emulate them, which would be protected: "...not reproduced to the public a given work's creative elements, nor even one author's identifiable expressive style... Yes, Claude has outputted grammar, composition, and style that the underlying LLM distilled from thousands of works. But if someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not." "Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," the ruling said.
Under this legal theory, all the company needed to do was buy every book it pirated to lawfully train its models, something that certainly costs less than $3,000 per book. But as the New York Times notes, this settlement won't set any legal precedent that could determine future cases because it isn't going to trial.
[13]
Anthropic to pay authors $1.5B to settle lawsuit over pirated chatbot training material
NEW YORK -- Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims." "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel. As part of the settlement, the company has also agreed to destroy the original book files it downloaded. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel The Lost Night by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. 
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[14]
Anthropic to pay $1.5 billion in copyright settlement
Why it matters: The settlement marks a turning point in the clash between AI companies and content owners, which could alter how training data is sourced and inspire more licensing deals.
Zoom in: The judge ruled that Anthropic's approach to buying physical books and making digital copies for training its large language models was fair use, but found that the company had illegally acquired millions of copyrighted books.
* Anthropic will pay an estimated $3,000 per work to roughly 500,000 authors.
* The company will also delete the pirated works it downloaded from shadow libraries like Library Genesis and Pirate Library Mirror.
* "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Anthropic deputy general counsel Aparna Sridhar said in a statement.
Between the lines: The case spotlights a tension in the AI era: courts have ruled that training on copyrighted material can qualify as fair use, but how companies obtain that data still carries legal ramifications.
Zoom out: Since the lawsuit was settled instead of going to trial, it will not set a legal precedent.
* But it raises the stakes for dozens of similar lawsuits and could push more AI companies toward licensing, much like the battles between music streaming services and record labels after Napster.
What's next: The judge must approve the settlement.
[15]
AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit
Settlement could be pivotal after authors claimed company took pirated copies of their work to train chatbots Artificial intelligence company Anthropic has agreed to pay $1.5bn to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through piracy websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. US district judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data - in essence, billions of words carefully strung together - that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7m digitized books that it "knew had been pirated". It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel The Lost Night by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5m copies from the pirate website Library Genesis, or LibGen, and at least 2m copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award - approximately $3,000 per work - likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, the CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it".
[16]
Attention Writers: Anthropic Might Owe You $3000 (or More!) If It Was Trained Using Your Work
Writing is a wonderful profession... in writers' dreams! In reality, it's a grind that's comically unprofitable for the vast majority, to say nothing of the tortured ennui that comes with having to deal with actually writing, or the thought of actually writing, or the thought of what you aren't right now actually writing. And the economics are more harrowing than ever, as the once halfway-decent living one could make from publishing a book is now going the way of the dinosaur as people read less, scroll more, and AIs hoover up intellectual property and regurgitate it with little to no remuneration. And yet, for once, there's some good-ish news for the writerly set. A landmark lawsuit against Anthropic, the company behind the chatbot Claude -- and whose CEO recently said that it's okay for their product to benefit dictators -- has resulted in a settlement! While it's not quite the trillion-dollar lawsuit our AI overlords had hysterically claimed could bring about the end of their industry, it's no chump change, either, especially for the writers who can now get paid out per work that Claude devoured on its way to becoming a best-in-class product. Per a press release from the plaintiffs, Anthropic is supposed to fork over $3,000 for each work covered in the settlement, of which there are something like 500,000. If the owners of more than 500,000 works file claims, Anthropic will pay for those, too. If fewer than 500,000 works are claimed, their authors could get more for them. All in, the settlement is about $1.5 billion, plus interest. Also of note: This doesn't completely indemnify Anthropic from further lawsuits if someone thinks their work is being used by Anthropic illegally. Even more, Anthropic agreed to delete the pirated works it downloaded. All told, it's big news. The New York Times quoted one lawyer as calling the lawsuit "the AI industry's Napster moment," referring to the file-sharing network that was brought down by a storm of lawsuits from the likes of Metallica and Dr. Dre. If any authors think they had their books stolen, they can go to the settlement page here, sign up, and pray they get some cash in the mail. If they want to check whether they stand any chance of getting the money, they can search the LibGen database, from which Anthropic illegally pirated the books in question. Sadly, as the author of this blog post hasn't written any books, he will not be getting any Anthropic lawsuit money. But we know someone who might! Kyle Chayka, a staff writer at The New Yorker whose work zeroes in on the intersection between technology, art, and culture, is the author of not one but two books that popped up in LibGen: 2024's "Filterworld: How Algorithms Flattened Culture" and 2020's "The Longing For Less: Living With Minimalism." Also found in LibGen was the Italian translation of Filterworld. All in, he could stand to make upwards of $12K! We asked Kyle: How does the sum of "$3,000 per class work" feel as a number given that his intellectual property was used to train an AI? Low, high, not worth it on principle, or about right? "It should be a license, really," he replied. "Because the training never goes away. So it could be $5,000 every 5 years, or $1,000 / year as long as they exist. But the price seems about right, honestly -- a decent percentage of most book advances, and about the price of an institutional speaking gig." Fair!
But does it make him feel any Type Of Way that his writing will be used to power machines hoovering up natural resources that will one day -- per all the people making it -- crush us like the ants we humans are to them? "Even if AI couldn't be trained on any book ever produced, tech would still find ways around it; the real licensing money will be 'new data' aka fresh journalism," he wrote back. "So outside of my preexisting enmity and hatred for the way that AI is destroying civilization and the planet, I don't think the books make it much worse." Again, tough, but fair. Finally -- and perhaps most importantly -- what will he put his potentially $12K settlement payment towards? "A down payment for a stone townhouse in a small Italian village driving distance from the coast where I can preserve what remains of my lifestyle," he said.
[17]
AI giant Anthropic to pay $1.5 bn over pirated books
San Francisco (United States) (AFP) - Anthropic will pay at least $1.5 billion to settle a US class action lawsuit over pirated books allegedly used to train its artificial intelligence (AI) models, according to court documents filed Friday. "This settlement sends a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it," said Mary Rasenberger, CEO of the Authors Guild, in a statement supporting the deal. The settlement stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. In a partial victory for Anthropic, US District Court Judge William Alsup ruled in June that the company's training of its Claude AI models with books -- whether bought or pirated -- so transformed the works that it constituted "fair use" under the law. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup wrote in his 32-page decision, comparing AI training to how humans learn by reading books. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections. According to the legal filing, the settlement covers approximately 500,000 books, translating to roughly $3,000 per work -- four times the minimum statutory damages under US copyright law. Under the agreement, Anthropic will destroy the original pirated files and any copies derived from them, though the company retains rights to books it legally purchased and scanned. Anthropic did not immediately respond to requests for comment. The settlement, which requires judicial approval, comes as AI companies face growing legal pressure over their training practices. Multiple lawsuits against firms including OpenAI, Meta, and others remain pending, with rightsholders arguing that scraping copyrighted content without permission violates intellectual property law. San Francisco-based Anthropic, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development. The company announced this week that it raised $13 billion in a funding round valuing the AI startup at $183 billion. It will use the capital to expand capacity, deepen safety research, and support international expansion. Anthropic competes with generative AI offerings from Google, OpenAI, Meta, and Microsoft in a race that is expected to attract hundreds of billions of dollars in investment over the next few years. Heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives and has grown rapidly since Claude's initial release in early 2023, with its annual revenue rate quintupling to $5 billion since early this year.
[18]
Anthropic to pay $1.5 billion to settle authors' copyright lawsuit
Anthropic, which operates the Claude artificial intelligence app, has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who alleged the company took pirated copies of their works to train its chatbot. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year, and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. The landmark settlement could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. A judge could approve the settlement as soon as Monday. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." In a statement to CBS News, Anthropic deputy general counsel Aparna Sridhar said the settlement "will resolve the plaintiffs' remaining legacy claims." Sridhar added that the settlement comes after the U.S. District Court for the Northern District of California in June ruled that Anthropic's use of legally purchased books to train Claude did not violate U.S. copyright law. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems," Sridhar said. Anthropic, which was founded by former executives with ChatGPT developer OpenAI, introduced Claude in 2023. Like other generative AI bots, the tool lets users ask natural language questions and then provides summarized answers using AI trained on millions of books, articles and other material. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights.
The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it."
[19]
AI company Anthropic agrees to pay $1.5B to settle lawsuit with authors
Anthropic, a major artificial intelligence company, has agreed to pay at least $1.5 billion to settle a copyright infringement lawsuit filed by a group of authors who alleged the platform had illegally used pirated copies of their books to train large language models, according to court documents. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," said Justin Nelson, a lawyer for the authors. The lawsuit, filed in federal court in California last year, centered on roughly 500,000 published works. The proposed settlement amounts to a gross recovery of $3,000 per work, Nelson said in a memorandum to the judge in the case. "This result is nothing short of remarkable," Nelson added. In the lawsuit, the plaintiffs alleged that Anthropic had "committed large-scale copyright infringement" by downloading and "commercially exploiting" books that it had allegedly gotten from pirating websites such as Library Genesis and Pirate Library Mirror. Anthropic had argued that what it was doing fell under "fair use" under U.S. copyright law. In late June, the federal judge assigned to the case ruled that Anthropic's actions constituted fair use because the end result was "exceedingly transformative." But that ruling from Judge William Alsup came with crucial asterisks. He declared that downloading pirated copies of books did not constitute fair use. "In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use," said Aparna Sridhar, the deputy general counsel of Anthropic. "Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems," Sridhar added. The lawsuit was originally filed by three writers: Andrea Bartz, Charles Graeber and Kirk Wallace Johnson. Bartz is a journalist and novelist; Graeber and Johnson are journalists who have published nonfiction books. Bartz, Graeber and Johnson did not immediately respond to requests for comment. The settlement could shape the trajectory of other pending litigation between AI platforms and published authors. John Grisham, "Game of Thrones" author George R.R. Martin and Jodi Picoult are part of a group of nearly 20 bestselling authors who have sued OpenAI, alleging "systematic theft on a mass scale" for using their works to train ChatGPT and other tools. Anthropic agreed to make four payments into the settlement fund, starting with a $300 million payout due within five business days of the court's sign-off on the terms, according to Nelson. Nelson's memo to Alsup said the proposed $1.5 billion payout is the "minimum size" of the settlement. "If the Works List ultimately exceeds 500,000 works," he said, "then Anthropic will pay an additional $3,000 per work that Anthropic adds to the Works List above 500,000 works."
[20]
Anthropic Will Pay Out $1.5 Billion Over Pirated AI Training Content
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Books are known to be important sources of data -- in essence, billions of words carefully strung together -- that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitized books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award -- approximately $3,000 per work -- likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it."
[21]
Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material - The Economic Times
Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors - thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson - sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude. A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites. If Anthropic had not settled, experts say losing the case after a scheduled December trial could have cost the San Francisco-based company even more money. "We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business," said William Long, a legal analyst for Wolters Kluwer. U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms. Anthropic said in a statement Friday that the settlement, if approved, "will resolve the plaintiffs' remaining legacy claims." "We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems," said Aparna Sridhar, the company's deputy general counsel. As part of the settlement, the company has also agreed to destroy the original book files it downloaded. Books are known to be important sources of data - in essence, billions of words carefully strung together - that are needed to build the AI large language models behind chatbots like Anthropic's Claude and its chief rival, OpenAI's ChatGPT. Alsup's June ruling found that Anthropic had downloaded more than 7 million digitised books that it "knew had been pirated." It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained. Debut thriller novel "The Lost Night" by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset. Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote. The Authors Guild told its thousands of members last month that it expected "damages will be minimally $750 per work and could be much higher" if Anthropic was found at trial to have willfully infringed their copyrights. The settlement's higher award - approximately $3,000 per work - likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright. 
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement "an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors' works to train their AI, robbing those least able to afford it." The Danish Rights Alliance, which successfully fought to take down one of those shadow libraries, said Friday that the settlement would be of little help to European writers and publishers whose works aren't registered with the U.S. Copyright Office. "On the one hand, it's comforting to see that compiling AI training datasets by downloading millions of books from known illegal file-sharing sites comes at a price," said Thomas Heldrup, the group's head of content protection and enforcement. On the other hand, Heldrup said it fits a tech industry playbook to grow a business first and later pay a relatively small fine, compared to the size of the business, for breaking the rules. "It is my understanding that these companies see a settlement like the Anthropic one as a price of conducting business in a fiercely competitive space," Heldrup said. The privately held Anthropic, founded by ex-OpenAI leaders in 2021, said Tuesday that it had raised another $13 billion in investments, putting its value at $183 billion. Anthropic also said it expects to make $5 billion in sales this year, but, like OpenAI and many other AI startups, it has never reported making a profit, relying instead on investors to back the high costs of developing AI technology for the expectation of future payoffs.
[22]
AI Giant Anthropic to Pay $1.5 Billion to Authors in Settlement
Anthropic will pay $1.5 billion to settle a lawsuit from authors, who accused the Amazon-backed company of illegally downloading and copying their books to teach its AI system, in one of the first deals reached by creators over novel legal issues raised by the technology. The settlement was reached on Aug. 26, and lawyers for the authors notified the court of the terms of the deal on Friday. Authors who opt into the agreement will be eligible to share in the $1.5 billion settlement fund, which amounts to roughly $3,000 per book allegedly used by Anthropic for training.
[23]
Anthropic to pay US$1.5 billion to settle lawsuit over pirated chatbot training material
NEW YORK -- Artificial intelligence company Anthropic has agreed to pay US$1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors about US$3,000 for each of an estimated 500,000 books covered by the settlement. "As best as we can tell, it's the largest copyright recovery ever," said Justin Nelson, a lawyer for the authors. "It is the first of its kind in the AI era." A trio of authors -- thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson -- sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
[24]
Amazon-backed Anthropic agrees to pay authors $1.5B to settle...
Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked US District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
[25]
Anthropic agrees to pay $1.5 billion to settle author class action
(Reuters) -Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. Anthropic and the plaintiffs in a court filing asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment," the plaintiffs said in the filing. The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems. Anthropic as part of the settlement said it will destroy downloaded copies of books the authors accused it of pirating, and under the deal it could still face infringement claims related to material produced by the company's AI models. In a statement, Anthropic said the company is "committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems." The agreement does not include an admission of liability. Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts. The writers' allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training. The companies have argued their systems make fair use of copyrighted material to create new, transformative content. Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a "central library" that would not necessarily be used for that purpose. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars. The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances." (Reporting by Blake Brittain and Mike Scarcella in Washington; Editing by David Bario, Lisa Shumaker and Matthew Lewis)
Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors over copyright infringement, marking a significant moment in the intersection of AI and intellectual property rights.
Anthropic, a leading AI company, has agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors alleging copyright infringement [1]. This settlement, believed to be the largest in U.S. copyright litigation history, covers approximately 500,000 works that Anthropic allegedly pirated for AI training purposes [2].
If approved by the court, each author will receive at least $3,000 per work that Anthropic used without permission [3]. The settlement also requires Anthropic to destroy all copies of the pirated books. This agreement is seen as a significant precedent, potentially influencing future cases involving AI companies and copyright owners [4].
The lawsuit, originally filed in 2024, was part of a broader wave of copyright litigation against tech companies over AI training data [3]. In June, Judge William Alsup ruled that Anthropic's AI training was protected under the "fair use" doctrine. However, the judge allowed the case to proceed to trial due to Anthropic's use of pirated books from "shadow libraries" [2].
The Authors' Guild CEO, Mary Rasenberger, hailed the settlement as "an excellent result for authors, publishers, and rightsholders generally" [1]. However, some critics argue that this settlement, while substantial, doesn't address the broader issues of AI companies using copyrighted material for training [2].
Despite this settlement, Anthropic faces other legal challenges, including a lawsuit from major record labels over the use of copyrighted lyrics in training its Claude chatbot [3]. The company maintains its commitment to developing safe AI systems while resolving these "legacy claims" [4].
This case highlights the ongoing tension between creators and AI developers over intellectual property rights [4]. As AI technology continues to advance, the legal landscape surrounding copyright and fair use in AI training is likely to evolve, potentially leading to new legislation or further court decisions [5].