8 Sources
[1]
Anthropic's lawyer was forced to apologize after Claude hallucinated a legal citation
A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations. Anthropic apologized for the error, and called it "an honest citation mistake and not a fabrication of authority." Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness -- one of the company's employees, Olivia Chen -- of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations. The music publishers' lawsuit is one of several disputes between copyright owners and tech companies over the supposed misuse of their work to create generative AI tools. This is the latest instance of lawyers using AI in court, and then regretting the decision. Earlier this week, a California judge slammed a pair of law firms for submitting "bogus AI-generated research" in his court. In January, an Australian lawyer was caught using ChatGPT in the preparation of court documents and the chatbot produced faulty citations. However, these errors aren't stopping startups from raising enormous rounds to automate legal work. Harvey, which uses generative AI models to assist lawyers, is reportedly in talks to raise over $250 million at a $5 billion valuation.
[2]
Anthropic blames Claude AI for 'embarrassing and unintentional mistake' in legal filing
Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an "honest citation mistake." An erroneous citation was included in a filing submitted by Anthropic data scientist Olivia Chen on April 30th, as part of the AI company's defense against claims that copyrighted lyrics were used to train Claude. An attorney representing Universal Music Group, ABKCO, and Concord said in a hearing that sources referenced in Chen's filing were a "complete fabrication," and implied they were hallucinated by Anthropic's AI tool. In a response filed on Thursday, Anthropic defense attorney Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a "manual citation check," Anthropic admits that wording errors had gone undetected. Dukanovic said, "unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors," and that the error wasn't a "fabrication of authority." The company apologized for the inaccuracy and confusion caused by the citation error, calling it "an embarrassing and unintentional mistake." This is one of a growing number of examples of how using AI tools for legal citations has caused issues in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with "bogus" materials that "didn't exist." A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he'd submitted.
[3]
Anthropic's law firm blames Claude hallucinations for errors
AI footnote fail triggers legal palmface in music copyright spat
An attorney defending AI firm Anthropic in a copyright case brought by music publishers apologized to the court on Thursday for citation errors that slipped into a filing after using the biz's own AI tool, Claude, to format references. The incident reinforces what's becoming a pattern in legal tech: while AI models can be fine-tuned, people keep failing to verify the chatbot's output, despite the consequences. The flawed citations, or "hallucinations," appeared in an April 30, 2025 declaration [PDF] from Anthropic data scientist Olivia Chen in a copyright lawsuit music publishers filed in October 2023. But Chen was not responsible for introducing the errors, which appeared in footnotes 2 and 3. Ivana Dukanovic, an attorney with Latham & Watkins, the firm defending Anthropic, stated that after a colleague located a supporting source for Chen's testimony via Google search, she used Anthropic's Claude model to generate a formatted legal citation. Chen and defense lawyers failed to catch the errors in subsequent proofreading. "After the Latham & Watkins team identified the source as potential additional support for Ms. Chen's testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," explained Dukanovic in her May 15, 2025 declaration [PDF]. "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai." But Dukanovic pushed back against the suggestion from the plaintiff's legal team that Chen's declaration was false. "This was an embarrassing and unintentional mistake," she said in her filing with the court. "The article in question genuinely exists, was reviewed by Ms. Chen and supports her opinion on the proper margin of error to use for sampling. The insinuation that Ms. Chen's opinion was influenced by false or fabricated information is thus incorrect. As is the insinuation that Ms. Chen lacks support for her opinion." Dukanovic said Latham & Watkins has implemented procedures "to ensure that this does not occur again." The hallucinations of AI models keep showing up in court filings. Last week, in a plaintiff's claim against insurance firm State Farm (Jacquelyn Jackie Lacey v. State Farm General Insurance Company et al), former Judge Michael R. Wilner, the Special Master appointed to handle the dispute, sanctioned [PDF] the plaintiff's attorneys for misleading him with AI-generated text. He directed the plaintiff's legal team to pay more than $30,000 in court costs that they wouldn't have otherwise had to bear. After reviewing a supplemental brief filed by the plaintiffs, Wilner found that "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way." Two of the citations, he said, do not exist, and several cited phony judicial opinions.
"The lawyers' declarations ultimately made clear that the source of this problem was the inappropriate use of, and reliance on, AI tools," Wilner wrote in his order. Wilner's analysis of the misstep is scathing. "I conclude that the lawyers involved in filing the Original and Revised Briefs collectively acted in a manner that was tantamount to bad faith," he wrote. "The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong. Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology - particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm's way." According to Wilner, courts are increasingly called upon to evaluate "the conduct of lawyers and pro se litigants [representing themselves] who improperly use AI in submissions to judges." That's evident in cases like Mata v. Avianca, Inc., United States v. Hayes, and United States v. Cohen. The judge tossed expert testimony [PDF] in another case involving Minnesota Attorney General Keith Ellison - Kohls et al v. Ellison et al. - after learning that the expert's submission to the court contained AI falsehoods. And when AI goes wrong, it generally doesn't go well for the lawyers involved. Attorneys from law firm Morgan & Morgan were sanctioned [PDF] in February after a Wyoming federal judge found they submitted a filing containing multiple fictitious case citations generated by the firm's in-house AI tool. In his sanctions order, US District Judge Kelly Rankin made clear that attorneys are accountable if they submit documents with AI-generated errors. "An attorney who signs a document certifies they made a reasonable inquiry into the existing law," he wrote. "While technology continues to change, this requirement remains the same." One law prof believes that fines won't be enough - lawyers who abuse AI should be disciplined personally. "The quickest way to deter lawyers from failing to cite check their filings is for state bars to make the submission of hallucinated citations in court pleadings, submitted without cite checking by the lawyers, grounds for disciplinary action, including potential suspension of bar licenses," said Edward Lee, a professor of law at Santa Clara University. "The courts' monetary sanctions alone will not likely stem this practice."
[4]
Anthropic expert accused of using AI-fabricated source in copyright case
May 13 (Reuters) - A federal judge in San Jose, California, on Tuesday ordered artificial intelligence company Anthropic to respond to allegations that it submitted a court filing containing a "hallucination" created by AI as part of its defense against copyright claims by a group of music publishers. A lawyer representing Universal Music Group (UMG.AS), Concord and ABKCO in a lawsuit over Anthropic's alleged misuse of their lyrics to train its chatbot Claude told U.S. Magistrate Judge Susan van Keulen at a hearing that an Anthropic data scientist cited a nonexistent academic article to bolster the company's argument in a dispute over evidence. Van Keulen asked Anthropic to respond by Thursday to the accusation, which the company said appeared to be an inadvertent citation error. She rejected the music companies' request to immediately question the expert but said the allegation presented "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI." Attorneys and spokespeople for Anthropic did not immediately respond to a request for comment following the hearing. The music publishers' lawsuit is one of several high-stakes disputes between copyright owners and tech companies over the alleged misuse of their work to train artificial-intelligence systems. The expert's filing cited an article from the journal American Statistician to argue for specific parameters for determining how often Claude reproduces copyrighted song lyrics, which Anthropic calls a "rare event." The music companies' attorney, Matt Oppenheim of Oppenheim + Zebrak, said during the hearing that he confirmed with one of the supposed authors and the journal itself that the article did not exist. He called the citation a "complete fabrication." Oppenheim said he did not presume the expert, Olivia Chen, intentionally fabricated the citation, "but we do believe it is likely that Ms. Chen used Anthropic's AI tool Claude to develop her argument and authority to support it." Chen could not immediately be reached for comment following the hearing. Anthropic attorney Sy Damle of Latham & Watkins complained at the hearing that the plaintiffs were "sandbagging" them by not raising the accusation earlier. He said the citation was incorrect but appeared to refer to the correct article. The relevant link in the filing directs to a separate American Statistician article with a different title and different authors. "Clearly, there was something that was a mis-citation, and that's what we believe right now," Damle said. Several attorneys have been criticized or sanctioned by courts in recent months for mistakenly citing nonexistent cases and other incorrect information "hallucinated" by AI in their filings. The case is Concord Music Group Inc v. Anthropic PBC, U.S. District Court for the Northern District of California, No. 5:24-cv-03811. For the music publishers: Matt Oppenheim of Oppenheim + Zebrak For Anthropic: Sy Damle of Latham & Watkins
[5]
Anthropic's lawyers take blame for AI 'hallucination' in music publishers' lawsuit
May 15 (Reuters) - An attorney defending artificial-intelligence company Anthropic in a copyright lawsuit over music lyrics told a California federal judge on Thursday that her law firm Latham & Watkins was responsible for an incorrect footnote in an expert report caused by an AI "hallucination." Ivana Dukanovic said in a court filing that the expert had relied on a legitimate academic journal article, but Dukanovic created a citation for it using Anthropic's chatbot Claude, which made up a fake title and authors in what the attorney called "an embarrassing and unintentional mistake." "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors," Dukanovic said. The lawsuit from music publishers Universal Music Group (UMG.AS), Concord and ABKCO over Anthropic's alleged misuse of their lyrics to train Claude is one of several high-stakes disputes between copyright owners and tech companies over the use of their work to train AI systems. The publishers' attorney Matt Oppenheim of Oppenheim + Zebrak told the court during a hearing on Tuesday that Anthropic data scientist Olivia Chen may have used an AI-fabricated source to bolster the company's argument in a dispute over evidence. U.S. Magistrate Judge Susan van Keulen said at the hearing that the allegation raised "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI." Dukanovic responded on Thursday that Chen had cited a real article from the journal American Statistician that supported her argument, but the attorneys had missed that Claude introduced an incorrect title and authors. A spokesperson for the plaintiffs declined to comment on the new filing. Dukanovic and a spokesperson for Anthropic did not immediately respond to requests for comment. Several attorneys have been criticized or sanctioned by courts in recent months for mistakenly citing nonexistent cases and other incorrect information hallucinated by AI in their filings. Dukanovic said in Thursday's court filing that Latham had implemented "multiple levels of additional review to work to ensure that this does not occur again." The case is Concord Music Group Inc v. Anthropic PBC, U.S. District Court for the Northern District of California, No. 5:24-cv-03811. For the music publishers: Matt Oppenheim of Oppenheim + Zebrak For Anthropic: Sy Damle of Latham & Watkins
[6]
Anthropic Tried to Defend Itself With AI and It Backfired Horribly
The advent of AI has already made a splash in the legal world, to say the least. In the past few months, we've watched as a tech entrepreneur gave testimony through an AI avatar, trial lawyers filed a massive brief riddled with AI hallucinations, and the MyPillow guy tried to exonerate himself in front of a federal judge with ChatGPT. By now, it ought to be a well-known fact that AI is an unreliable source of info for just about anything, let alone for something as intricate as a legal filing. One Stanford University study found that AI tools make up information on 58 to 82 percent of legal queries -- an astonishing amount, in other words. That's evidently something AI company Anthropic wasn't aware of, because it was just caught using AI as part of its defense against allegations that the company trained its software on copyrighted music. Earlier this week, a federal judge in California raged that Anthropic had filed a brief containing a major "hallucination," the term describing AI's knack for making up information that doesn't actually exist. Per Reuters, those music publishers filing suit against the AI company argued that Anthropic cited a "nonexistent academic article" in a filing in order to lend credibility to Anthropic's case. The judge demanded answers, and Anthropic's was mind-numbing. Rather than deny the fact that the AI produced a hallucination, defense attorneys doubled down. They admitted to using Anthropic's own AI chatbot Claude to write their legal filing. Anthropic defense attorney Ivana Dukanovic claims that, while the source Claude cited started off as genuine, its formatting became lost in translation -- which is why the article's title and authors led to an article that didn't exist. As far as Anthropic is concerned, according to The Verge, Claude simply made an "honest citation mistake, and not a fabrication of authority." "I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article," Dukanovic confessed. "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error." Anthropic apologized for the flagrant error, saying it was "an embarrassing and unintentional mistake." Whatever someone wants to call it, one thing it clearly is not: a great sales pitch for Claude. It'd be fair to assume that Anthropic, of all companies, would have a better internal process in place for scrutinizing the work of its in-house AI system -- especially before it's in the hands of a judge overseeing a landmark copyright case. As it stands, Claude is joining the ranks of infamous courtroom gaffes committed by the likes of OpenAI's ChatGPT and Google's Gemini -- further evidence that no existing AI model has what it takes to go up in front of a judge.
[7]
Anthropic Accused of Citing AI 'Hallucination' in Song Lyrics Lawsuit
The case adds to mounting legal pressure on AI developers, with OpenAI, Meta, and Anthropic all facing lawsuits over training models on unlicensed copyrighted material. An AI expert at Amazon-backed firm Anthropic has been accused of citing a fabricated academic article in a court filing meant to defend the company against claims that it trained its AI model on copyrighted song lyrics without permission. The filing, submitted by Anthropic data scientist Olivia Chen, was part of the company's legal response to a $75 million lawsuit filed by Universal Music Group, Concord, ABKCO, and other major publishers. The publishers alleged in the 2023 lawsuit that Anthropic unlawfully used lyrics from hundreds of songs, including those by Beyoncé, The Rolling Stones, and The Beach Boys, to train its Claude language model. Chen's declaration included a citation to an article from The American Statistician, intended to support Anthropic's argument that Claude only reproduces copyrighted lyrics under rare and specific conditions, according to a Reuters report. During a hearing Tuesday in San Jose, the plaintiffs' attorney Matt Oppenheim called the citation a "complete fabrication," but said he didn't believe Chen intentionally made it up, only that she likely used Claude itself to generate the source. Anthropic's attorney, Sy Damle, told the court Chen's error appeared to be a mis-citation, not a fabrication, while criticizing the plaintiffs for raising the issue late in the proceedings. Per Reuters, U.S. Magistrate Judge Susan van Keulen said the issue posed "a very serious and grave" concern, noting that "there's a world of difference between a missed citation and a hallucination generated by AI." She declined a request to immediately question Chen, but ordered Anthropic to formally respond to the allegation by Thursday. Anthropic did not immediately respond to Decrypt's request for comment. The lawsuit against Anthropic was filed in October 2023, with the plaintiffs accusing Anthropic's Claude model of being trained on a massive volume of copyrighted lyrics and reproducing them on demand. They demanded damages, disclosure of the training set, and the destruction of infringing content. Anthropic responded in January 2024, denying that its systems were designed to output copyrighted lyrics. It called any such reproduction a "rare bug" and accused the publishers of offering no evidence that typical users encountered infringing content. In August 2024, the company was hit with another lawsuit, this time from authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of training Claude on pirated versions of their books. The case is part of a growing backlash against generative AI companies accused of feeding copyrighted material into training datasets without consent. OpenAI is facing multiple lawsuits from comedian Sarah Silverman, the Authors Guild, and The New York Times, accusing the company of using copyrighted books and articles to train its GPT models without permission or licenses. Meta is named in similar suits, with plaintiffs alleging that its LLaMA models were trained on unlicensed literary works sourced from pirated datasets. Meanwhile, in March, OpenAI and Google urged the Trump administration to ease copyright restrictions around AI training, calling them a barrier to innovation in their formal proposals for the upcoming U.S. "AI Action Plan." 
In the UK, a government bill that would enable artificial intelligence firms to use copyright-protected work without permission hit a roadblock this week, after the House of Lords backed an amendment requiring AI firms to reveal what copyrighted material they have used in their models.
[8]
Anthropic's lawyers take blame for AI 'hallucination' in music publishers' lawsuit
An attorney defending artificial-intelligence company Anthropic in a copyright lawsuit over music lyrics told a California federal judge on Thursday that her law firm Latham & Watkins was responsible for an incorrect footnote in an expert report caused by an AI "hallucination." Ivana Dukanovic said in a court filing that the expert had relied on a legitimate academic journal article, but Dukanovic created a citation for it using Anthropic's chatbot Claude, which made up a fake title and authors in what the attorney called "an embarrassing and unintentional mistake." "Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors," Dukanovic said. The lawsuit from music publishers Universal Music Group, Concord and ABKCO over Anthropic's alleged misuse of their lyrics to train Claude is one of several high-stakes disputes between copyright owners and tech companies over the use of their work to train AI systems. The publishers' attorney Matt Oppenheim of Oppenheim + Zebrak told the court during a hearing on Tuesday that Anthropic data scientist Olivia Chen may have used an AI-fabricated source to bolster the company's argument in a dispute over evidence. U.S. Magistrate Judge Susan van Keulen said at the hearing that the allegation raised "a very serious and grave issue," and that there was "a world of difference between a missed citation and a hallucination generated by AI." Dukanovic responded on Thursday that Chen had cited a real article from the journal American Statistician that supported her argument, but the attorneys had missed that Claude introduced an incorrect title and authors. A spokesperson for the plaintiffs declined to comment on the new filing. Dukanovic and a spokesperson for Anthropic did not immediately respond to requests for comment. Several attorneys have been criticized or sanctioned by courts in recent months for mistakenly citing nonexistent cases and other incorrect information hallucinated by AI in their filings. Dukanovic said in Thursday's court filing that Latham had implemented "multiple levels of additional review to work to ensure that this does not occur again." The case is Concord Music Group Inc v. Anthropic PBC, U.S. District Court for the Northern District of California, No. 5:24-cv-03811. For the music publishers: Matt Oppenheim of Oppenheim + Zebrak For Anthropic: Sy Damle of Latham & Watkins
Anthropic's lawyers admit to using the company's AI chatbot Claude to generate a citation in a legal filing, resulting in hallucinated information. This incident highlights the risks of using AI in legal work and adds to the growing number of AI-related errors in courtrooms.
In a significant development in the ongoing copyright lawsuit between music publishers and AI company Anthropic, the company's legal team has admitted to an embarrassing error caused by Anthropic's own AI chatbot, Claude. Ivana Dukanovic, an attorney from Latham & Watkins representing Anthropic, filed a declaration explaining that Claude had generated an inaccurate citation in a legal document, leading to accusations of using fabricated sources [1][2].
The error occurred in an April 30, 2025 declaration from Anthropic data scientist Olivia Chen. Dukanovic revealed that after a colleague located a supporting source via Google search, she used Claude to generate a formatted legal citation. While Claude returned the correct publication, year, and link, the citation it produced included an inaccurate article title and incorrect authors [3].
This incident has raised serious concerns about the use of AI in legal work. U.S. Magistrate Judge Susan van Keulen emphasized the gravity of the situation, stating there is "a world of difference between a missed citation and a hallucination generated by AI" [4].
This is not an isolated incident. Recent months have seen several cases where attorneys have faced criticism or sanctions for submitting AI-generated errors in court filings: a special master in California ordered the plaintiff's attorneys in a case against State Farm to pay more than $30,000 in costs after their supplemental brief was found to contain "bogus" AI-generated research [1][3]; Morgan & Morgan attorneys were sanctioned in February after a Wyoming federal judge found fictitious case citations produced by the firm's in-house AI tool [3]; and a misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he had submitted [2].
The incident has sparked discussions about the responsible use of AI in legal work. Santa Clara University law professor Edward Lee argues that monetary sanctions alone will not be sufficient, suggesting that state bars treat the submission of unchecked, hallucinated citations as grounds for disciplinary action against the lawyers responsible [3].
In response to this error, Latham & Watkins has implemented additional review procedures to prevent similar occurrences in the future [5]. This incident serves as a cautionary tale for the legal industry, highlighting the need for rigorous verification of AI-generated content.
This citation error is part of a larger legal battle between Anthropic and music publishers Universal Music Group, Concord, and ABKCO. The lawsuit, one of several high-stakes disputes between copyright owners and tech companies, centers on the alleged misuse of song lyrics to train AI systems like Claude [4][5].
As AI continues to play an increasingly significant role in various industries, including law, this incident underscores the importance of maintaining human oversight and responsibility in the use of AI tools. It also raises questions about the potential need for new guidelines or regulations governing the use of AI in legal proceedings.