3 Sources
[1]
Attorney Slapped With Hefty Fine for Citing 21 Fake, AI-Generated Cases
A California attorney made an expensive mistake when trying to cut corners on a legal brief. Amir Mostafavi submitted an appeal in an employment-related case, but 21 of the 23 cases he cited to support his argument were fake -- hallucinated by AI -- or included phony quotes from existing cases. Judge Lee Smalley Edmon sanctioned Mostafavi and fined him $10,000. "To state the obvious, it is a fundamental duty of attorneys to read the legal authorities they cite in appellate briefs," Judge Edmon says in a strongly worded opinion. "Plainly, counsel did not read the cases he cited before filing his appellate briefs: Had he read them, he would have discovered, as we did, that the cases did not contain the language he purported to quote, did not support the propositions for which they were cited, or did not exist."

Mostafavi claims he wrote the first draft of the brief but then used AI tools such as ChatGPT, Grok, Gemini, and Claude to "enhance" it. He did not read through the final version before filing it, and says he should not be fined because he did not know AI tools can make up information. Mostafavi passed the California bar exam in 2012, over a decade before OpenAI claimed in 2023 that ChatGPT could pass that test. Mostafavi was also a part-time law professor at the People's College of Law in Los Angeles, an unaccredited school that shut down in 2023, according to his website. He told the court he has educated himself about AI hallucinations since being called out.

Judge Edmon does not condemn the use of AI in the legal profession, and in fact says, "there is nothing inherently wrong with an attorney appropriately using AI in a law practice." But attorneys who use AI must carefully fact-check every citation, and "cannot delegate that role to AI, computers, robots, or any other form of technology." The use of inaccurate, AI-generated text in legal filings is becoming "far too common," the judge adds. In 2024, a Stanford law professor was caught using AI-generated citations in a legal argument that was, ironically, in support of a bill to curb deepfakes. In 2023, two New York lawyers were sanctioned after submitting a brief that included fake ChatGPT citations. The issue isn't limited to lawyers: Alaska's top education official used AI to draft a statewide school policy on cell phone use in 2024, and four of the six studies it cited were made up. Education Commissioner Deena Bishop admitted to using generative AI to draft the resolution and says it was an early version posted online prematurely.

AI companies don't have a solution to stop hallucinations. In a paper published this month, OpenAI said models make up information when they do not know the answer, and that their training rewards guesswork. Models also aim to please users, who may be unhappy if the AI cannot answer their question, The Register reports. So they make things up. In a legal context, if an AI cannot find a suitable case for an attorney to cite, it might invent one. Judge Edmon argues this forces the court to waste time and resources double-checking every citation in submitted documents, and is not a tenable long-term arrangement for the legal system.
[2]
California attorney fined $10k for filing an appeal with fake legal citations generated by AI
A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT. The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion stating that 21 of 23 quotes from cases cited in the attorney's opening brief were made up. It also noted that numerous out-of-state and federal courts have confronted attorneys for citing fake legal authority. "We therefore publish this opinion as a warning," it continued. "Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations -- whether provided by generative AI or any other source -- that the attorney responsible for submitting the pleading has not personally read and verified."

The opinion, issued on Sept. 12 in California's 2nd District Court of Appeal, is a clear example of why the state's legal authorities are scrambling to regulate the use of AI in the judiciary. The state's Judicial Council two weeks ago issued guidelines requiring judges and court staff to either ban generative AI or adopt a generative AI use policy by Dec. 15. Meanwhile, the California Bar Association is considering whether to strengthen its code of conduct to account for various forms of AI, following a request by the California Supreme Court last month.

The Los Angeles-area attorney fined, Amir Mostafavi, told the court that he did not read the text generated by the AI model before submitting the appeal in July 2023, months after OpenAI marketed ChatGPT as capable of passing the bar exam. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court's time and taxpayers' money, according to the opinion. Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try to improve it. He said he didn't know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI; it has become an important tool, much as online databases largely replaced law libraries. Until AI systems stop hallucinating fake information, he suggests that lawyers who use AI proceed with caution. "In the meantime we're going to have some victims, we're going to have some damages, we're going to have some wreckages," he said. "I hope this example will help others not fall into the hole. I'm paying the price."

The fine issued to Mostafavi is the most costly penalty issued to an attorney by a California state court and one of the highest fines ever issued over attorney use of AI, according to Damien Charlotin, who teaches a class on AI and the law at a business school in Paris. He tracks instances of attorneys citing fake cases, primarily in Australia, Canada, the United States, and the United Kingdom. In a widely publicized case in May, a U.S. District Court judge in California ordered two law firms to pay $31,100 in fees to defense counsel and the court for costs associated with using "bogus AI-generated research." In that ruling, the judge described feeling misled, said they had nearly cited fake material in a judicial order, and wrote, "Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut." Charlotin thinks courts and the public should expect an exponential rise in these cases. When he started tracking court filings involving AI and fake cases earlier this year, he encountered a few cases a month. Now he sees a few cases a day.

Large language models confidently state falsehoods as facts, particularly when there are no facts to support them. "The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you," he said. "That's where the confirmation bias kicks in." A May 2024 analysis by Stanford University's RegLab found that although three out of four lawyers plan to use generative AI in their practice, some forms of AI generate hallucinations in one out of three queries. Detecting fake material cited in legal filings could get harder as models grow in size.

Another tracker of cases in which lawyers cite nonexistent legal authority due to use of AI identifies 52 such cases in California and more than 600 nationwide. That number is expected to increase in the near future because AI innovation is outpacing the education of attorneys, said Nicholas Sanctis, a law student at Capital University Law School in Ohio. Jenny Wondracek, who leads the tracker project, said she expects the trend to get worse because she still regularly encounters lawyers who don't know that AI makes things up, or who believe that legal tech tools can eliminate all fake or false material generated by language models. "I think we'd see a reduction if (lawyers) just understood the basics of the technology," she said.

Like Charlotin, she suspects there are more instances of made-up cases generated by AI in state court filings than in federal courts, but a lack of standard filing methods makes that difficult to verify. She said she encounters fake cases most often among overburdened attorneys or people who choose to represent themselves in family court. She suspects the number of filings by attorneys that use AI and cite fake cases will continue to rise, but added that attorneys are not the only ones engaging in the practice: in recent weeks, she has documented three instances of judges citing fake legal authority in their decisions. As California considers how to treat generative AI and fake case citations, Wondracek said the state could consider approaches taken elsewhere, such as temporary suspensions, requiring attorneys who get caught to take courses on the ethical use of AI, or requiring them to teach law students how to avoid making the same mistake.

Mark McKenna, co-director of the UCLA Institute of Technology, Law & Policy, praised fines like the one against Mostafavi as punishing lawyers for "an abdication of your responsibility as a party representing someone." He thinks the problem "will get worse before it gets better," because law schools and private firms have rushed to adopt AI without thinking through the appropriate ways to use it. UCLA School of Law professor Andrew Selbst agrees, pointing out that the clerks who work for judges are recent law school graduates, and that students are being bombarded with the message that they must use AI or get left behind. Educators and other professionals report feeling similar pressures. "This is getting shoved down all our throats," he said. "It's being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that."

___ This story was originally published in CalMatters and distributed through a partnership with The Associated Press.
[3]
California Attorney Fined $10k for Filing an Appeal With Fake Legal Citations Generated by AI
Identical text to source [2]: the same CalMatters story, distributed through a partnership with The Associated Press.
A California lawyer faces a hefty fine for submitting an appeal with fake legal citations generated by AI tools. This case highlights the growing challenges of AI use in the legal profession and the need for careful verification of AI-generated content.
In a landmark case highlighting the perils of unchecked artificial intelligence use in the legal profession, a California attorney has been slapped with a $10,000 fine for submitting a legal brief containing fake citations generated by AI tools [1][2]. The incident has sent shockwaves through the legal community and raised urgent questions about the responsible use of AI in legal practice.

Amir Mostafavi, a Los Angeles-area attorney, filed an appeal in an employment-related case where 21 out of 23 cited cases were either entirely fabricated or contained phony quotes from existing cases [1]. Judge Lee Smalley Edmon of California's 2nd District Court of Appeal issued a strongly worded opinion, emphasizing the fundamental duty of attorneys to read and verify the legal authorities they cite [2].

Mostafavi claimed he used AI tools such as ChatGPT, Grok, Gemini, and Claude to "enhance" his initial draft without thoroughly reviewing the final version [1]. This case represents the largest fine issued over AI fabrications by a California court and serves as a stark warning to legal professionals about the risks of relying on AI without proper verification.

Experts in the field, such as Damien Charlotin, who teaches AI and law in Paris, and Jenny Wondracek, who leads a tracker project on AI-related legal mishaps, predict an exponential rise in similar cases [2]. The issue isn't confined to California; instances of lawyers citing nonexistent legal authority due to AI use have been identified in over 600 cases nationwide [3].
The core of the problem lies in AI's tendency to "hallucinate" or generate false information, especially when faced with complex queries or insufficient data. A May 2024 analysis by Stanford University's RegLab found that some forms of AI generate hallucinations in one out of three queries, despite three out of four lawyers planning to use generative AI in their practice [2].

In response to these challenges, California's legal authorities are scrambling to regulate AI use in the judiciary. The state's Judicial Council has issued guidelines requiring judges and court staff to either ban generative AI or adopt a usage policy by December 15, 2025 [2]. Additionally, the California Bar Association is considering strengthening its code of conduct to account for various forms of AI [3].

As AI continues to evolve and integrate into legal practice, the incident serves as a crucial reminder of the need for careful verification, ongoing education, and responsible use of AI tools in the legal profession. The challenge moving forward will be to harness the benefits of AI while mitigating the risks of misinformation and maintaining the integrity of legal proceedings.
Summarized by Navi