4 Sources
[1]
Attorney Slapped With Hefty Fine for Citing 21 Fake, AI-Generated Cases
A California attorney made an expensive mistake when trying to cut corners on a legal brief. Amir Mostafavi submitted an appeal in an employment-related case, but 21 of the 23 cases he cited to support his argument were fake -- hallucinated by AI -- or included phony quotes from existing cases. Judge Lee Smalley Edmon sanctioned Mostafavi and fined him $10,000.

"To state the obvious, it is a fundamental duty of attorneys to read the legal authorities they cite in appellate briefs," Judge Edmon says in a strongly worded opinion. "Plainly, counsel did not read the cases he cited before filing his appellate briefs: Had he read them, he would have discovered, as we did, that the cases did not contain the language he purported to quote, did not support the propositions for which they were cited, or did not exist."

Mostafavi claims he wrote the first draft of the brief but then used AI tools such as ChatGPT, Grok, Gemini, and Claude to "enhance" it. He did not read through the final version before filing it, and says he should not be fined because he did not know AI tools can make up information. Mostafavi passed the California bar exam in 2012, a decade before OpenAI claimed in 2023 that ChatGPT was smart enough to master that test. Mostafavi was also a part-time law professor at the People's College of Law in Los Angeles, an unaccredited school that shut down in 2023, according to his website. He told the court he has educated himself about AI hallucinations since being called out.

Judge Edmon does not condemn the use of AI in the legal profession, and in fact says, "there is nothing inherently wrong with an attorney appropriately using AI in a law practice." But attorneys who use AI must carefully fact-check every citation, and "cannot delegate that role to AI, computers, robots, or any other form of technology." The use of inaccurate, AI-generated text in legal briefings is becoming "far too common," the judge adds.

In 2024, a Stanford law professor was caught using AI-generated citations in a legal argument that was, ironically, in support of a bill to curb deepfakes. In 2023, two New York lawyers were sanctioned after submitting a brief that included fake ChatGPT citations. The issue isn't limited to lawyers. Alaska's top education official used AI to draft a statewide school policy on cell phone use in 2024, and four of the six studies cited were made up. Education Commissioner Deena Bishop admitted to using generative AI to draft the resolution and says it was an early version posted online prematurely.

AI companies don't have a solution to stop hallucinations. In a paper published this month, OpenAI said models make up information when they do not know the answer, and they are trained in a way that rewards guesswork. They also want to please users, who may not be happy if the AI cannot answer their question, The Register reports. So, they make stuff up. In a legal context, if an AI cannot find a suitable case for an attorney to cite, it might invent one. Judge Edmon argues this wastes the court's time and resources, since every citation in submitted documents must be double-checked, and is not tenable for the legal system in the long term.
[2]
California attorney fined $10k for filing an appeal with fake legal citations generated by AI
A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT. The fine appears to be the largest issued over AI fabrications by a California court and came with a blistering opinion stating that 21 of 23 quotes from cases cited in the attorney's opening brief were made up. It also noted that numerous out-of-state and federal courts have confronted attorneys for citing fake legal authority.

"We therefore publish this opinion as a warning," it continued. "Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations -- whether provided by generative AI or any other source -- that the attorney responsible for submitting the pleading has not personally read and verified."

The opinion, issued on Sept. 12 in California's 2nd District Court of Appeal, is a clear example of why the state's legal authorities are scrambling to regulate the use of AI in the judiciary. The state's Judicial Council two weeks ago issued guidelines requiring judges and court staff to either ban generative AI or adopt a generative AI use policy by Dec. 15. Meanwhile, the California Bar Association is considering whether to strengthen its code of conduct to account for various forms of AI, following a request by the California Supreme Court last month.

The Los Angeles-area attorney fined, Amir Mostafavi, told the court that he did not read the text generated by the AI model before submitting the appeal in July 2023, months after OpenAI marketed ChatGPT as capable of passing the bar exam. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court's time and the taxpayers' money, according to the opinion.

Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try to improve it. He said that he didn't know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI; it has become an important tool, just as online databases largely replaced law libraries, and until AI systems stop hallucinating fake information, he suggests lawyers who use AI proceed with caution. "In the meantime we're going to have some victims, we're going to have some damages, we're going to have some wreckages," he said. "I hope this example will help others not fall into the hole. I'm paying the price."

The fine issued to Mostafavi is the most costly penalty issued to an attorney by a California state court and one of the highest fines ever issued over attorney use of AI, according to Damien Charlotin, who teaches a class on AI and the law at a business school in Paris. He tracks instances of attorneys citing fake cases, primarily in Australia, Canada, the United States, and the United Kingdom.

In a widely publicized case in May, a U.S. District Court judge in California ordered two law firms to pay $31,100 in fees to defense counsel and the court for costs associated with using "bogus AI-generated research." In that ruling, the judge described feeling misled, said they almost cited the fake material in a judicial order, and said, "Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut."

Charlotin thinks courts and the public should expect to see an exponential rise in these cases in the future. When he started tracking court filings involving AI and fake cases earlier this year, he encountered a few cases a month. Now he sees a few cases a day.
Large language models confidently state falsehoods as facts, particularly when there are no supporting facts. "The harder your legal argument is to make, the more the model will tend to hallucinate, because they will try to please you," he said. "That's where the confirmation bias kicks in." A May 2024 analysis by Stanford University's RegLab found that although three out of four lawyers plan to use generative AI in their practice, some forms of AI generate hallucinations in one out of three queries. Detecting fake material cited in legal filings could get harder as models grow in size.

Another tracker of cases where lawyers cite nonexistent legal authority due to use of AI identifies 52 such cases in California and more than 600 nationwide. That number is expected to increase in the near future because AI innovation is outpacing the education of attorneys, said Nicholas Sanctis, a law student at Capital University Law School in Ohio.

Jenny Wondracek, who leads the tracker project, said she expects this trend to get worse because she still regularly encounters lawyers who don't know that AI makes things up or who believe that legal tech tools can eliminate all fake or false material generated by language models. "I think we'd see a reduction if (lawyers) just understood the basics of the technology," she said.

Like Charlotin, she suspects there are more instances of made-up cases generated by AI in state court filings than in federal courts, but a lack of standard filing methods makes it difficult to verify that. She said she encounters fake cases most often among overburdened attorneys or people who choose to represent themselves in family court. She suspects the number of arguments filed by attorneys who use AI and cite fake cases will continue to go up, but added that not just attorneys engage in the practice. In recent weeks, she's documented three instances of judges citing fake legal authority in their decisions.

As California considers how to treat generative AI and fake case citations, Wondracek said the state can consider approaches taken by other states, such as temporary suspensions, requiring attorneys who get caught to take courses to better understand how to ethically use AI, or requiring them to teach law students how to avoid making the same mistake.

Mark McKenna, co-director of the UCLA Institute for Technology, Law & Policy, praised fines like the one against Mostafavi as punishing lawyers for "an abdication of your responsibility as a party representing someone." He thinks the problem "will get worse before it gets better," because there's been a rush among law schools and private firms to adopt AI without thinking through the appropriate way to use it.

UCLA School of Law professor Andrew Selbst agrees, pointing out that the clerks who work for judges are recent law school graduates, and students are getting bombarded with the message that they must use AI or get left behind. Educators and other professionals report feeling similar pressures. "This is getting shoved down all our throats," he said. "It's being pushed in firms and schools and a lot of places and we have not yet grappled with the consequences of that."

___

This story was originally published in CalMatters and distributed through a partnership with The Associated Press.
[3]
LexisNexis exec says it's 'a matter of time' before attorneys lose their licenses over using open-source AI pilots in court | Fortune
A growing number of AI-created flaws found in legal documents submitted to courts have brought attorneys under increased scrutiny. Courts across the country have sanctioned attorneys for misuse of open-source LLMs like OpenAI's ChatGPT and Anthropic's Claude, which have made up "imaginary" cases, suggested that attorneys invent court decisions to strengthen their arguments, and provided improper citations to legal documents. Experts tell Fortune more of these cases will crop up -- and along with them steep penalties for the attorneys who misuse AI.

Damien Charlotin, a lawyer and research fellow at HEC Paris, runs a database of AI hallucination cases. He's tallied 376 cases to date, 244 of which are U.S. cases. Charlotin pointed out that attorneys can be particularly prone to oversights, as individuals in his profession delegate tasks to teams, often don't read all of the material collected by coworkers, and copy and paste strings of citations without proper fact-checking methods. Now AI is making the practice more apparent as attorneys adjust to the new tech.

"We have a situation where these (open-source models) are making up the law," Sean Fitzpatrick, LexisNexis North America, UK & Ireland CEO, told Fortune. "The stakes are getting higher, and that's just on the attorney's side." Fitzpatrick, a proponent of purpose-built AI applications for the legal market, admits the tech giants' low-cost pilot chatbots are good for things like summarizing documents and writing emails. But for "real legal work" like drafting motions, the models "can't do what lawyers need them to do," Fitzpatrick said. For example, courtroom-ready documents for cases that could involve Medicaid coverage decisions, Social Security benefits, or criminal prosecutions cannot afford to contain AI-created mistakes, he added.

Entering sensitive information into the open-source models also risks a breach of attorney-client privilege. Frank Emmert, executive director of the Center for International and Comparative Law at Indiana University and a legal AI expert, told Fortune that open-source models can receive privileged information from attorneys who use them. If someone else knows that, they could reverse engineer a contract between a client and attorney, for instance, using the right prompts. "You're not gonna find the full contract, but you're going to find enough information out there if they have been uploading these contracts," Emmert said. "Potentially you could find client names... or at least, you know, information that makes the client identifiable." If uploaded without permission by an attorney, this can become findable, publicly available information, since the open-source models don't protect privilege, Fitzpatrick said. "I think it's only a matter of time before we do see attorneys losing their license over this," he said.

Fitzpatrick said models like his company's generative tool Lexis+ AI, which inked a seven-year contract as an information provider to the federal judiciary in March, may be the answer to the risks of hallucinations and client privacy. LexisNexis doesn't train its LLMs on its customers' data, and prompts are encrypted. Plus, the tech is "most equipped" to solve hallucination issues since it pulls from a "walled garden of content," or a closed, proprietary system that's updated every day, Fitzpatrick said. Still, LexisNexis doesn't claim to maintain privilege and recognizes that the obligation always rests with the attorney, the company said.
But experts tell Fortune that AI used for legal purposes inherently comes with risks, open source or not. Emmert says he categorizes models into three baskets: open-access tools like ChatGPT, in-house applications he refers to as "small language models," and "medium language models" like LexisNexis' product.

Fear of mistakes has pushed firms to restrict use of open-source models and instead develop in-house applications, which are basically a server in the firm where attorneys upload their contracts and documents and start training an AI model on them, Emmert said. But compared to the vast amount of data available to open-source models, in-house applications will always have inferior answers, Emmert said. He said medium-sized models can be used to help with contract drafting, document review, evidence evaluation, or discovery procedures, but are still limited in what they can pull from in comparison to the open internet. "And the question is, can we fully trust them? ... One, that they're not hallucinating, and second, that the data really remains privileged and private," Emmert said. He said that if he were part of a law firm, he would hesitate to contract with this type of provider and spend a lot of money on something that is still in its infancy and may end up not being really useful.

"Personally, I believe that these AI tools are fantastic," Emmert said. "They can really help us get more work done at a higher level of quality with significantly lower investment of time." Still, he warned the industry is in a new era that requires accelerated education on something that was quickly adopted without being totally understood. "Starting in academia but continuing in the profession, we need to train every lawyer, every judge, to become masters of artificial intelligence -- not in the technical sense, but using it," Emmert said. "That's really where the challenge is."
[4]
California Attorney Fined $10k for Filing an Appeal With Fake Legal Citations Generated by AI
A California attorney faces a $10,000 fine for submitting an appeal with fake AI-generated legal citations. This case highlights the growing concerns about AI use in the legal profession and the need for careful verification of AI-generated content.
California attorney Amir Mostafavi was fined $10,000 for submitting a legal appeal containing fabricated citations from AI tools [1][2]. This incident highlights critical concerns about AI's ethical use in the legal profession and the vital need for content verification.

Judge Lee Smalley Edmon of California's 2nd District Court of Appeal confirmed that 21 of 23 citations in Mostafavi's brief were fictitious [1]. The judge stressed attorneys' fundamental duty to verify all cited legal authorities. Mostafavi admitted using AI tools for "enhancement" but neglected to review the final version [1]. This case underscores the risks of unchecked AI use and the paramount importance of human oversight in law.

Judge Edmon noted that inaccurate, AI-generated text in legal briefings is "far too common" [1]. Similar incidents are reported nationwide, with experts forecasting an exponential increase [2][3].

California's legal authorities are enacting regulatory measures: the Judicial Council has directed courts to ban generative AI or adopt a use policy by Dec. 15, and the State Bar is weighing stronger conduct rules to address AI [2].

Despite AI's potential, its limitations present risks: some tools hallucinate in roughly one out of three legal queries [2], consumer chatbots can expose privileged client information [3], and specialized legal models remain limited and not fully trusted [3].

Responsible AI integration in law demands careful consideration, robust regulation, and continuous education to maximize benefits and mitigate risks.
Summarized by Navi