4 Sources
[1]
Judge initially fooled by fake AI citations, nearly put them in a ruling
A plaintiff's law firms were sanctioned and ordered to pay $31,100 after submitting fake AI citations that nearly ended up in a court ruling. Michael Wilner, a retired US magistrate judge serving as special master in US District Court for the Central District of California, admitted that he initially thought the citations were real and "almost" put them into an order.

These aren't the first lawyers caught submitting briefs with fake citations generated by AI. In some cases, opposing attorneys figure out what happened and notify the judge. In this instance, the judge noticed that some citations were unverifiable but was troubled by how close he came to including the bogus citations in an order.

"Directly put, Plaintiff's use of AI affirmatively misled me," Judge Wilner wrote in a May 5 order. "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist. That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut."

It turned out that "the AI hallucinations weren't too far off the mark in their recitations of the substantive law," but that doesn't excuse the lawyers' use of AI in Wilner's view. "That's a pretty weak no-harm, no-foul defense of the conduct here," he wrote.

The sanctioned lawyers are representing former Los Angeles County District Attorney Jackie Lacey, who sued State Farm. Lacey's lawsuit alleges the insurance company refused to provide legal defense to her late husband, who faced a civil suit after pointing a gun at a group of activists who were on their porch.

"Large team of attorneys" didn't notice fake cases

Lacey is represented by what Wilner described as "a large team of attorneys" at the giant firm K&L Gates and a smaller firm, Ellis George LLP. Wilner described the incident as "a collective debacle."

"The attorneys representing Plaintiff in this civil action submitted briefs to the Special Master that contained bogus AI-generated research," Wilner wrote. "After additional proceedings and considerable thought, I conclude that an award combining litigation sanctions against Plaintiff and financial payments from the lawyers and law firms is appropriate to address this misconduct."

Wilner was appointed as special master to oversee a dispute regarding State Farm's assertion of various privileges in discovery. Wilner found that the plaintiff's "supplemental brief contained numerous false, inaccurate, and misleading legal citations and quotations... approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way. At least two of the authorities cited do not exist at all. Additionally, several quotations attributed to the cited judicial opinions were phony and did not accurately represent those materials."

Legal scholar Eugene Volokh wrote about the incident yesterday, noting "that both of the firms involved (the massive 1,700-lawyer national one and the smaller 45-lawyer predominantly California one) have, to my knowledge, excellent reputations, and the error is not at all characteristic of their work."

Second attempt at brief still had fake citations

"The lawyers admit that Mr. [Trent] Copeland, an attorney at Ellis George, used various AI tools to generate an 'outline' for the supplemental brief. That document contained the problematic legal research," Wilner wrote.
"Mr. Copeland sent the outline to lawyers at K&L Gates. They incorporated the material into the brief. No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief with the Special Master. Based on the sworn statements of all involved (which I have no reason to doubt), the attorneys at K&L Gates didn't know that Mr. Copeland used AI to prepare the outline; nor did they ask him." During Wilner's initial review of the brief, he was unable to confirm the accuracy of two citations. He emailed the lawyers about this, and K&L Gates "re-submitted the brief without the two incorrect citations -- but with the remaining AI-generated problems in the body of the text," Wilner wrote. Wilner wasn't fully satisfied with the firm's response that the two errors were "inadvertently included" in the brief and sought more detail. "I didn't discover that Plaintiff's lawyers used AI -- and re-submitted the brief with considerably more made-up citations and quotations beyond the two initial errors -- until I issued a later OSC [order to show cause] soliciting a more detailed explanation. The lawyers' sworn statements and subsequent submission of the actual AI-generated 'outline' made clear the series of events that led to the false filings. The declarations also included profuse apologies and honest admissions of fault." Judge: Don't outsource research to AI The lawyers involved "collectively acted in a manner that was tantamount to bad faith," Wilner wrote. He criticized Copeland's undisclosed use of AI products, saying that "no reasonably competent attorney should out-source research and writing to this technology -- particularly without any attempt to verify the accuracy of that material." Wilner also criticized the K&L Gates lawyers for failing to check the validity of the research sent to them. "[W]hen I contacted them and let them know about my concerns regarding a portion of their research, the lawyers' solution was to excise the phony material and submit the Revised Brief -- still containing a half-dozen AI errors," Wilner wrote. Taken together, the lawyers' actions "demonstrate reckless conduct with the improper purpose of trying to influence my analysis of the disputed privilege issues." The sanctions issued by Wilner affect the plaintiff's case. "I have struck, and decline to consider, any of the supplemental briefs that Plaintiff submitted on the privilege issue," Wilner wrote. "From this, I decline to award any of the discovery relief (augmenting a privilege log, ordering production of materials, or requiring in camera review of items) that Plaintiff sought in the proceedings that led up to the bogus briefs." Wilner ordered Ellis George and K&L Gates to pay $26,100 to the defense as reimbursement for fees paid to an arbitration and mediation firm and another $5,000 to cover some of the defense's other costs. Wilner decided not to sanction or penalize the lawyers individually, choosing instead to impose the penalties on the firms. "In their declarations and during our recent hearing, [the lawyers'] admissions of responsibility have been full, fair, and sincere. I also accept their real and profuse apologies. Justice would not be served by piling on them for their mistakes," Wilner wrote.
[2]
How AI is introducing errors into courtrooms
A few weeks ago, a California judge, Michael Wilner, became intrigued by a set of arguments some lawyers made in a filing. He went to learn more about those arguments by following the articles they cited. But the articles didn't exist. He asked the lawyers' firm for more details, and they responded with a new brief that contained even more mistakes than the first. Wilner ordered the attorneys to give sworn testimonies explaining the mistakes, in which he learned that one of them, from the elite firm Ellis George, used Google Gemini as well as law-specific AI models to help write the document, which generated false information. As detailed in a filing on May 6, the judge fined the firms $31,000.

Last week, another California-based judge caught another hallucination in a court filing, this time submitted by the AI company Anthropic in the lawsuit that record labels have brought against it over copyright issues. One of Anthropic's lawyers had asked the company's AI model Claude to create a citation for a legal article, but Claude included the wrong title and author. Anthropic's attorney admitted that the mistake was not caught by anyone reviewing the document.

Lastly, and perhaps most concerning, is a case unfolding in Israel. After police arrested an individual on charges of money laundering, Israeli prosecutors submitted a request asking a judge for permission to keep the individual's phone as evidence. But they cited laws that don't exist, prompting the defendant's attorney to accuse them of including AI hallucinations in their request. The prosecutors, according to Israeli news outlets, admitted that this was the case, receiving a scolding from the judge.

Taken together, these cases point to a serious problem. Courts rely on documents that are accurate and backed up with citations -- two traits that AI models, despite being adopted by lawyers eager to save time, often fail miserably to deliver. Those mistakes are getting caught (for now), but it's not a stretch to imagine that at some point soon, a judge's decision will be influenced by something that's totally made up by AI, and no one will catch it.

I spoke with Maura Grossman, who teaches at the School of Computer Science at the University of Waterloo as well as Osgoode Hall Law School, and has been a vocal early critic of the problems that generative AI poses for courts. She wrote about the problem back in 2023, when the first cases of hallucinations started appearing. She said she thought courts' existing rules requiring lawyers to vet what they submit to the courts, combined with the bad publicity those cases attracted, would put a stop to the problem. That hasn't panned out. Hallucinations "don't seem to have slowed down," she says. "If anything, they've sped up." And these aren't one-off cases with obscure local firms, she says. These are big-time lawyers making significant, embarrassing mistakes with AI. She worries that such mistakes are also cropping up more in documents not written by lawyers themselves, like expert reports (in December, a Stanford professor and expert on AI admitted to including AI-generated mistakes in his testimony).

I told Grossman that I find all this a little surprising. Attorneys, more than most, are obsessed with diction. They choose their words with precision. Why are so many getting caught making these mistakes? "Lawyers fall in two camps," she says. "The first are scared to death and don't want to use it at all." But then there are the early adopters.
These are lawyers tight on time or without a cadre of other lawyers to help with a brief. They're eager for technology that can help them write documents under tight deadlines. And their checks on the AI's work aren't always thorough.
[3]
Judge slams lawyers for 'bogus AI-generated research'
A California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them - only to find that they didn't exist," Judge Wilner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."

As noted in the filing, a plaintiff's legal representative for a civil lawsuit against State Farm used AI to generate an outline for a supplemental brief. However, this outline contained "bogus AI-generated research" when it was sent to a separate law firm, K&L Gates, which added the information to a brief. "No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief," Judge Wilner writes.

When Judge Wilner reviewed the brief, he found that "at least two of the authorities cited do not exist at all." After asking K&L Gates for clarification, the firm resubmitted the brief, which Judge Wilner said contained "considerably more made-up citations and quotations beyond the two initial errors." He then issued an Order to Show Cause, resulting in lawyers giving sworn statements that confirmed the use of AI. The lawyer who created the outline admitted to using Google Gemini, as well as the AI legal research tools in Westlaw Precision with CoCounsel.

This isn't the first time lawyers have been caught using AI in the courtroom. Former Trump lawyer Michael Cohen cited made-up court cases in a legal document after mistaking Google Gemini, then called Bard, for "a super-charged search engine" rather than an AI chatbot. A judge also found that lawyers suing a Colombian airline included a slew of phony cases generated by ChatGPT in their brief.

"The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong," Judge Wilner writes. "And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm's way."
[4]
Law Firms Caught and Punished for Passing Around "Bogus" AI Slop in Court
A California judge fined two law firms $31,000 after discovering that they'd included AI slop in a legal brief -- the latest instance in a growing tide of avoidable legal drama wrought by lawyers using generative AI to do their work without any due diligence.

As The Verge reported this week, the court filing in question was a brief for a civil lawsuit against the insurance giant State Farm. After its submission, a review of the brief found that it contained "bogus AI-generated research" that led to the inclusion of "numerous false, inaccurate, and misleading legal citations and quotations," as judge Michael Wilner wrote in a scathing ruling.

According to the ruling, it was only after the judge requested more information about the error-riddled brief that lawyers at the firms involved fessed up to using generative AI. And if he hadn't caught on to it, Wilner cautioned, the AI slop could have made its way into an official judicial order.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Wilner wrote in his ruling. "That's scary."

"It almost led to the scarier outcome (from my perspective)," he added, "of including those bogus materials in a judicial order."

A lawyer at one of the firms involved with the ten-page brief, Ellis George, used Google's Gemini and a few other law-specific AI tools to draft an initial outline. That outline included many errors, but was passed along to the next law firm, K&L Gates, without any corrections. Incredibly, the second firm also failed to notice and correct the fabrications.

"No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief," Wilner wrote in the ruling.

After the brief was submitted, a judicial review found that a staggering nine out of 27 legal citations included in the filing "were incorrect in some way," and "at least two of the authorities cited do not exist." Wilner also found that quotes "attributed to the cited judicial opinions were phony and did not accurately represent those materials."

As for his decision to levy the hefty fines, Wilner said the egregiousness of the failures, coupled with how compelling the AI's made-up responses were, necessitated "strong deterrence."

"Strong deterrence is needed," Wilner wrote, "to make sure that attorneys don't succumb to this easy shortcut."
A California judge fines law firms $31,000 for submitting AI-generated fake citations in a legal brief, highlighting growing concerns about AI use in the legal system.
In a startling incident that highlights the growing concerns surrounding artificial intelligence (AI) in the legal system, a California judge has imposed $31,000 in sanctions on two law firms for submitting a brief containing "bogus AI-generated research" [1]. Judge Michael Wilner, serving as a special master in the US District Court for the Central District of California, admitted that he was initially persuaded by the fake citations and nearly included them in a judicial order.
The case involved a civil lawsuit against State Farm, where attorneys from K&L Gates and Ellis George LLP represented the plaintiff. Trent Copeland, an attorney at Ellis George, used AI tools including Google Gemini and law-specific AI models to generate an outline for a supplemental brief [2]. This outline, containing problematic legal research, was then sent to K&L Gates, which incorporated the material into the brief without proper verification.
Judge Wilner's review revealed that approximately nine out of 27 legal citations in the ten-page brief were incorrect, with at least two cited authorities not existing at all [3]. Additionally, several quotations attributed to judicial opinions were fabricated and did not accurately represent the cited materials.
In his ruling, Judge Wilner expressed grave concern over the incident, stating, "Directly put, Plaintiff's use of AI affirmatively misled me" [1]. He emphasized the need for strong deterrence to prevent attorneys from succumbing to the "easy shortcut" of using AI without proper verification.
This incident is not isolated. Similar cases have been reported in other jurisdictions, including one involving Anthropic in a copyright lawsuit and another in Israel where prosecutors cited non-existent laws [2]. Maura Grossman, a professor at the University of Waterloo and Osgoode Hall Law School, notes that these AI-induced errors seem to be increasing, potentially compromising the integrity of court proceedings.
The incident highlights a growing divide among lawyers regarding AI use. While some are cautious about adopting the technology, others, particularly those under time constraints, are eager to leverage AI for assistance [2]. However, the risks of unchecked AI use in legal documents are becoming increasingly apparent.
Judge Wilner's ruling serves as a stark reminder of the importance of human oversight in AI-assisted legal work. He emphasized that "no reasonably competent attorney should out-source research and writing to this technology -- particularly without any attempt to verify the accuracy of that material" [4]. The incident underscores the need for clear guidelines and ethical standards for AI use in the legal profession to maintain the integrity of the judicial system.
Summarized by Navi