3 Sources
[1]
Lawyers Are Using AI to Slop-ify Their Legal Briefs, and It's Getting Bad
AI is good for a lot of things: namely, cheating on stuff and pretending you're more productive than you actually are. Recently, this affliction has spread to a number of professions where you would have thought the work ethic was slightly better than it apparently is. Case in point: lawyers. Lawyers apparently love chatbots like ChatGPT because they can help them power through the drudgery of writing legal briefs. Unfortunately, as most of us know, chatbots are also prone to making stuff up, and, more and more, this is leading to legal blunders with serious implications for everybody involved. The New York Times has a new story out on this unfortunate trend, noting that punishments are increasingly being doled out to lawyers who are caught sloppily using AI (these punishments can involve a fine or some other minor inconvenience). Apparently, under the stance of the American Bar Association, it's okay for lawyers to use AI in the course of their legal work. They're just supposed to make sure that the text the chatbot spits out is, you know, correct, and not full of fabricated legal cases, which is something that seems to keep happening. Indeed, the Times notes: "...according to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for A.I. blunders. Some of those stem from people's use of chatbots in lieu of hiring a lawyer. Chatbots, for all their pitfalls, can help those representing themselves 'speak in a language that judges will understand,' said Jesse Schaefer, a North Carolina-based lawyer...But an increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline." Now, some lawyers are calling out other lawyers for their blunders and trying to create a tracking system that compiles information on cases involving AI misuse. The Times notes the work of Damien Charlotin, a French attorney who started an online database to track legal blunders involving AI. Scrolling through Charlotin's website is definitely sorta terrifying, since there are currently 11 pages' worth of cases involving this numbskullery (the researchers say they've identified 509 cases so far). The newspaper notes that there is a "growing network of lawyers who track down A.I. abuses committed by their peers" and post them online, in an apparent effort to shame the behavior and alert people to the fact that it's happening. However, it's not clear that the effort is having the impact it needs to, so far. "These cases are damaging the reputation of the bar," Stephen Gillers, an ethics professor at New York University School of Law, told the newspaper. "Lawyers everywhere should be ashamed of what members of their profession are doing."
[2]
Judge Blasts Lawyer Caught Using ChatGPT in Divorce Court, Orders Him to Take Remedial Law Classes
"In our view, this does not satisfy the requirement of competent representation." Artificial intelligence is here, and it's wreaking havoc on court rooms throughout the US. The latest AI law blunder comes from the Maryland appellate court, where a family lawyer representing a mother in a custody battle was caught filing court briefs cooked up with ChatGPT. The Daily Record, which publishes summaries of Maryland court opinions, reported that the mother's lawyer submitted a complaint for divorce gushing with AI hallucinated legal citations which made it into the court record. Like other ChatGPT legal muckups, many of the citations referenced case law which simply did not exist. The filing also contained existing legal citations which contradicted the arguments made in the brief. In his defense, the attorney, Adam Hyman, said that he "was not involved directly in the research of the offending citations." Instead, he blamed a law clerk he says used ChatGPT to find the citations, as well as to edit the brief before sending it on. Per a later filing, Hyman wrote that the clerk wasn't aware of the risks of AI hallucinations, the phenomenon in which chatbots make up false information to satisfy users' queries. When the clerk forwarded a draft of the erroneous brief, the lawyer said he didn't vet the cases referenced, and added that he "does very little appellate work." Of course, that's a terrible excuse; as a lawyer, it's his job to review what clerks write for accuracy -- and to work with them to understand proper workflows and standards for legal writing. In an opinion filed after the fact, Maryland appellate Judge Kathryn Grill Graeff wrote that "it is unquestionably improper for an attorney to submit a brief with fake cases generated by AI." "[C]ounsel admitted that he did not read the cases cited. Instead, he relied on his law clerk, a non-lawyer, who also clearly did not read the cases, which were fictitious," the judge scathed. "In our view, this does not satisfy the requirement of competent representation. A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited." Grill Graeff noted that a blunder like this wouldn't usually call for an opinion -- which sets legal precedent -- but that she wanted to "address a problem that is recurring in courts around the country": AI in the courtroom. As part of the process, Hyman was required to admit responsibility for the improper case citations, as he was the only licensed attorney on the case. Both the lawyer and the clerk were required to complete "legal education courses on the ethical use of AI," as well as to implement office-wide protocols for citation verification. Hyman was also referred to the Attorney Grievance Commission for further discipline. Grill Graeff noted that this was the first time that Maryland's appellate courts had to address the problem -- though if recent trends are any indication, it certainly won't be the last.
[3]
Lawyers hit with fines after AI flubs fill their filings: 'They...
Lawyers across the country are getting busted for using AI to write their legal briefs -- and their excuses are even more creative than the fake cases they've allegedly been citing. From blaming hackers to claiming that toggling between windows is just too hard, attorneys are desperately trying to dodge sanctions for a tidal wave of AI-generated nonsense clogging up court dockets. But judges are tired of hearing it and a group of "legal vigilantes" is making sure none of these blunders go unnoticed. A network of lawyers has been tracking down every instance of AI misuse they can find, compiling them in a public database that has swelled to over 500 cases. The database maintained by France-based lawyer and researcher Damien Charlotin exposes fake case citations, bogus quotes and the attorneys responsible -- hoping to shame the profession into cleaning up its act. The number of cases keeps growing, Charlotin told The Post on Wednesday. "[T]his has accelerated exactly at the moment I started cataloguing these cases, from maybe a handful a month to two or three a day," he said in an email. "I think this will continue to grow for a time," Charlotin added. He said some examples are just mistakes, and "hopefully awareness will reduce them, but that's not a given." In other instances, AI is misused by "reckless, sloppy attorneys or vexatious litigants," the researcher wrote. "I am afraid there is little stopping them," he added. Amir Mostafavi, a Los Angeles-area attorney, was recently slapped with a $10,000 fine after filing an appeal in which 21 of 23 case quotes were completely made up by ChatGPT. His excuse? He said he wrote the appeal himself and just asked ChatGPT to "try and improve it," not knowing it would add fake citations. "In the meantime we're going to have some victims, we're going to have some damages, we're going to have some wreckages," Mostafavi told CalMatters. "I hope this example will help others not fall into the hole. I'm paying the price." Ars Technica reported that Innocent Chinweze, a New York City-based lawyer, was recently caught filing a brief riddled with fake cases. He said he'd used Microsoft Copilot for the job. Then, in a bizarre pivot, he claimed his computer had been hacked and that malware was the real culprit. The judge, Kimon C. Thermos, called the excuse an "incredible and unsupported statement." After a lunch break, Chinweze "dramatically" changed his story again -- this time by claiming that he didn't know AI could make things up. Chinweze was fined $1,000 and referred to a grievance committee for conduct that "seriously implicated his honesty, trustworthiness, and fitness to practice law." Another lawyer, Alabama attorney James A. Johnson, blamed his "embarrassing mistake" on the sheer difficulty of using a laptop, according to Ars Technica. He said he was at a hospital with a sick family member and under "time pressure and difficult personal circumstance." Instead of using a bar-provided legal research tool, he opted for a Microsoft Word plug-in called Ghostwriter Legal because, he claimed, it was "tedious to toggle back and forth between programs on [his] laptop with the touchpad." Judge Terry F. Moorer was unimpressed, noting that Ghostwriter clearly stated it used ChatGPT. Johnson's client was even less impressed, firing him on the spot. The judge hit the attorney with a $5,000 fine, ruling his laziness was "tantamount to bad faith." 
Such cases are "damaging the reputation of the bar," Stephen Gillers, an ethics professor at New York University School of Law, told the New York Times. "Lawyers everywhere should be ashamed of what members of their profession are doing," he added. Still, the excuses for AI mistakes keep coming. One lawyer blamed his client for helping draft a problematic filing. Another claimed she had "login issues with her Westlaw subscription." A Georgia lawyer insisted she'd "accidentally filed a rough draft." But the penalties are getting steeper. Florida lawyer James Martin Paul was reportedly hit with a staggering $85,000 sanction for "repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred." When he argued the fine was too high, the court shot back that caving to his arguments "would only benefit serial hallucinators." Illinois attorney William T. Panichi has been sanctioned at least three times, Ars Technica found. After the first, he promised the court, "I'm not going to do it again," just before getting hit with two more rounds of sanctions a month later. Judges are losing their patience. "At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud," wrote US Bankruptcy Judge Michael B. Slade. Another judge, Nancy Miller, blasted a lawyer who argued it only takes "7.6 seconds" to check a citation. Miller noted that the lawyer herself had failed to take those "precious seconds" to check her own work. As one Texas judge put it, "At one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations." The Post has sought comment from Mostafavi, Chinweze, Johnson, Paul and Panichi.
Lawyers across the US are increasingly facing fines and sanctions for submitting legal briefs containing AI-generated fake case citations and fabricated legal precedents. A database now tracks over 500 cases of AI misuse in legal proceedings, with penalties ranging from $1,000 to $85,000.
The legal profession is grappling with an unprecedented wave of artificial intelligence misuse as lawyers increasingly rely on chatbots like ChatGPT to generate legal briefs, often with disastrous consequences. According to recent reports, attorneys across the United States are facing mounting sanctions and fines for submitting court documents containing AI-generated fabrications, fake case citations, and non-existent legal precedents [1].
The American Bar Association permits lawyers to use AI in their legal work, but requires them to verify the accuracy of AI-generated content. However, many attorneys are failing to meet this basic professional standard, leading to what legal experts describe as a crisis of competence and ethics in the profession [1].

French attorney and researcher Damien Charlotin has created a comprehensive online database documenting instances of AI misuse in legal proceedings. The database has swelled to over 500 documented cases, spanning 11 pages of legal blunders involving artificial intelligence [1]. Charlotin reports that the frequency of these incidents has dramatically increased, accelerating from "maybe a handful a month to two or three a day" since he began cataloguing cases [3].

A growing network of lawyers is actively tracking down AI abuses committed by their peers, posting them online in an apparent effort to shame the behavior and alert the public to the widespread nature of the problem [1].

One of the most significant recent cases involved Maryland family lawyer Adam Hyman, who was caught filing court briefs in a custody battle that contained AI-generated fake legal citations. Maryland appellate Judge Kathryn Grill Graeff issued a scathing opinion, noting that "it is unquestionably improper for an attorney to submit a brief with fake cases generated by AI" [2]. Hyman blamed a law clerk who used ChatGPT without understanding the risks of AI hallucinations, but the judge ruled this did not satisfy the requirement of competent representation [2].

Los Angeles attorney Amir Mostafavi faced a $10,000 fine after filing an appeal in which 21 of 23 case quotes were completely fabricated by ChatGPT. His defense claimed he only asked the AI to "try and improve" his self-written appeal, unaware it would add fake citations [3].

The penalties are becoming increasingly severe. Florida lawyer James Martin Paul received a staggering $85,000 sanction for "repeated, abusive, bad-faith conduct," with the court noting that reducing the fine "would only benefit serial hallucinators" [3].
Lawyers caught using AI improperly have offered increasingly creative excuses for their misconduct. New York City attorney Innocent Chinweze initially blamed Microsoft Copilot for fake citations in his brief, then claimed his computer had been hacked by malware. When Judge Kimon C. Thermos called this an "incredible and unsupported statement," Chinweze changed his story again, claiming ignorance about AI's capacity for fabrication. He was fined $1,000 and referred to a grievance committee [3].

Alabama attorney James A. Johnson blamed his use of AI on the difficulty of using a laptop touchpad while at a hospital with a sick family member. Judge Terry F. Moorer was unimpressed, fining Johnson $5,000 for conduct "tantamount to bad faith" [3].

Judges are expressing mounting frustration with the situation. US Bankruptcy Judge Michael B. Slade wrote bluntly: "At this point, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud" [3].
Summarized by Navi