2 Sources
[1]
Lawyers Are Using AI to Slop-ify Their Legal Briefs, and It's Getting Bad
AI is good for a lot of things: namely, cheating on stuff and pretending like you're more productive than you actually are. Recently, this affliction has spread to a number of professions where you would have thought the work ethic is slightly better than it apparently is. Case in point: lawyers.

Lawyers apparently love chatbots like ChatGPT because they can help them power through the drudgery of writing legal briefs. Unfortunately, as most of us know, chatbots are also prone to making stuff up, and, more and more, this is leading to legal blunders with serious implications for everybody involved.

The New York Times has a new story out on this unfortunate trend, noting that punishments are increasingly being doled out to lawyers who are caught sloppily using AI (these punishments can involve a fine or some other minor inconvenience). Apparently, due to the stance of the American Bar Association, it's okay for lawyers to use AI in the course of their legal work. They're just supposed to make sure that the text the chatbot spits out is, you know, correct, and not full of fabricated legal cases, which is something that seems to keep happening. Indeed, the Times notes:

...according to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for A.I. blunders. Some of those stem from people's use of chatbots in lieu of hiring a lawyer. Chatbots, for all their pitfalls, can help those representing themselves "speak in a language that judges will understand," said Jesse Schaefer, a North Carolina-based lawyer...But an increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline.

Now, some lawyers are calling out other lawyers for their blunders and are trying to create a tracking system that can compile information on cases involving AI misuse. The Times notes the work of Damien Charlotin, a French attorney who started an online database to track legal blunders involving AI. Scrolling through Charlotin's website is definitely sorta terrifying, since there are currently 11 pages' worth of cases involving this numbskullery (the researchers say they've identified 509 cases so far).

The newspaper notes that there is a "growing network of lawyers who track down A.I. abuses committed by their peers" and post them online, in an apparent effort to shame the behavior and alert people to the fact that it's happening. However, it's not clear that it's having the impact it needs to, so far.

"These cases are damaging the reputation of the bar," Stephen Gillers, an ethics professor at New York University School of Law, told the newspaper. "Lawyers everywhere should be ashamed of what members of their profession are doing."
[2]
Judge Blasts Lawyer Caught Using ChatGPT in Divorce Court, Orders Him to Take Remedial Law Classes
"In our view, this does not satisfy the requirement of competent representation." Artificial intelligence is here, and it's wreaking havoc on court rooms throughout the US. The latest AI law blunder comes from the Maryland appellate court, where a family lawyer representing a mother in a custody battle was caught filing court briefs cooked up with ChatGPT. The Daily Record, which publishes summaries of Maryland court opinions, reported that the mother's lawyer submitted a complaint for divorce gushing with AI hallucinated legal citations which made it into the court record. Like other ChatGPT legal muckups, many of the citations referenced case law which simply did not exist. The filing also contained existing legal citations which contradicted the arguments made in the brief. In his defense, the attorney, Adam Hyman, said that he "was not involved directly in the research of the offending citations." Instead, he blamed a law clerk he says used ChatGPT to find the citations, as well as to edit the brief before sending it on. Per a later filing, Hyman wrote that the clerk wasn't aware of the risks of AI hallucinations, the phenomenon in which chatbots make up false information to satisfy users' queries. When the clerk forwarded a draft of the erroneous brief, the lawyer said he didn't vet the cases referenced, and added that he "does very little appellate work." Of course, that's a terrible excuse; as a lawyer, it's his job to review what clerks write for accuracy -- and to work with them to understand proper workflows and standards for legal writing. In an opinion filed after the fact, Maryland appellate Judge Kathryn Grill Graeff wrote that "it is unquestionably improper for an attorney to submit a brief with fake cases generated by AI." "[C]ounsel admitted that he did not read the cases cited. Instead, he relied on his law clerk, a non-lawyer, who also clearly did not read the cases, which were fictitious," the judge scathed. "In our view, this does not satisfy the requirement of competent representation. A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited." Grill Graeff noted that a blunder like this wouldn't usually call for an opinion -- which sets legal precedent -- but that she wanted to "address a problem that is recurring in courts around the country": AI in the courtroom. As part of the process, Hyman was required to admit responsibility for the improper case citations, as he was the only licensed attorney on the case. Both the lawyer and the clerk were required to complete "legal education courses on the ethical use of AI," as well as to implement office-wide protocols for citation verification. Hyman was also referred to the Attorney Grievance Commission for further discipline. Grill Graeff noted that this was the first time that Maryland's appellate courts had to address the problem -- though if recent trends are any indication, it certainly won't be the last.
Courts across the US are increasingly sanctioning lawyers who submit legal briefs containing AI-generated fabricated citations and case law, prompting calls for better oversight and education on AI ethics in legal practice.

The legal profession is facing an unprecedented crisis as lawyers increasingly rely on artificial intelligence tools like ChatGPT to draft legal briefs, leading to a surge in court sanctions and professional misconduct cases. According to recent reports, attorneys are submitting documents containing fabricated case citations and non-existent legal precedents generated by AI chatbots, prompting courts across the United States to implement disciplinary measures [1].

While the American Bar Association permits lawyers to use AI in their legal work, they are required to verify the accuracy of AI-generated content. However, many attorneys are failing to meet this basic professional standard, leading to what legal scholars describe as a "hotbed for A.I. blunders" that is damaging the reputation of the entire legal profession [1].

A recent case in Maryland's appellate court exemplifies the severity of this issue. Attorney Adam Hyman was sanctioned after submitting a divorce complaint containing multiple AI-generated fake citations that made it into the official court record. The brief included references to legal cases that simply did not exist, as well as real citations that contradicted the arguments being made [2].

Judge Kathryn Grill Graeff delivered a scathing opinion, stating that "it is unquestionably improper for an attorney to submit a brief with fake cases generated by AI." She emphasized that competent legal representation requires attorneys to read and verify all cited legal authority, noting that Hyman's reliance on an unvetted law clerk who used ChatGPT without understanding AI hallucination risks fell far short of professional standards [2].

The Maryland case resulted in comprehensive sanctions designed to address both immediate accountability and long-term prevention. Hyman was required to admit responsibility for the improper citations and was referred to the Attorney Grievance Commission for further discipline. Both the attorney and his law clerk were mandated to complete "legal education courses on the ethical use of AI" and implement office-wide protocols for citation verification [2].

Judge Grill Graeff specifically noted that while such blunders wouldn't typically warrant a formal opinion, she felt compelled to address "a problem that is recurring in courts around the country." This marks the first time Maryland's appellate courts have formally addressed AI misuse, though legal experts predict it won't be the last [2].
The scope of AI misuse in legal practice has prompted the creation of specialized tracking systems. French attorney Damien Charlotin has established an online database documenting legal blunders involving AI, which currently spans 11 pages and includes 509 identified cases of professional misconduct related to AI usage [1].

A growing network of lawyers is actively monitoring and documenting AI abuses committed by their peers, posting findings online in an effort to shame inappropriate behavior and raise public awareness. However, the effectiveness of these public accountability measures remains unclear, as cases continue to proliferate across jurisdictions [1].

Legal ethics experts are expressing serious concern about the long-term implications for the profession's credibility. Stephen Gillers, an ethics professor at New York University School of Law, told reporters that "these cases are damaging the reputation of the bar" and that "lawyers everywhere should be ashamed of what members of their profession are doing" [1].
Summarized by Navi