3 Sources
[1]
Lawyer Caught Using AI While Explaining to Court Why He Used AI
The attorney not only submitted AI-generated fake citations in a brief for his clients, but also included "multiple new AI-hallucinated citations and quotations" in the process of opposing a motion for sanctions.

An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month. New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff's attorneys' request for sanctions that the defendant's counsel, Michael Fourte's law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff's motion for sanctions, but also included "multiple new AI-hallucinated citations and quotations" while opposing that motion. "In other words," the judge wrote, "counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI."
[2]
Lawyer Gets Caught Using AI in Court, Responds in the Worst Possible Way
What is it with lawyers and AI? We don't know, but it feels like an inordinate number of them keep screwing up with AI tools, apparently never learning from their colleagues who get publicly crucified for making the same mistake. But this latest blunder from a New York attorney, in a lawsuit centered on a disputed loan, takes the cake.

As 404 Media reports, after getting caught using AI by leaving hallucinated quotes and citations in his court filings, defense lawyer Michael Fourte then submitted a brief explaining his AI usage -- which was also written with a large language model. Needless to say, the judge was not amused.

"In other words, counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI," wrote New York Supreme Court Judge Joel Cohen in a decision filed earlier this month. "This case adds yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession," the judge further lamented.

Perhaps one of the reasons we keep hearing about these completely avoidable catastrophes is that catching your opponent making even a single mistake with an AI tool is an easy way to gain the upper hand in court, so everyone's on the lookout for them. That's what happened here: it was the plaintiff's legal team that first caught the mistakes, which included inaccurate or completely made-up citations and quotations. The plaintiffs then filed a request for the judge to sanction Fourte, which is when he committed the legal equivalent of shoving a stick between the spokes of his own bike wheel: he used AI again.

In his opposition to the sanctions motion, Fourte's submitted document contained more than double the number of made-up or erroneous citations as the first time, an astonished-sounding Cohen wrote. His explanation was also pretty unsatisfactory. Fourte neither admitted nor denied the use of AI, wrote Judge Cohen, but instead tried to pass off the botched citations as merely "innocuous paraphrases of accurate legal principles."

Somehow, it gets worse. After the plaintiffs flagged the new wave of errors in Fourte's opposition to the sanctions motion, the defense lawyer -- who by now was presumably sweating more than a character in a Spaghetti Western -- strongly implied that AI wasn't used at all, complaining that the plaintiffs provided "no affidavit, forensic analysis, or admission" confirming the use of the tech. When he had an opportunity to set the record straight during oral arguments in court, Fourte further insisted that the "cases are not fabricated at all," the judge noted.

Eventually, though, he cracked. After getting further grilled on how a completely made-up court case ended up in his filings, Fourte admitted he "did use AI." He also, in practically the same breath, said he took "full responsibility" for the AI-generated nonsense -- but also tried to shift some of the blame onto additional staff he'd brought onto the case. Classic.

Later, Fourte "muddled those statements of contrition," the judge mused, by saying, "I never said I didn't use AI. I said that I didn't use unvetted AI." To which the judge called BS.

"If you are including citations that don't exist, there's only one explanation for that. It's that AI gave you cites and you didn't check them," Cohen responded to Fourte's pleas. "That's the definition of unvetted AI."

After all the back and forth, Judge Cohen granted the plaintiff's motion for sanctions. Fourte declined to discuss specifics when 404 Media reached out for comment.
"As this matter remains before the Court, and out of respect for the process and client confidentiality, we will not comment on case specifics," he told the outlet. "We have addressed the issue directly with the Court and implemented enhanced verification and supervision protocols. We have no further comment at this time." While his case seems especially egregious, Fourte is definitely not alone. Dozens of other lawyers have been caught using AI for largely the same reason: submitting erroneous or made up case law. Some used public chatbots like ChatGPT, but others used AI tools special-built for law, illustrating how fundamentally error-prone the tech remains. One of the biggest firms caught in an AI scandal is Morgan & Morgan, which rushed out a panicked company-wide email after two of its lawyers faced sanctions for citing AI hallucinations earlier this year. Judges, meanwhile, have done their darndest to make an example out of lawyers careless enough to rely on the word of an LLM -- but clearly, this latest case shows, not everyone's getting the memo.
[3]
A lawyer caught using AI citations and quotes in a Supreme Court legal case has been called out by a judge for defending themselves with, err, AI citations and quotes
For fans of real-life legal drama, I've got a doozy for you. An attorney in a New York Supreme Court legal case was accused by a plaintiff and their lawyers of providing "inaccurate citations and quotations" that appeared to be "hallucinated" by an AI tool. When said lawyer opposed the claims, they were then accused of defending their use of AI with, well, AI (via 404 Media). Or, to put it more succinctly, in the words of Judge Joel Cohen: "Counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI."

The case itself involved a dispute between family members over a defaulted loan, usually a relatively straightforward proceeding. However, once the plaintiff brought to the court's attention the apparently AI-generated citations and quotes in the defense counsel's brief, the defense claimed that "these passages were intended as paraphrases or summarized statements of the legal principles established in the cited authorities." The judge points out that the purported paraphrases included "bracketed terms to indicate departure from a quotation (not something one would expect to see in an intended paraphrase) and comments such as 'citation omitted.' Moreover, the cited cases often did not stand for the propositions quoted, were completely unrelated in subject matter, and in one instance did not exist at all."

After being called out on the potential use of AI by the plaintiff, the defense then filed opposition to the claim, which was also later deemed to have been made with the help of AI. According to the judge: "This time, [the] plaintiff has identified more than double the number of mis-cites, including four citations that do not exist, seven quotations that do not exist in the cited cases, and three that do not support the propositions for which they are offered."

The judge called the case "yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession," and stated that, while some of the citations happened to contain "arguably correct statements of law", it made them "no less frivolous." Well, that must have made for an awkward courtroom.

Still, as the judge references, this is far from the first time the use of AI has reared its ugly head in legal proceedings. Earlier this year, a judge ordered a ChatGPT-using lawyer to not only pay $5,500 in sanctions, but to attend a plenary session entitled "Smarter Than Ever: The Potential and Perils of Artificial Intelligence" for their transgressions. It appears that courts look unfavourably upon AI avatars for personal representation, too. Back in April, a plaintiff in an employment dispute attempted to use just such an AI stand-in to deliver their opening presentation, as they felt their tendency to mumble would negatively affect their case, only for the artificial representative to be shut down by the judge before it could finish its first sentence.

All of this comes as no surprise, I reckon. The legal profession is a complicated business, and given that AI tools are often touted for their ability to scythe through dense information to pull out the relevant facts, it's no great wonder that some are taking AI-based shortcuts in legal proceedings. And so far, they usually seem to come unstuck. However, part of me wonders how much AI-generated content has already made its way into legal cases without being noticed.
In this case, the plaintiff and their counsel appear to have been on the ball, and as a result the judge was able to call out the discrepancy. My favourite law, though, the law of averages, suggests that at least some AI-generated arguments, citations, and quotations might have slipped through the cracks. Pessimistic? Me? Never. I'm just a happy-go-lucky bundle of joy.
A New York attorney faces sanctions after submitting AI-generated fake citations in court documents, then compounding the error by using AI again to defend his initial AI usage. The case highlights growing concerns about AI misuse in the legal profession.
In a recent New York Supreme Court commercial case, defense attorney Michael Fourte found himself at the center of a controversy involving the misuse of artificial intelligence (AI) in legal proceedings. The incident has raised serious concerns about the ethical implications and potential pitfalls of using AI tools in the legal profession [1].

The case, which originally involved a dispute over a defaulted loan between family members, took an unexpected turn when the plaintiff's legal team discovered inaccurate citations and quotations in Fourte's summary judgment brief. These errors appeared to be "hallucinated" by an AI tool, prompting the plaintiffs to file a motion for sanctions [1].

In a surprising twist, Fourte's response to the sanctions motion contained even more AI-generated content. New York Supreme Court Judge Joel Cohen noted that the opposition brief included "multiple new AI-hallucinated citations and quotations" [2]. This led Judge Cohen to observe, "In other words, counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI" [1].

The situation worsened as Fourte attempted to explain his actions. Initially, he neither admitted nor denied the use of AI, describing the erroneous citations as "innocuous paraphrases of accurate legal principles" [2]. When pressed further, Fourte implied that AI wasn't used at all, claiming there was "no affidavit, forensic analysis, or admission" confirming its use [2].

Eventually, under continued questioning, Fourte admitted to using AI but attempted to deflect some blame onto additional staff brought onto the case. He later claimed, "I never said I didn't use AI. I said that I didn't use unvetted AI" [2]. Judge Cohen dismissed this argument, stating, "If you are including citations that don't exist, there's only one explanation for that. It's that AI gave you cites and you didn't check them. That's the definition of unvetted AI" [2].

This case is not an isolated incident. Numerous lawyers have faced similar issues with AI-generated content in legal documents. The problem extends beyond public chatbots like ChatGPT to specialized AI tools designed for legal work, highlighting the persistent unreliability of these technologies [2].

Judge Cohen described the case as "yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession" [3]. The incident serves as a stark reminder of the importance of thorough vetting and responsible use of AI tools in legal practice.