New York Lawyer's AI Blunder: Using AI to Defend AI Use in Court


A New York attorney faces sanctions after submitting AI-generated fake citations in court documents, then compounding the error by using AI again to defend his initial AI usage. The case highlights growing concerns about AI misuse in the legal profession.


AI Misuse in New York Supreme Court Case

In a recent New York Supreme Court commercial case, defense attorney Michael Fourte found himself at the center of a controversy involving the misuse of artificial intelligence (AI) in legal proceedings. The incident has raised serious concerns about the ethical implications and potential pitfalls of using AI tools in the legal profession [1].

Initial AI-Generated Citations

The case, which originally involved a dispute over a defaulted loan between family members, took an unexpected turn when the plaintiff's legal team discovered inaccurate citations and quotations in Fourte's summary judgment brief. These errors appeared to be "hallucinated" by an AI tool, prompting the plaintiffs to file a motion for sanctions [1].

Compounding the Error

In a surprising twist, Fourte's response to the sanctions motion contained even more AI-generated content. New York Supreme Court Judge Joel Cohen noted that the opposition brief included "multiple new AI-hallucinated citations and quotations" [2]. This led Judge Cohen to observe, "In other words, counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI" [1].

Escalating Mistakes

The situation worsened as Fourte attempted to explain his actions. Initially, he neither admitted nor denied the use of AI, describing the erroneous citations as "innocuous paraphrases of accurate legal principles" [2]. When pressed further, Fourte implied that AI wasn't used at all, claiming there was "no affidavit, forensic analysis, or admission" confirming its use [2].

Admission and Consequences

Eventually, under continued questioning, Fourte admitted to using AI but attempted to deflect some blame onto additional staff brought onto the case. He later claimed, "I never said I didn't use AI. I said that I didn't use unvetted AI" [2]. Judge Cohen dismissed this argument, stating, "If you are including citations that don't exist, there's only one explanation for that. It's that AI gave you cites and you didn't check them. That's the definition of unvetted AI" [2].

Broader Implications for the Legal Profession

This case is not an isolated incident. Numerous lawyers have faced similar issues with AI-generated content in legal documents. The problem extends beyond public chatbots like ChatGPT to specialized AI tools designed for legal work, highlighting the persistent unreliability of these technologies [2].

Judge Cohen described the case as "yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession" [3]. The incident serves as a stark reminder of the importance of thorough vetting and responsible use of AI tools in legal practice.
