Curated by THEOUTPOST
On Sat, 26 Apr, 8:02 AM UTC
4 Sources
[1]
Mike Lindell's lawyers used AI to write brief -- judge finds nearly 30 mistakes
A lawyer representing MyPillow and its CEO Mike Lindell in a defamation case admitted using artificial intelligence in a brief that has nearly 30 defective citations, including misquotes and citations to fictional cases, a federal judge said. "[T]he Court identified nearly thirty defective citations in the Opposition. These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist," US District Judge Nina Wang wrote in an order to show cause Wednesday. Wang ordered attorneys Christopher Kachouroff and Jennifer DeMaster to show cause as to why the court should not sanction the defendants, law firm, and individual attorneys. Kachouroff and DeMaster also have to explain why they should not be referred to disciplinary proceedings for violations of the rules of professional conduct. Kachouroff and DeMaster, who are defending Lindell against a lawsuit filed by former Dominion Voting Systems employee Eric Coomer, both signed the February 25 brief with the defective citations. Kachouroff, representing defendants as lead counsel, admitted using AI to write the brief at an April 21 hearing, the judge wrote. The case is in the US District Court for the District of Colorado. "Time and time again, when Mr. Kachouroff was asked for an explanation of why citations to legal authorities were inaccurate, he declined to offer any explanation, or suggested that it was a 'draft pleading,'" Wang wrote. "Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. 
Kachouroff admit that he did, in fact, use generative artificial intelligence."
"Your honor, I may have made a mistake"
Kachouroff admitted after further questioning that he had failed to check citations, but "represented that he personally outlined and wrote a draft of a brief before utilizing generative artificial intelligence," Wang wrote. "Given the pervasiveness of the errors in the legal authority provided to it, this Court treats this representation with skepticism." The judge's order quotes some of Kachouroff's responses at the hearing. When asked about one misquote, he said, "Your honor, I may have made a mistake and I may have paraphrased and put quotes by mistake. I wasn't intending to mislead the court. I don't think the quote is far off from what you read to me." The judge's order continued: When asked how a case from the United States District Court for the Eastern District of Kentucky became attributable to the United States District Court for the District of Colorado, Mr. Kachouroff indicated that he "had given the cite checking to another person," later identified as Ms. DeMaster. When asked whether he would be surprised to find out that the citation Perkins v. Fed. Fruit & Produce Co., 945 F.3d 1242, 1251 (10th Cir. 2019) appearing on page 6 of Defendants' Opposition did not exist as an actual case, Mr. Kachouroff indicated that he would be surprised. The lawyers must explain themselves more fully by May 5. "Counsel will specifically address, under the oath subject to the penalty of perjury, the circumstances surrounding the preparation of the Opposition to Plaintiff's Motion in Limine, including but not limited to whether Defendants were advised and approved of their counsel's use of generative artificial intelligence," the order said. We contacted Kachouroff and DeMaster and will update this article if they respond. Kachouroff is with the law firm McSweeney Cynkar & Kachouroff in Virginia. DeMaster is an attorney in Wisconsin.
Lawsuit against Lindell, MyPillow
Coomer's lawsuit was filed against Lindell, the Lindell media company called FrankSpeech, and MyPillow. Lindell and his companies "have been among the most prolific vectors of baseless conspiracy theories claiming election fraud in the 2020 election," and Lindell falsely claimed that Coomer committed treason, the lawsuit said. Coomer is the former director of product strategy and security for Dominion. "Defendants have published these numerous false statements, defamatory interviews, and other dishonest content maligning Dr. Coomer on the website frankspeech.com often alongside a sales pitch for products from MyPillow," the lawsuit said. "In addition, Defendants further made claims against Dr. Coomer a centerpiece of a failed 'Cyber Symposium' that they organized and broadcast around the world." The February 25 brief that got Lindell's lawyers in trouble was an opposition to Coomer's motion asking the court to exclude certain evidence. Coomer's brief said "that Defendants will attempt to mislead and distract the jury with a smear campaign against Dr. Coomer based on completely or largely irrelevant attacks on his character instead of presenting proof that Dr. Coomer was involved in a criminal conspiracy to rig the 2020 presidential election. The Court should exercise its discretion to exclude the evidence set forth to avoid unfair character assassination and to ensure a fair trial on the merits." Coomer asked the court to exclude evidence related to a September 2021 motor vehicle accident, his sex life, alleged drug and alcohol use, religious beliefs, and political views. In their brief that apparently relies on incorrect and fictional citations, Lindell's lawyers argued that much of the evidence Coomer wants to exclude is relevant to his credibility, character, and reputation.
[2]
MyPillow CEO's lawyers file AI-generated legal brief riddled with errors
Lawyers for MyPillow CEO and presidential election conspiracy theorist Mike Lindell are facing potential disciplinary action after using generative AI to write a legal brief, resulting in a document rife with fundamental errors. The lawyers did admit to using AI, but claim that this particular mistake was primarily human. On Wednesday, an order by Colorado district court judge Nina Wang noted that the court had identified almost 30 defective citations in a brief filed by Lindell's lawyers on Feb. 25. Signed by attorneys Christopher Kachouroff and Jennifer DeMaster of law firm McSweeney Cynkar and Kachouroff, the filing was part of former Dominion Voting Systems employee Eric Coomer's defamation lawsuit against Lindell. "These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist," read Wang's court order. The court further noted that while the lawyers had been given the opportunity to explain this laundry list of errors, they were unable to adequately do so. Kachouroff confirmed that he'd used generative AI to prepare the brief when directly asked about it by the court, and upon further questioning admitted that he had not checked the resulting citations. The court therefore ordered the lawyers to explain why Kachouroff and DeMaster should not be referred to disciplinary proceedings for violating professional conduct rules, and why they should not be sanctioned alongside Lindell and their law firm.
Lawyers may face disciplinary action over use of AI
Responding to the order on Friday, the lawyers stated that they had been "unaware of any errors or issues" with their filing, so were "caught off-guard" and unprepared to explain themselves when initially questioned by the court. Having since had time to assess the situation, they claim that the document in question was actually an earlier draft which DeMaster had filed by mistake. Submitting alternate versions of the brief in support of this argument, the lawyers also presented an email exchange between Kachouroff and DeMaster in which they discussed edits. "At that time, counsel had no reason to believe that an AI-generated or unverified draft had been submitted," read their response. "After the hearing and having a subsequent opportunity to investigate [the brief], it was immediately clear that the document filed was not the correct version. It was a prior draft. "It was inadvertent, an erroneous filing that was not done intentionally, and was filed mistakenly through human error." The lawyers further contend in their filing that it is perfectly permissible to use AI to prepare a legal filing, arguing that "[t]here is nothing wrong with using AI when used properly." Kachouroff stated that he "routinely" analyzes legal arguments using AI tools such as Microsoft's Co-Pilot, Google (presumably Gemini), and X (presumably Grok), though noted that he is the only person at his law firm to do so. He also stated that he had never heard the term "generative artificial intelligence" before. The lawyers asked to be allowed to refile their corrected brief, and for the potential disciplinary action to be dismissed. This incident is just the latest in a growing list of legal professionals inappropriately using AI in their work, some of them not even understanding the technology. In June 2023, two attorneys were fined for citing non-existent legal cases after they'd used ChatGPT to do their research.
Later that year, a lawyer for disbarred former Trump attorney Michael Cohen was caught citing fake cases that his client had generated with Google Bard. Then in February, yet another attorney appeared to cite cases fabricated by ChatGPT, prompting the attorney's law firm, Morgan & Morgan, to warn employees against blindly trusting AI. Yet despite such cautionary tales, it seems that many lawyers still haven't gotten the message to steer clear.
[3]
The MyPillow Guy's Lawyers Used AI in Court and You'll Never Guess How That Turned Out
We all remember Mike Lindell, the disgraced MyPillow founder who lost everything fighting to prove -- against insurmountable evidence to the contrary -- that the 2020 US election had somehow been stolen from Donald Trump. Well, he's in the news again -- this time for using AI to file his legal briefs. As The New Republic reported, a federal judge has accused Lindell of filing a legal document with "nearly 30 defective citations," which one of his attorneys, Christopher Kachouroff, wrote "utilizing generative artificial intelligence." Go figure: the brief was full of misquotes and miscited cases, sometimes referencing case law that simply didn't exist -- AI had hallucinated them, as the tech is prone to do in order to "complete" its prompt. "If you type a legal question into the Google search function, then generative AI is all too ready to answer," explained legal columnist Virginia Hammerle. She notes that in a similar case in New York, a federal judge sanctioned a team of lawyers and their firm when they turned in a ChatGPT-generated brief without double-checking it for mistakes. "Not until this Court asked Mr. Kachouroff directly whether the [document] was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence," the federal judge admonished. "Given the pervasiveness of the errors in the legal authority provided to it, this Court treats this representation with skepticism." The federal judge has now given Lindell's lawyers ten days to argue why they shouldn't face disciplinary proceedings. They're also required to address whether or not Lindell had any knowledge that his lawyers were using AI to write their documents -- yet another headache for the embattled entrepreneur. It's far from the first gaffe Kachouroff has been involved in. During a trial which took place over Zoom last year, the attorney was caught relaxing without any pants on before beginning his cross-examination. 
It's the latest embarrassment in a years-long legal saga for Lindell, who faces a combined $70 million in debt owing to penalties from civil lawsuits and an FBI investigation after personally championing Donald Trump's 2020 election-fraud hoax. After failing to pay more than $50,000 related to a defamation suit with voting systems company Smartmatic earlier in April, Lindell sobbed that he was broke. "I'm in ruins," the disgraced pillow tycoon said. "I don't have $5,000 or 5 cents." That's a shame, because if he did, he could be out shopping for new lawyers -- like maybe the kind that keep their pants on during trial.
[4]
MyPillow CEO's Lawyer Embarrassed In Court After Judge Grills Him Over Using AI In Legal Filing
The judge overseeing Mike Lindell's case said there were nearly 30 defects in the court filings, which at some points cited cases that do not exist. A judge berated lawyers representing conspiracy theorist and MyPillow CEO Mike Lindell in a court order, accusing them of filing a court document that used artificial intelligence and contained a number of "fundamental errors." Lindell is being sued by former Dominion Voting Systems employee Dr. Eric Coomer, who accused the CEO, a longtime ally of President Donald Trump, of making false remarks about him on a conservative Colorado podcast. Coomer's lawsuit has been going on since 2022, and in February, Lindell's lawyers filed an opposition brief. The document contained "nearly thirty defective citations," Colorado District Judge Nina Y. Wang said in a court order filed Wednesday. She noted a number of "fundamental errors" in the brief, including citing cases that do not exist. Lindell's attorney Christopher I. Kachouroff admitted at a Monday hearing that he used generative artificial intelligence in the error-filled document. Wang ordered Lindell's attorneys to "show cause" as to why the court should not sanction Lindell and his companies. She also ordered his attorneys to justify why the court should not refer them to disciplinary proceedings. Kachouroff responded to Wang's order on Friday in a motion obtained by HuffPost, saying "there is nothing wrong with using AI when used properly." He also claimed that the brief his team filed was not the final copy, but a previous draft submitted "mistakenly." The lawyer also described being questioned in court about the document. "The Court concluded by grilling me on whether the document was generated by AI," he wrote. "I freely admitted that I used AI because it is a very helpful tool when used properly." The defense team's response stated that Kachouroff was out of the country with poor internet service at the time he reviewed the brief with his co-counsel Jennifer T. 
DeMaster over the phone. DeMaster had used a legal research AI tool to analyze the brief to find and add other cases to help bolster their argument. Kachouroff said he "routinely" uses AI tools to analyze his arguments, as well as his opposing counsel's arguments, but does not rely on them. "Regardless of whether I use AI in a particular pleading, I always conduct verification of citations before filing," Kachouroff said. He said in his response that he was "taken completely off guard" when being questioned in court about the brief because he was unaware the court had a different copy. Kachouroff argued he should have been given advance notice to properly explain the filing.
Lawyers representing Mike Lindell, CEO of MyPillow, are under fire for submitting an AI-generated legal brief containing multiple errors, including citations to non-existent cases. The incident raises concerns about AI use in legal proceedings.
In a startling development in the ongoing defamation case against MyPillow CEO Mike Lindell, his lawyers have come under scrutiny for submitting a legal brief generated by artificial intelligence (AI) that contained numerous errors. U.S. District Judge Nina Wang identified nearly 30 defective citations in the document, including misquotes, misrepresentations of legal principles, and even citations to non-existent cases [1].
Christopher Kachouroff, the lead counsel representing Lindell, admitted to using generative AI to prepare the brief when directly questioned by Judge Wang during a hearing on April 21, 2025. The judge expressed skepticism about Kachouroff's claim that he had personally outlined and written a draft before utilizing AI, given the pervasiveness of errors in the legal authorities provided [1].
The defects in the brief were extensive and varied: misquotes of cited cases, misrepresentations of legal principles that do not appear in the cited decisions, misstatements about whether case law originated from a binding authority such as the Tenth Circuit, misattributions of case law to the District of Colorado, and citations to cases that do not exist. These errors raised serious concerns about the integrity of the legal process and the potential misuse of AI in legal proceedings.
In response to the judge's order to show cause, Kachouroff and his co-counsel Jennifer DeMaster claimed that the submitted document was an earlier draft filed by mistake. They presented alternate versions of the brief and email exchanges to support their argument [2].
Kachouroff stated that he "routinely" uses AI tools such as Microsoft's Co-Pilot, Google, and X to analyze legal arguments, but maintained that he is the only person at his law firm to do so. He also claimed to be unfamiliar with the term "generative artificial intelligence" [2][4].
This incident is not isolated, following a growing trend of legal professionals inappropriately using AI in their work: in June 2023, two attorneys were fined for citing non-existent cases after using ChatGPT for research; later that year, a lawyer for Michael Cohen cited fake cases generated with Google Bard; and in February, another attorney appeared to cite cases fabricated by ChatGPT. These occurrences have prompted some law firms, such as Morgan & Morgan, to warn employees against blindly trusting AI [2].
The legal brief in question was filed as part of a defamation lawsuit brought by Eric Coomer, a former Dominion Voting Systems employee, against Lindell, his company MyPillow, and his media platform FrankSpeech. Coomer alleges that Lindell and his companies have been prolific vectors of baseless conspiracy theories about the 2020 election, falsely claiming that Coomer committed treason [1].
Judge Wang has ordered Kachouroff and DeMaster to explain why they should not face disciplinary proceedings for violating professional conduct rules. The court is also considering sanctions against Lindell, his companies, and the law firm [1][3].
This case highlights the growing challenges and ethical considerations surrounding the use of AI in legal practice, emphasizing the need for clear guidelines and rigorous verification processes when employing such technologies in the courtroom.