Curated by THEOUTPOST
On Wed, 19 Feb, 8:05 AM UTC
6 Sources
[1]
Large Law Firm Sends Panicked Email as It Realizes Its Attorneys Have Been Using AI to Prepare Court Documents
The law firm Morgan & Morgan has rushed out a stern email to its attorneys after two of them were caught citing fake court cases invented by an AI model, Reuters reports. Sent earlier this month to all of its more than 1,000 lawyers, the email warns at length about the tech's proclivity for hallucinating. But the pros of the tech apparently still outweigh the cons; rather than banning AI usage, as plenty of organizations have done, Morgan & Morgan's leadership takes the middle road and gives the usual spiel about double-checking your work to ensure it isn't totally made-up nonsense.

"As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified," the email reads. "The integrity of your legal work and reputation depend on it."

Last week, a federal judge in Wyoming admonished two Morgan & Morgan lawyers for citing at least nine instances of fake case law in court filings submitted in January. Threatened with sanctions, the embarrassed lawyers blamed an "internal AI tool" for the mishap and pleaded with the judge for mercy.

"When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI in legal work, told Reuters.

The judge hasn't decided whether he'll punish the lawyers yet, per Reuters. Nonetheless, it's an enormous embarrassment for the well-known firm, especially given the opponent: the lawsuit in question is against the world's largest company, Walmart, alleging that a hoverboard the retailer sold was responsible for a fire that burned down the plaintiff's home. Now the corporate lawyers are probably cackling to themselves in a backroom somewhere, their opponents having shot themselves in the foot so spectacularly.

Anyone familiar with the shortcomings inherent to large language models could've seen something like this coming from a mile away. And according to Reuters, the tech's dubious usage in legal settings has already led to lawyers being questioned or disciplined in at least seven cases in the past two years.

It's not just the hallucinations that are so pernicious -- it's how authoritatively the AI models lie to you. That, and the fact that anything promising to automate a task tends to induce the person using it to let their guard down, a problem that has become apparent in self-driving cars, for example, or in news agencies that have experimented with using AI to summarize stories or assist with reporting. Organizations can tell their employees to double-check their work all they want, but screw-ups like these will keep happening.

To address the issue, Morgan & Morgan is requiring attorneys to acknowledge that they're aware of the risks associated with AI usage by clicking a little box added to its AI tool, The Register reported. We're sure that'll do the trick.
[2]
AI making up cases can get lawyers fired, scandalized law firm warns
Morgan & Morgan -- which bills itself as "America's largest injury law firm" that fights "for the people" -- learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm.

In a letter shared in a court filing, Morgan & Morgan's chief transformation officer, Yath Ithayakumar, warned the firm's more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including "termination." "This is a serious issue," Ithayakumar wrote. "The integrity of your legal work and reputation depend on it."

Morgan & Morgan's AI troubles were sparked by a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family's house fire. Despite being an experienced litigator, Rudwin Ayala, the firm's lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT. These "cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence," Walmart's lawyers said, urging the court to consider sanctions.

So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing "great embarrassment" over Ayala's fake citations, which wasted the court's time, Morgan struck a deal with Walmart's attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a "cautionary tale" for both his firm and "all firms."

Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including two fined $5,000 in an early case in June 2023 for citing chatbot "gibberish" in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump's former lawyer, avoided sanctions after accidentally giving his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.

In a court filing, Morgan explained that Ayala was solely responsible for the AI citations in the Walmart case. No one else involved "had any knowledge or even notice" that the errant court filing "contained any AI-generated content, let alone hallucinated content," Morgan said, insisting that had he known, he would have required Ayala to independently verify all citations.

"The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought," Morgan said, "deeply" apologizing to the court while acknowledging that AI can be "dangerous when used carelessly."

Further, Morgan said, it's clear that his firm must work harder to train attorneys on the AI tools it has been using since November 2024, which were intended to support -- not replace -- lawyers as they researched cases. Despite the firm supposedly warning lawyers that AI can hallucinate or fabricate information, Ayala shockingly claimed that he "mistakenly" believed the firm's "internal AI support" was "fully capable" of not just researching but also drafting briefs.
"This deeply regrettable filing serves as a hard lesson for me and our firm as we enter a world in which artificial intelligence becomes more intertwined with everyday practice," Morgan told the court. "While artificial intelligence is a powerful tool, it is a tool which must be used carefully. There are no shortcuts in law." Andrew Perlman, dean of Suffolk University's law school, advocates for responsible AI use in court and told Reuters that lawyers citing ChatGPT or other AI tools without verifying outputs is "incompetence, just pure and simple." Morgan & Morgan declined Ars' request to comment. Law firm makes changes to prevent AI citations Morgan & Morgan wants to make sure that no one else at the firm makes the same mistakes that Ayala did. In the letter sent to all attorneys, Ithayakumar reiterated that AI cannot be solely used to dependably research cases or draft briefs, as "AI can generate plausible responses that may be entirely fabricated information." "As all lawyers know (or should know), it has been documented that AI sometimes invents case law, complete with fabricated citations, holdings, and even direct quotes," his letter said. "As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified." While Harry Surden, a law professor who studies AI legal issues, told Reuters that "lawyers have always made mistakes," he also suggested that an increasing reliance on AI tools in the legal field requires lawyers to increase AI literacy to fully understand "the strengths and weaknesses of the tools." (A July 2024 Reuters survey found that 63 percent of lawyers have used AI and 12 percent use it regularly, after experts signaled an AI-fueled paradigm shift in the legal field in 2023.) At Morgan & Morgan, it has become clear in 2025 that better AI training is needed across its nationwide firm. Morgan told the court that the firm's technology team and risk management members have met to "discuss and implement further policies to prevent another occurrence in the future." Additionally, a checkbox acknowledging AI's potential for hallucinations was added, and it must be clicked before any attorney at the firm can access the internal AI platform. "Further, safeguards and training are being discussed to protect against any errant uses of artificial intelligence," Morgan told the court. Whether these efforts will help Morgan & Morgan avoid sanctions is unclear, but Ithayakumar suggested that on par with sanctions might be the reputational loss to the firm's or any individual lawyer's credibility. "Blind reliance on AI is equivalent to citing an unverified case," Ithayakumar told lawyers, saying that it is their "responsibility and ethical obligation" to verify AI outputs. "Failure to comply with AI verification requirements may result in court sanctions, professional discipline, discipline by the firm (up to and including termination), and reputational harm. Every lawyer must stay informed of the specific AI-related rules and orders in the jurisdictions where they practice and strictly adhere to these obligations."
[3]
AI 'Hallucinations' in Court Papers Spell Trouble for Lawyers
(Reuters) - U.S. personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologized for what he called an inadvertent mistake.

AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.

The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.

A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.

Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. Sixty-three percent of lawyers surveyed by Reuters' parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.

Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said. AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.

Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to "even an unintentional misstatement" produced through AI.

The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.

'LACK OF AI LITERACY'

In one of the earliest court rebukes over attorneys' use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.

A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen's criminal tax and campaign finance case. Cohen, who used Google's AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode "embarrassing."

In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field.

A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally citing fake, AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris.

Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools." He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem. "Lawyers have always made mistakes in their filings before AI," he said. "This is not new."

(Reporting by Sara Merken in New York; Editing by David Bario and Aurora Ellis)
[4]
AI 'hallucinations' in court papers spell trouble for lawyers
(Reuters) - U.S. personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologized for what he called an inadvertent mistake.

AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.

The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.

A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.

Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. Sixty-three percent of lawyers surveyed by Reuters' parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.

Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said. AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.

Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to "even an unintentional misstatement" produced through AI.

The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.

'LACK OF AI LITERACY'

In one of the earliest court rebukes over attorneys' use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.

A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen's criminal tax and campaign finance case. Cohen, who used Google's AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode "embarrassing."

In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field.

A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally citing fake, AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris.

Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools." He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem. "Lawyers have always made mistakes in their filings before AI," he said. "This is not new."

(Reporting by Sara Merken in New York; Editing by David Bario and Aurora Ellis)
[5]
AI 'hallucinations' in court papers spell trouble for lawyers
US personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologised for what he called an inadvertent mistake.

AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.

The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.

A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.

Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. Sixty-three percent of lawyers surveyed by Reuters' parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.

Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said. AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.

Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to "even an unintentional misstatement" produced through AI.

The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.

'Lack of AI literacy'

In one of the earliest court rebukes over attorneys' use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.

A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen's criminal tax and campaign finance case. Cohen, who used Google's AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode "embarrassing."

In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field.

A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally citing fake, AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris.

Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools." He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem. "Lawyers have always made mistakes in their filings before AI," he said. "This is not new."
[6]
AI hallucinations could get lawyers fired, law firm says
U.S. personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologized for what he called an inadvertent mistake.

AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the past two years, and created a new high-tech headache for litigants and judges, Reuters found.

The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.
Morgan & Morgan, a major US law firm, warns its attorneys about the risks of using AI in legal work after two lawyers cited non-existent cases generated by AI in a lawsuit against Walmart.
In a recent incident that has sent shockwaves through the legal community, Morgan & Morgan, one of America's largest personal injury law firms, found itself embroiled in controversy when two of its attorneys cited non-existent court cases generated by artificial intelligence (AI) in a lawsuit against Walmart [1][2]. This incident has highlighted the growing challenges and risks associated with the use of AI in legal practice.
The case in question involved a lawsuit against Walmart over an allegedly defective hoverboard that caused a house fire. Rudwin Ayala, the lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT [2]. This led to a federal judge in Wyoming threatening sanctions against the lawyers involved.
In response to this embarrassing situation, Morgan & Morgan took swift action [1][2]:

Removed Ayala from the case, replacing him with his direct supervisor, T. Michael Morgan, Esq.
Agreed to pay Walmart's fees and expenses for responding to the errant filing.
Sent a firm-wide email warning its more than 1,000 attorneys that citing unverified AI-generated cases can lead to discipline, up to and including termination.
Added a mandatory checkbox acknowledging AI's potential for hallucinations before attorneys can access the firm's internal AI platform.
This incident is not isolated. Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years [2][3]. The consequences have been severe in some instances:

In June 2023, a Manhattan federal judge fined two New York lawyers $5,000 for citing AI-invented cases in a personal injury suit against an airline [3].
In November, a Texas federal judge ordered a lawyer who cited nonexistent cases to pay a $2,000 penalty and attend a course on generative AI in the legal field [2][3].
Last month, a Minnesota federal judge said a misinformation expert had destroyed his credibility with the court after unintentionally citing fake, AI-generated citations [3].
The legal profession is grappling with the double-edged sword of AI technology:
Efficiency vs. Accuracy: 63% of lawyers surveyed by Thomson Reuters in 2024 said they have used AI for work, with 12% using it regularly [3][4]. AI can significantly reduce the time needed for legal research and drafting.
AI Hallucinations: Generative AI is known to confidently produce false information, a phenomenon known as "hallucinations" [3][4]. This poses a significant risk in legal practice, where accuracy is paramount.
Ethical Considerations: The American Bar Association has emphasized that lawyers' ethical obligations extend to "even an unintentional misstatement" produced through AI [3][4]. One way to honor that obligation is to mechanically check every citation against a trusted case-law database before filing, as sketched below.
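To make that verification step concrete, here is a minimal sketch of an automated pre-filing check. Everything in it is illustrative: the `lookup_citation` helper stands in for a query against a verified case-law source (a commercial citator or a public index), not any real library, and a flagged citation would still need review by a human lawyer.

```python
import re
from dataclasses import dataclass

# Hypothetical verified case-law index; in practice this would be a
# commercial citator or a public database, never the AI model itself.
VERIFIED_INDEX = {
    "575 U.S. 320": "Example v. Example, 575 U.S. 320 (2015)",
}

@dataclass
class CitationCheck:
    citation: str
    verified: bool
    source: str | None  # where the citation was confirmed, if anywhere

def lookup_citation(citation: str) -> str | None:
    """Stand-in for a real database query; returns the matched case or None."""
    return VERIFIED_INDEX.get(citation)

def check_brief(text: str) -> list[CitationCheck]:
    """Extract reporter-style citations (e.g. '575 U.S. 320') and verify each
    against the trusted index, not against the model that drafted the brief."""
    pattern = r"\b\d{1,4}\s+[A-Z][\w.]*(?:\s[\w.]+)?\s+\d{1,5}\b"
    results = []
    for cite in sorted(set(re.findall(pattern, text))):
        match = lookup_citation(cite)
        results.append(CitationCheck(cite, match is not None, match))
    return results

if __name__ == "__main__":
    draft = "As held in 575 U.S. 320 and reaffirmed in 999 F.4th 123, ..."
    for check in check_brief(draft):
        status = "OK" if check.verified else "NOT FOUND: verify by hand"
        print(f"{check.citation}: {status}")
```

Run on the sample draft, the check confirms the one indexed citation and flags the other as unfindable, which is exactly the failure mode in the Walmart filing: the cited cases existed nowhere outside the chatbot.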
The legal community is responding to these challenges in various ways:
Enhanced Training: Law firms like Morgan & Morgan are implementing further policies and training to prevent AI-related errors [2].
Technological Safeguards: Some firms are adding checkboxes acknowledging AI's potential for hallucinations before allowing access to internal AI platforms [2]; a minimal sketch of such a gate appears after this list.
Calls for AI Literacy: Experts like Harry Surden, a law professor at the University of Colorado, recommend that lawyers spend time learning "the strengths and weaknesses of the tools" [3][4][5].
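As a rough illustration of the acknowledgment gate described above, the sketch below blocks an internal AI assistant until the user has recorded a risk acknowledgment. The `InternalAITool` class, its in-memory storage, and the notice wording are all hypothetical; a real deployment would tie the gate to the firm's identity system and keep an audit trail.

```python
# Hypothetical acknowledgment gate for a firm's internal AI tool.
ACK_TEXT = (
    "I understand that AI output may include fabricated cases, citations, "
    "holdings, and quotes, and that I must independently verify every citation."
)

class AcknowledgmentRequired(Exception):
    pass

class InternalAITool:
    def __init__(self) -> None:
        self._acknowledged: set[str] = set()  # user IDs who clicked the box

    def acknowledge(self, user_id: str) -> None:
        """Record that the attorney accepted the hallucination-risk notice."""
        self._acknowledged.add(user_id)

    def ask(self, user_id: str, prompt: str) -> str:
        """Refuse to answer until the user has acknowledged the risk notice."""
        if user_id not in self._acknowledged:
            raise AcknowledgmentRequired(ACK_TEXT)
        # Placeholder for the actual model call.
        return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    tool = InternalAITool()
    try:
        tool.ask("attorney42", "Find cases on hoverboard product liability")
    except AcknowledgmentRequired as notice:
        print("Blocked:", notice)
    tool.acknowledge("attorney42")
    print(tool.ask("attorney42", "Find cases on hoverboard product liability"))
```

Of course, as the sources note, a click-through box only documents awareness; it cannot substitute for actually verifying each citation.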
As the legal profession navigates this new terrain, it's clear that while AI offers significant benefits, it also presents new risks that must be carefully managed. The incident at Morgan & Morgan serves as a cautionary tale for the entire legal industry, underscoring the need for vigilance, proper training, and ethical considerations in the use of AI in legal practice.