6 Sources
[1]
AI in the courtroom: the dangers of using ChatGPT in legal practice in South Africa
University of the Free State provides funding as a partner of The Conversation AFRICA.

A South African court case made headlines for all the wrong reasons in January 2025. The legal team in Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others had relied on case law that simply didn't exist. It had been generated by ChatGPT, a generative artificial intelligence (AI) chatbot developed by OpenAI. Only two of the nine case authorities the legal team submitted to the High Court were genuine. The rest were AI-fabricated "hallucinations". The court called this conduct "irresponsible and unprofessional" and referred the matter to the Legal Practice Council, the statutory body that regulates legal practitioners in South Africa, for investigation.

It was not the first time South African courts had encountered such an incident. Parker v Forsyth in 2023 also dealt with fake case law produced by ChatGPT. But the judge was more forgiving in that instance, finding no intent to mislead. The Mavundla ruling marks a turning point: courts are losing patience with legal practitioners who use AI irresponsibly.

We are legal academics who have been doing research on the growing use of AI, particularly generative AI, in legal research and education. While these technologies offer powerful tools for enhancing efficiency and productivity, they also present serious risks when used irresponsibly. Aspiring legal practitioners who misuse AI tools without proper guidance or ethical grounding risk severe professional consequences, even before their careers begin.

Law schools should equip students with the skills and judgment to use AI tools responsibly. But most institutions remain unprepared for the pace at which AI is being adopted. Very few universities have formal policies or training on AI. Students are left with no guide through this rapidly evolving terrain.
Our work calls for a proactive and structured approach to AI education in law schools.

When technology becomes a liability

The advocate in the Mavundla case admitted she had not verified the citations and relied instead on research done by a junior colleague. That colleague, a candidate attorney, claimed to have obtained the material from an online research tool. While she denied using ChatGPT, the pattern matched similar global incidents where lawyers unknowingly filed AI-generated judgments. In the 2024 American case of Park v Kim, the attorney cited non-existent case law in her reply brief, which she admitted was generated using ChatGPT. In the 2024 Canadian case of Zhang v Chen, the lawyer filed a notice of application containing two non-existent case authorities fabricated by ChatGPT.

The court in Mavundla was unequivocal: no matter how advanced technology becomes, lawyers remain responsible for ensuring that every source they present is accurate. Workload pressure or ignorance of AI's risks is no defence. The judge also criticised the supervising attorney for failing to check the documents before filing them. The episode underscored a broader ethical principle: senior lawyers must properly train and supervise junior colleagues.

The lesson here extends far beyond one law firm. Integrity, accuracy and critical thinking are not optional extras in the legal profession. They are core values that must be taught and practised from the beginning, during legal education.

The classroom is the first courtroom

The Mavundla case should serve as a warning to universities. If experienced legal practitioners can fall into AI traps regarding law, students still learning to research and reason can too. Generative AI tools like ChatGPT can be powerful allies - they can summarise cases, draft arguments and analyse complex texts in seconds. But they can also confidently fabricate information.
Because AI models don't always "know" when they are wrong, they produce text that looks authoritative but may be entirely false.

For students, the dangers are twofold. First, over-reliance on AI can stunt the development of critical research skills. Second, it can lead to serious academic or professional misconduct. A student who submits AI-fabricated content could face disciplinary action at university and reputational damage that follows them into their legal career.

In our paper we argue that, instead of banning AI tools outright, law schools should teach students to use them responsibly. This means developing "AI literacy": the ability to question, verify and contextualise AI-generated information. Students should learn to treat AI systems as assistants, not authorities.

In South African legal practice, authority traditionally refers to recognised sources such as legislation, judicial precedent and academic commentary, which lawyers cite to support their arguments. These sources are accessed through established legal databases and law reports, a process that, while time-consuming, ensures accuracy, accountability and adherence to the rule of law.

From law faculties to courtrooms

Legal educators can embed AI literacy into existing courses on research methodology, professional ethics and legal writing. Exercises could include verifying AI-generated summaries against real judgments or analysing the ethical implications of relying on machine-produced arguments.

Teaching responsible AI use is not simply about avoiding embarrassment in court. It is about protecting the integrity of the justice system itself. As seen in Mavundla, one candidate attorney's uncritical use of AI led to professional investigation, public scrutiny and reputational damage to the firm. The financial risks are also real.
Courts can order lawyers to pay costs out of their own pockets when serious professional misconduct occurs. In the digital era, where court judgments and media reports spread instantly online, a lawyer's reputation can collapse overnight if they are found to have relied on fake or unverified AI material. It would also be beneficial for courts to be trained in detecting fake cases generated by AI.

The way forward

Our study concludes that AI is here to stay, and so is its use in law. The challenge is not whether the legal profession should use AI, but how. Law schools have a critical opportunity, and an ethical duty, to prepare future practitioners for a world where technology and human judgment must work side by side. Speed and convenience can never replace accuracy and integrity. As AI becomes a routine part of legal research, tomorrow's lawyers must be trained not just to prompt - but to think.
[2]
Mistake-filled legal briefs show the limits of relying on AI tools at work
NEW YORK (AP) -- Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence and submitted with errors such as citations to cases that don't exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI tools at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they're also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained "hallucinations," which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said. "Even the more sophisticated player can have an issue with this," Charlotin said. "AI can be a boon. It's wonderful, but also there are these pitfalls."

Charlotin, a senior research fellow at HEC Paris, a business school located just outside France's capital city, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.

But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and founder Michael Lindell. The legal profession isn't the only one wrestling with AI's foibles.
The AI overviews that appear at the top of web search result pages frequently contain errors. And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the information they upload or put into prompts to ensure they're safeguarding the confidential information of employers and clients. Legal and workplace experts share their experiences with AI's mistakes and describe perils to avoid.

Don't trust AI to make big decisions for you. Some AI users treat the tool as an intern to whom you assign tasks and whose completed work you expect to check. "Think about AI as augmenting your workflow," said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don't think of it as a substitute that can do all of the work, she said.

When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team. "Some of the questions it proposed weren't the right context really for our organization, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions," she said.

Flynn also has found problems in the output of the AI tool, which still is in a pilot stage. She once asked it to compile information on work her organization had done in various states. But the AI tool was treating completed work and funding proposals as the same thing. "In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed," Flynn said. Luckily, she had the institutional knowledge to recognize the errors. "If you're new in an organization, ask coworkers if the results look accurate to them," Flynn suggested. While AI can help with brainstorming, relying on it to provide factual information is risky.
Take the time to check the accuracy of what AI generates, even if it's tempting to skip that step. "People are making an assumption because it sounds so plausible that it's right, and it's convenient," Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson, said. "Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that's a little inconvenient and time-consuming, but that's what you have to do. As much as you think the AI can substitute for that, it can't."

It can be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said. But many jurisdictions require the consent of participants prior to recording conversations. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips. Consult with colleagues in the legal or human resources departments before deploying a notetaker in high-risk situations such as investigations, performance reviews or legal strategy discussions, she suggested. "People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts," Kays said. "That is an issue that I would say companies should continue to watch as it is litigated."

If you're using free AI tools to draft a memo or marketing campaign, don't tell it identifying information or corporate secrets. Once you've uploaded that information, it's possible others using the same tool might find it. That's because when other people ask an AI tool questions, it will search available information, including details you revealed, as it builds its answer, Flynn said. "It doesn't discern whether something is public or private," she added.
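Daniels' point above, that a human must still go back and check every cite, amounts to a simple filtering discipline. The Python below is a minimal illustrative sketch of that discipline, not a real legal-research tool: the case names and the function are hypothetical, and genuine verification still means looking each citation up in an authoritative law report and reading the judgment itself.

```python
# Illustrative sketch only: the helper and case names are hypothetical.
# A filter like this can only surface cites that have not yet been
# human-verified; it cannot establish that any cite is genuine.

def flag_unverified_citations(draft_citations, verified_citations):
    """Return draft citations that do not appear in the human-verified list."""
    verified = {c.strip().lower() for c in verified_citations}
    return [c for c in draft_citations if c.strip().lower() not in verified]

if __name__ == "__main__":
    draft = [
        "Parker v Forsyth (2023)",  # assume a human has already confirmed this one
        "Smith v Jones (2019)",     # hypothetical, standing in for a hallucination
    ]
    confirmed = ["Parker v Forsyth (2023)"]
    # Everything returned here still needs a person to search for it;
    # everything filtered out still needs a person to read it.
    print(flag_unverified_citations(draft, confirmed))
```

The comparison is deliberately case-insensitive; a production workflow would instead match against a citation database's canonical form, which is exactly the step no script can replace.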
If your employer doesn't offer AI training, try experimenting with free tools such as ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you develop your understanding of how AI works and ways it can be useful. A course that teaches people how to construct the best AI prompts or hands-on courses that provide opportunities to practice are valuable, Flynn said. Despite potential problems with the tools, learning how they work can be beneficial at a time when they're ubiquitous. "The largest potential pitfall in learning to use AI is not learning to use it at all," Flynn said. "We're all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important."

___

Share your stories and questions about workplace wellness at [email protected]. Follow AP's Be Well coverage, focusing on wellness, fitness, diet and mental health at https://apnews.com/hub/be-well
[3]
AI in the courtroom: The dangers of using ChatGPT in legal practice in South Africa
A South African court case made headlines for all the wrong reasons in January 2025. The legal team in Mavundla vs. MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others had relied on case law that simply didn't exist. It had been generated by ChatGPT, a generative artificial intelligence (AI) chatbot developed by OpenAI. Only two of the nine case authorities the legal team submitted to the High Court were genuine. The rest were AI-fabricated "hallucinations." The court called this conduct "irresponsible and unprofessional" and referred the matter to the Legal Practice Council, the statutory body that regulates legal practitioners in South Africa, for investigation.

It was not the first time South African courts had encountered such an incident. Parker vs. Forsyth in 2023 also dealt with fake case law produced by ChatGPT. But the judge was more forgiving in that instance, finding no intent to mislead. The Mavundla ruling marks a turning point: courts are losing patience with legal practitioners who use AI irresponsibly.

We are legal academics who have been doing research on the growing use of AI, particularly generative AI, in legal research and education. While these technologies offer powerful tools for enhancing efficiency and productivity, they also present serious risks when used irresponsibly. Aspiring legal practitioners who misuse AI tools without proper guidance or ethical grounding risk severe professional consequences, even before their careers begin.

Law schools should equip students with the skills and judgment to use AI tools responsibly. But most institutions remain unprepared for the pace at which AI is being adopted. Very few universities have formal policies or training on AI. Students are left with no guide through this rapidly evolving terrain. Our work calls for a proactive and structured approach to AI education in law schools.
When technology becomes a liability

The advocate in the Mavundla case admitted she had not verified the citations and relied instead on research done by a junior colleague. That colleague, a candidate attorney, claimed to have obtained the material from an online research tool. While she denied using ChatGPT, the pattern matched similar global incidents where lawyers unknowingly filed AI-generated judgments. In the 2024 American case of Park vs. Kim, the attorney cited non-existent case law in her reply brief, which she admitted was generated using ChatGPT. In the 2024 Canadian case of Zhang vs. Chen, the lawyer filed a notice of application containing two non-existent case authorities fabricated by ChatGPT.

The court in Mavundla was unequivocal: No matter how advanced technology becomes, lawyers remain responsible for ensuring that every source they present is accurate. Workload pressure or ignorance of AI's risks is no defense. The judge also criticized the supervising attorney for failing to check the documents before filing them. The episode underscored a broader ethical principle: Senior lawyers must properly train and supervise junior colleagues.

The lesson here extends far beyond one law firm. Integrity, accuracy and critical thinking are not optional extras in the legal profession. They are core values that must be taught and practiced from the beginning, during legal education.

The classroom is the first courtroom

The Mavundla case should serve as a warning to universities. If experienced legal practitioners can fall into AI traps regarding law, students still learning to research and reason can too. Generative AI tools like ChatGPT can be powerful allies -- they can summarize cases, draft arguments and analyze complex texts in seconds. But they can also confidently fabricate information. Because AI models don't always "know" when they are wrong, they produce text that looks authoritative but may be entirely false. For students, the dangers are twofold.
First, over-reliance on AI can stunt the development of critical research skills. Second, it can lead to serious academic or professional misconduct. A student who submits AI-fabricated content could face disciplinary action at university and reputational damage that follows them into their legal career.

In our paper we argue that instead of banning AI tools outright, law schools should teach students to use them responsibly. This means developing "AI literacy": the ability to question, verify and contextualize AI-generated information. Students should learn to treat AI systems as assistants, not authorities.

In South African legal practice, authority traditionally refers to recognized sources such as legislation, judicial precedent and academic commentary, which lawyers cite to support their arguments. These sources are accessed through established legal databases and law reports, a process that, while time-consuming, ensures accuracy, accountability and adherence to the rule of law.

From law faculties to courtrooms

Legal educators can embed AI literacy into existing courses on research methodology, professional ethics and legal writing. Exercises could include verifying AI-generated summaries against real judgments or analyzing the ethical implications of relying on machine-produced arguments.

Teaching responsible AI use is not simply about avoiding embarrassment in court. It is about protecting the integrity of the justice system itself. As seen in Mavundla, one candidate attorney's uncritical use of AI led to professional investigation, public scrutiny and reputational damage to the firm. The financial risks are also real. Courts can order lawyers to pay costs out of their pockets when serious professional misconduct occurs. In the digital era, where court judgments and media reports spread instantly online, a lawyer's reputation can collapse overnight if they are found to have relied on fake or unverified AI material.
It would also be beneficial for courts to be trained in detecting fake cases generated by AI.

The way forward

Our study concludes that AI is here to stay, and so is its use in law. The challenge is not whether the legal profession should use AI, but how. Law schools have a critical opportunity, and an ethical duty, to prepare future practitioners for a world where technology and human judgment must work side by side. Speed and convenience can never replace accuracy and integrity. As AI becomes a routine part of legal research, tomorrow's lawyers must be trained not just to prompt -- but to think.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
[4]
Generative AI is hallucinating again, this time in federal court filings
What Happened: So, it's starting to happen. Courts all over the world are getting legal documents from lawyers that are filled with... well, lies made up by AI. We're talking about totally fabricated court cases, fake quotes, and citations that just don't exist. A French data scientist and lawyer named Damien Charlotin has actually been tracking this. He found at least 490 court filings in just the last six months that had these AI "hallucinations." Most of them are from the U.S., where judges have been calling out (and even fining) lawyers for handing in this AI-generated nonsense. In one big case, a lawyer for MyPillow actually submitted a brief that had nearly 30 fake citations in it. Yikes.

Why This Is a Big Deal: This isn't just a lawyer problem; it's a huge dilemma for everyone. We're all starting to use these AI tools for everything - drafting reports, summarising meetings, doing research. The problem is that the AI is incredibly good at sounding confident, even when it's completely wrong. It gives you an answer that feels plausible but is just... false. As Charlotin said, "AI can be a boon, but there are these pitfalls." Even smart, experienced people are getting tricked by it, and that's a massive risk for any company that relies on AI-generated work without double-checking it.

Why Should I Care: Look, whether you're a teacher, a lawyer, or a manager, AI is becoming a part of the job. You can't avoid it. But as experts are warning, you absolutely cannot use these tools blindly. Maria Flynn, the CEO of Jobs for the Future quoted in the AP report, put it perfectly: treat AI as an "assistant, not a substitute." You still have to be the human in the loop. You have to check the facts, make sure you're not breaking privacy laws, and please, don't upload your company's secret data. As another attorney said, "People assume it's right because it sounds right -- but that assumption can cost you."
It can cost you your reputation, your job, or even land you in legal trouble.

What's Next: The bottom line is we all need to get smarter about how we use AI. Fast. Companies are being urged to actually train their teams on this stuff - how to check its work, how to use it safely, and what not to do. It's becoming a basic job skill. As Flynn put it, "The biggest pitfall is not learning to use AI at all." The future isn't about AI replacing people; it's about people who know how to use AI responsibly.
[5]
Mistake-filled legal briefs show the limits of relying on AI tools at work
NEW YORK (AP) -- Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence and submitted with errors such as citations to cases that don't exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI tools at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they're also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained "hallucinations," which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said. "Even the more sophisticated player can have an issue with this," Charlotin said. "AI can be a boon. It's wonderful, but also there are these pitfalls."

Charlotin, a senior research fellow at HEC Paris, a business school located just outside France's capital city, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.

But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and founder Michael Lindell. The legal profession isn't the only one wrestling with AI's foibles.
The AI overviews that appear at the top of web search result pages frequently contain errors. And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the details they upload or put into prompts to ensure they're safeguarding the confidential information of employers and clients. Legal and workplace experts share their experiences with AI's mistakes and describe perils to avoid.

Think of AI as an assistant

Don't trust AI to make big decisions for you. Some AI users treat the tool as an intern to whom you assign tasks and whose completed work you expect to check. "Think about AI as augmenting your workflow," said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don't think of it as a substitute that can do all of the work, she said.

When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team. "Some of the questions it proposed weren't the right context really for our organization, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions," she said.

Check for accuracy

Flynn also has found problems in the output of the AI tool, which still is in a pilot stage. She once asked it to compile information on work her organization had done in various states. But the AI tool was treating completed work and funding proposals as the same thing. "In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed," Flynn said. Luckily, she had the institutional knowledge to recognize the errors. "If you're new in an organization, ask coworkers if the results look accurate to them," Flynn suggested.
While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what AI generates, even if it's tempting to skip that step. "People are making an assumption because it sounds so plausible that it's right, and it's convenient," Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson, said. "Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that's a little inconvenient and time-consuming, but that's what you have to do. As much as you think the AI can substitute for that, it can't."

Be careful with notetakers

It can be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said. But many jurisdictions require the consent of participants prior to recording conversations. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips. Consult with colleagues in the legal or human resources departments before deploying a notetaker in high-risk situations such as investigations, performance reviews or legal strategy discussions, she suggested. "People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts," Kays said. "That is an issue that I would say companies should continue to watch as it is litigated."

Protecting confidential information

If you're using free AI tools to draft a memo or marketing campaign, don't tell it identifying information or corporate secrets. Once you've uploaded that information, it's possible others using the same tool might find it.
That's because when other people ask an AI tool questions, it will search available information, including details you revealed, as it builds its answer, Flynn said. "It doesn't discern whether something is public or private," she added.

Seek schooling

If your employer doesn't offer AI training, try experimenting with free tools such as ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you develop your understanding of how AI works and ways it can be useful. A course that teaches people how to construct the best AI prompts or hands-on courses that provide opportunities to practice are valuable, Flynn said. Despite potential problems with the tools, learning how they work can be beneficial at a time when they're ubiquitous. "The largest potential pitfall in learning to use AI is not learning to use it at all," Flynn said. "We're all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important."
[6]
Mistake-Filled Legal Briefs Show the Limits of Relying on AI Tools at Work
NEW YORK (AP) -- Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence and submitted with errors such as citations to cases that don't exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI tools at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they're also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained "hallucinations," which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said. "Even the more sophisticated player can have an issue with this," Charlotin said. "AI can be a boon. It's wonderful, but also there are these pitfalls."

Charlotin, a senior research fellow at HEC Paris, a business school located just outside France's capital city, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.

But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and founder Michael Lindell. The legal profession isn't the only one wrestling with AI's foibles.
The AI overviews that appear at the top of web search result pages frequently contain errors. And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the information they upload or put into prompts to ensure they're safeguarding the confidential information of employers and clients. Legal and workplace experts share their experiences with AI's mistakes and describe perils to avoid.

Think of AI as an assistant

Don't trust AI to make big decisions for you. Some AI users treat the tool as an intern to whom you assign tasks and whose completed work you expect to check. "Think about AI as augmenting your workflow," said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don't think of it as a substitute that can do all of the work, she said. When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team. "Some of the questions it proposed weren't the right context really for our organization, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions," she said.

Check for accuracy

Flynn also has found problems in the output of the AI tool, which still is in a pilot stage. She once asked it to compile information on work her organization had done in various states. But the AI tool was treating completed work and funding proposals as the same thing. "In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed," Flynn said. Luckily, she had the institutional knowledge to recognize the errors. "If you're new in an organization, ask coworkers if the results look accurate to them," Flynn suggested.
While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what AI generates, even if it's tempting to skip that step. "People are making an assumption because it sounds so plausible that it's right, and it's convenient," said Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson. "Having to go back and check all the cites, or when I look at a contract that AI has summarized, I have to go back and read what the contract says, that's a little inconvenient and time-consuming, but that's what you have to do. As much as you think the AI can substitute for that, it can't."

Be careful with notetakers

It can be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said. But many jurisdictions require the consent of participants prior to recording a conversation. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips. Consult with colleagues in the legal or human resources departments before deploying a notetaker in high-risk situations such as investigations, performance reviews or legal strategy discussions, she suggested. "People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts," Kays said. "That is an issue that I would say companies should continue to watch as it is litigated."

Protecting confidential information

If you're using free AI tools to draft a memo or marketing campaign, don't tell it identifying information or corporate secrets. Once you've uploaded that information, it's possible others using the same tool might find it.
That's because when other people ask an AI tool questions, it will search available information, including details you revealed, as it builds its answer, Flynn said. "It doesn't discern whether something is public or private," she added.

Seek schooling

If your employer doesn't offer AI training, try experimenting with free tools such as ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you develop your understanding of how AI works and ways it can be useful. A course that teaches people how to construct the best AI prompts or hands-on courses that provide opportunities to practice are valuable, Flynn said. Despite potential problems with the tools, learning how they work can be beneficial at a time when they're ubiquitous. "The largest potential pitfall in learning to use AI is not learning to use it at all," Flynn said. "We're all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important."
Legal professionals worldwide are facing serious consequences for submitting AI-generated legal briefs containing fabricated case law and citations. A growing database tracks nearly 500 court filings with AI hallucinations in just six months.
Courts worldwide are confronting an unprecedented challenge as legal professionals increasingly submit briefs containing AI-generated fabrications. French data scientist and lawyer Damien Charlotin has documented at least 490 court filings in the past six months containing AI "hallucinations" - false or misleading information generated by artificial intelligence tools [2]. The majority of these problematic cases originate from the United States, where judges have issued warnings and, in some instances, levied fines against attorneys [2].
Source: The Conversation
A landmark case in South Africa has highlighted the severity of this issue. In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, the legal team submitted nine case authorities to the High Court, of which only two were genuine [1]. The remaining seven were AI-fabricated hallucinations generated by ChatGPT. The court deemed this conduct "irresponsible and unprofessional" and referred the matter to the Legal Practice Council for investigation [1].
Source: Tech Xplore
This incident marks a turning point in judicial tolerance. While a previous 2023 case, Parker v Forsyth, also involved fake ChatGPT-generated case law, the judge showed more leniency, finding no intent to mislead [3]. The Mavundla ruling demonstrates that courts are losing patience with irresponsible AI usage.

Similar incidents have occurred globally, establishing a concerning pattern. In the 2024 American case Park v Kim, an attorney cited non-existent case law generated by ChatGPT in her reply brief [1]. The 2024 Canadian case Zhang v Chen involved a lawyer filing a notice of application containing two fabricated case authorities from ChatGPT. Even high-profile companies face scrutiny: a federal judge in Colorado ruled that a MyPillow Inc. lawyer filed a brief containing nearly 30 defective citations [2].
Source: Digital Trends
The Mavundla case revealed critical failures in professional oversight. The advocate admitted she had not verified the citations, relying instead on research conducted by a junior colleague [1]. The court emphasized that regardless of technological advancement, lawyers remain responsible for ensuring source accuracy, with workload pressure or AI ignorance providing no defense. The judge also criticized the supervising attorney for failing to review documents before filing, underscoring the ethical principle that senior lawyers must properly train and supervise junior colleagues [1].

Legal academics argue that law schools must address this crisis through comprehensive AI education. Most universities lack formal AI policies or training programs, leaving students without guidance in this rapidly evolving landscape [1]. The recommendation is not to ban AI tools outright but to develop "AI literacy" - the ability to question, verify, and contextualize AI-generated information [3].

Experts emphasize treating AI systems as assistants rather than authorities. Maria Flynn, CEO of Jobs for the Future, advocates thinking of AI as "augmenting your workflow" rather than substituting for human judgment [5]. This approach requires users to verify AI outputs, particularly when dealing with factual information that could have serious consequences if incorrect.