10 Sources
[1]
Lawyers could face 'severe' penalties for fake AI-generated citations, UK court warns | TechCrunch
The High Court of England and Wales says lawyers need to take stronger steps to prevent the misuse of artificial intelligence in their work. In a ruling tying together two recent cases, Judge Victoria Sharp wrote that generative AI tools like ChatGPT "are not capable of conducting reliable legal research." "Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect," Judge Sharp wrote. "The responses may make confident assertions that are simply untrue." That doesn't mean lawyers cannot use AI in their research, but she said they have a professional duty "to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work." Judge Sharp suggested that the growing number of cases where lawyers (including, on the U.S. side, lawyers representing major AI platforms) have cited what appear to be AI-generated falsehoods suggests that "more needs to be done to ensure that the guidance is followed and lawyers comply with their duties to the court," and she said her ruling will be forwarded to professional bodies including the Bar Council and the Law Society. In one of the cases in question, a lawyer representing a man seeking damages against two banks submitted a filing with 45 citations -- 18 of those cases did not exist, while many others "did not contain the quotations that were attributed to them, did not support the propositions for which they were cited, and did not have any relevance to the subject matter of the application," Judge Sharp said. In the other, a lawyer representing a man who had been evicted from his London home wrote a court filing citing five cases that did not appear to exist. 
(The lawyer denied using AI, though she said the citations may have come from AI-generated summaries that appeared in "Google or Safari.") Judge Sharp said that while the court decided not to initiate contempt proceedings, that is "not a precedent." "Lawyers who do not comply with their professional obligations in this respect risk severe sanction," she added. Both lawyers were either referred or referred themselves to professional regulators. Judge Sharp noted that when lawyers do not meet their duties to the court, the court's powers range from "public admonition" to the imposition of costs, contempt proceedings, or even "referral to the police."
[2]
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON, June 6 (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray. A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations. "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling. "In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services." The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago. Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty to not mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice". Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence". (Reporting by Sam Tobin; Editing by Sachin Ravikumar)
[3]
UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON (AP) -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[4]
UK Court Warns Lawyers Can Be Prosecuted Over A.I. Tools That 'Hallucinate' Fake Material
The High Court of England and Wales warned lawyers on Friday that they could face criminal prosecution for presenting to judges false material generated by artificial intelligence, after a series of cases cited made-up quotes and rulings that did not exist. In a rare intervention, one of the country's most senior judges said that existing guidance to lawyers had proved "insufficient to address the misuse of artificial intelligence" and that further steps were urgently needed. The ruling by Victoria Sharp, president of the King's Bench Division of the High Court, and a second judge, Jeremy Johnson, detailed two recent cases in which fake material was used in written legal arguments that were presented in court.
[5]
High court tells UK lawyers to 'urgently' stop misuse of AI in legal work
Ruling follows two cases blighted by actual or suspected use of artificial intelligence that created fake case-law citations. The high court has told senior lawyers to take urgent action to prevent the misuse of artificial intelligence after dozens of fake case-law citations were put before the courts that were either completely fictitious or contained made-up passages. Lawyers are increasingly using AI systems to help them build legal arguments, but two cases this year were blighted by made-up case-law citations that were either confirmed or suspected to have been generated by AI. In an £89m damages case against the Qatar National Bank, the claimants made 45 case-law citations, 18 of which turned out to be fictitious, with quotes in many of the others also bogus. The claimant admitted using publicly available AI tools and his solicitor accepted he cited the sham authorities. When Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide its client with temporary accommodation, its lawyer cited phantom case law five times. Suspicions were raised when the solicitor defending the council had to repeatedly query why they could not find any trace of the supposed authorities. It resulted in a legal action for wasted legal costs and a court found the law centre and its lawyer, a pupil barrister, were negligent. The barrister denied using AI in that case but said she may have inadvertently done so while using Google or Safari in preparation for a separate case where she also cited phantom authorities. In that case she said she may have taken account of AI summaries without realising what they were. 
In a regulatory ruling responding to the cases on Friday, Dame Victoria Sharp, the president of the King's bench division, said there were "serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused" and that lawyers misusing AI could face sanctions, from public admonishment to facing contempt of court proceedings and referral to the police. She called on the Bar Council and the Law Society to consider steps to curb the problem "as a matter of urgency" and told heads of barristers' chambers and managing partners of solicitors to ensure all lawyers know their professional and ethical duties if using AI. "Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect," she wrote. "The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source." Ian Jeffery, the chief executive of the Law Society of England and Wales, said the ruling "lays bare the dangers of using AI in legal work". "Artificial intelligence tools are increasingly used to support legal service delivery," he added. "However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work." The cases are not the first to have been blighted by AI-created hallucinations. In a UK tax tribunal in 2023, an appellant who claimed to have been helped by "a friend in a solicitor's office" provided nine bogus historical tribunal decisions as supposed precedents. She admitted it was "possible" she had used ChatGPT, but said it surely made no difference as there must be other cases that made her point. 
The appellants in a €5.8m (£4.9m) Danish case this year narrowly avoided contempt proceedings when they relied on a made-up ruling that the judge spotted. And a 2023 case in the US district court for the southern district of New York descended into chaos when a lawyer was challenged to produce the seven apparently fictitious cases they had cited. The lawyer simply asked ChatGPT to summarise the cases it had already made up; the judge called the result "gibberish" and fined the two lawyers and their firm $5,000.
[6]
UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON (AP) -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[7]
UK judge raises alarm after lawyers submit fake legal cases produced by AI tools
Lawyers in England have been caught citing fake cases generated by artificial intelligence in court, prompting a judge to warn of potential prosecution for failing to verify research accuracy. The misuse of AI raises serious concerns about the integrity of the justice system. Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said - warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound (USD 120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologised for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. 
Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[8]
UK judge warns of risk to justice after lawyers cited fake...
Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said, warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor, Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[9]
U.K. judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound (US$120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[10]
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray. A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations. "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling. "In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services." The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago. Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty to not mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice". Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence". (Reporting by Sam Tobin; Editing by Sachin Ravikumar)
The High Court of England and Wales has issued a stern warning to lawyers about the misuse of AI in legal work, particularly citing non-existent cases. The court emphasizes the need for stronger measures to prevent AI misuse and warns of potential severe sanctions for non-compliance.
In a landmark ruling, Judge Victoria Sharp of the High Court of England and Wales emphasized that lawyers could face "severe" penalties, including potential criminal charges, for presenting AI-generated fake citations in court 1.
The ruling comes in response to two recent cases where lawyers submitted court filings with numerous non-existent or irrelevant citations. In one instance, a lawyer representing a claimant in a £90 million lawsuit cited 18 non-existent cases out of 45 total citations 2. Another case involved a lawyer representing an evicted tenant who cited five non-existent cases 3.
Judge Sharp pointed out that while AI tools like ChatGPT can produce seemingly coherent and plausible responses, they are "not capable of conducting reliable legal research" 1. She warned that these tools may make confident assertions that are entirely untrue or cite non-existent sources.
The court emphasized that lawyers have a professional duty to verify the accuracy of AI-generated research against authoritative sources before using it in their work. Failure to comply with these obligations could result in a range of sanctions, from public admonition to the imposition of costs, contempt proceedings, or referral to the police 1.
In the most severe cases, deliberately placing false material before the court could amount to the criminal offense of perverting the course of justice 2.
Judge Sharp's ruling highlights the need for more robust measures to ensure compliance with professional and ethical standards. She stated that existing guidance has proven "insufficient to address the misuse of artificial intelligence" and called for urgent action from legal regulatory bodies and industry leaders 4.
The ruling has significant implications for the legal profession's use of AI tools. Ian Jeffery, Chief Executive of the Law Society of England and Wales, acknowledged the increasing use of AI in legal service delivery but emphasized the "real risk of incorrect outputs" and the necessity for lawyers to thoroughly check and review their work 5.
This issue is not unique to the UK. Similar incidents have occurred in other jurisdictions, including a case in the US where lawyers were fined $5,000 for citing fictitious cases generated by ChatGPT 5. These cases underscore the global challenge facing the legal profession as it grapples with the integration of AI technologies.