8 Sources
[1]
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON, June 6 (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray. A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations. "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling. "In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services." The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago. Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty to not mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice". Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence". Reporting by Sam Tobin; Editing by Sachin Ravikumar
[2]
UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON (AP) -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[3]
UK Court Warns Lawyers Can Be Prosecuted Over A.I. Tools That 'Hallucinate' Fake Material
The High Court of England and Wales warned lawyers on Friday that they could face criminal prosecution for presenting to judges false material generated by artificial intelligence, after a series of cases cited made-up quotes and rulings that did not exist. In a rare intervention, one of the country's most senior judges said that existing guidance to lawyers had proved "insufficient to address the misuse of artificial intelligence" and that further steps were urgently needed. The ruling by Victoria Sharp, president of the King's Bench Division of the High Court, and a second judge, Jeremy Johnson, detailed two recent cases in which fake material was used in written legal arguments that were presented in court.
[4]
High court tells UK lawyers to 'urgently' stop misuse of AI in legal work
Ruling follows two cases blighted by actual or suspected use of artificial intelligence that created fake case-law citations The high court has told senior lawyers to take urgent action to prevent the misuse of artificial intelligence after dozens of fake case-law citations were put before the courts that were either completely fictitious or contained made-up passages. Lawyers are increasingly using AI systems to help them build legal arguments, but two cases this year were blighted by made-up case-law citations which were either definitely or suspected to have been generated by AI. In an £89m damages case against the Qatar National Bank, the claimants made 45 case-law citations, 18 of which turned out to be fictitious, with quotes in many of the others also bogus. The claimant admitted using publicly available AI tools and his solicitor accepted he cited the sham authorities. When Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide its client with temporary accommodation, its lawyer cited phantom case law five times. Suspicions were raised when the solicitor defending the council had to repeatedly query why they could not find any trace of the supposed authorities. It resulted in a legal action for wasted legal costs and a court found the law centre and its lawyer, a pupil barrister, were negligent. The barrister denied using AI in that case but said she may have inadvertently done so while using Google or Safari in preparation for a separate case where she also cited phantom authorities. In that case she said she may have taken account of AI summaries without realising what they were. 
In a regulatory ruling responding to the cases on Friday, Dame Victoria Sharp, the president of the King's bench division, said there were "serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused" and that lawyers misusing AI could face sanctions, from public admonishment to facing contempt of court proceedings and referral to the police. She called on the Bar Council and the Law Society to consider steps to curb the problem "as a matter of urgency" and told heads of barristers' chambers and managing partners of solicitors to ensure all lawyers know their professional and ethical duties if using AI. "Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect," she wrote. "The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source." Ian Jeffery, the chief executive of the Law Society of England and Wales, said the ruling "lays bare the dangers of using AI in legal work". "Artificial intelligence tools are increasingly used to support legal service delivery," he added. "However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work." The cases are not the first to have been blighted by AI-created hallucinations. In a UK tax tribunal in 2023, an appellant who claimed to have been helped by "a friend in a solicitor's office" provided nine bogus historical tribunal decisions as supposed precedents. She admitted it was "possible" she had used ChatGPT, but said it surely made no difference as there must be other cases that made her point. 
The appellants in a €5.8m (£4.9m) Danish case this year narrowly avoided contempt proceedings when they relied on a made-up ruling that the judge spotted. And a 2023 case in the US district court for the southern district of New York descended into chaos when a lawyer was challenged to produce the seven apparently fictitious cases they had cited. The lawyer simply asked ChatGPT to summarise the cases it had already made up; the result, said the judge, was "gibberish", and he fined the two lawyers and their firm $5,000.
[5]
UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON (AP) -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[6]
UK judge raises alarm after lawyers submit fake legal cases produced by AI tools
Lawyers in England have been caught citing fake cases generated by artificial intelligence in court, prompting a judge to warn of potential prosecution for failing to verify research accuracy. The misuse of AI raises serious concerns about the integrity of the justice system. Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said - warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound (USD 120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologised for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. 
Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[7]
U.K. judge warns of risk to justice after lawyers cited fake AI-generated cases in court
LONDON -- Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said -- warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has "serious implications for the administration of justice and public confidence in the justice system." In the latest example of how judicial systems around the world are grappling with how to handle the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about "suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked," leading to false information being put before the court. In a ruling written by Sharp, the judges said that in a 90 million pound (US$120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around." In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action. 
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a "powerful technology" and a "useful tool" for the law. "Artificial intelligence is a tool that carries with it risks as well as opportunities," the judge said. "Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
[8]
Lawyers face sanctions for citing fake cases with AI, warns UK judge
LONDON (Reuters) - Lawyers who use artificial intelligence to cite non-existent cases can be held in contempt of court or even face criminal charges, London's High Court warned on Friday, in the latest example of generative AI leading lawyers astray. A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations. "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling. "In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services." The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities, since ChatGPT and other generative AI tools became widely available more than two years ago. Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty to not mislead the court, which could also amount to contempt of court. She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice". Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence". (Reporting by Sam Tobin; Editing by Sachin Ravikumar)
The High Court of England and Wales has issued a stern warning to lawyers about the misuse of AI in legal work, following incidents where fake cases generated by AI were cited in court proceedings.
In a landmark ruling, the High Court of England and Wales has sounded the alarm on the misuse of artificial intelligence (AI) in legal work. Justice Victoria Sharp, president of the King's Bench Division, warned that lawyers who cite non-existent cases generated by AI could face severe consequences, including contempt of court charges and potential criminal prosecution [1].
The ruling comes in response to two recent cases where lawyers presented fake material in court, presumably generated by AI tools. In one instance, a £90 million lawsuit involving Qatar National Bank saw a lawyer cite 18 non-existent cases [2]. Another case involved a tenant's housing claim against the London Borough of Haringey, where five fictitious cases were referenced [4].
Justice Sharp emphasized the gravity of the situation, stating, "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused" [1]. The court has called for urgent action from legal regulators and industry leaders to ensure lawyers understand their ethical obligations when using AI tools [3].
The High Court ruling outlines a range of potential consequences for lawyers who misuse AI, from public admonishment to contempt of court proceedings and referral to the police [4].
This is not an isolated issue in the UK, and similar incidents have occurred in other jurisdictions: a 2023 UK tax tribunal appellant cited nine bogus historical decisions, claimants in a €5.8m Danish case this year narrowly avoided contempt proceedings over a made-up ruling, and a 2023 case in the US district court for the southern district of New York ended with two lawyers and their firm fined $5,000 over seven fictitious citations [4].
Justice Sharp acknowledged AI as a "powerful technology" and a "useful tool" for the legal profession. However, she stressed the need for appropriate oversight and a regulatory framework to ensure compliance with professional and ethical standards [5].
The Law Society of England and Wales responded to the ruling, with CEO Ian Jeffery stating, "The ruling lays bare the dangers of using AI in legal work" [4]. He emphasized the need for lawyers to rigorously check and review AI-generated outputs to ensure accuracy.
As the legal profession grapples with the integration of AI tools, this ruling serves as a stark reminder of the potential pitfalls and the paramount importance of maintaining the integrity of the justice system in the face of rapidly advancing technology.