13 Sources
[1]
Australian lawyer apologizes for AI-generated errors in murder case
MELBOURNE, Australia (AP) -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[2]
WA lawyer referred to regulator after preparing documents with AI-generated citations for nonexistent cases
Judge warns of 'inherent dangers' of lawyers relying solely on AI amid growing number of fake citations or other errors due to the technology A lawyer has been referred to Western Australia's legal regulator after using artificial intelligence in preparing court documents for an immigration case. The documents contained AI-generated case citations for cases that did not exist. It is one of more than 20 cases so far in Australia in which AI use has resulted in fake citations or other errors in court submissions, with warnings from judges across the country to be wary of using the technology in the legal profession. In a federal court judgment published this week, the anonymised lawyer was referred to the Legal Practice Board of Western Australia for consideration and ordered to pay the federal government's costs of $8,371.30 after submissions to an immigration case were found by the representative for the immigration minister to include four case citations that did not exist. Justice Arran Gerrard said the incident "demonstrates the inherent dangers associated with practitioners solely relying on the use of artificial intelligence in the preparation of court documents and the way in which that interacts with a practitioner's duty to the court". The lawyer told the court in an affidavit that he had relied on Anthropic's Claude AI "as a research tool to identify potentially relevant authorities and to improve my legal arguments and position", and then used Microsoft Copilot to validate the submissions. The lawyer said he had "developed an overconfidence in relying on AI tools and failed to adequately verify the generated results". "I had an incorrect assumption that content generated by AI tools would be inherently reliable, which led me to neglect independently verifying all citations through established legal databases," the lawyer said in the affidavit. The lawyer unreservedly apologised to the court and the minister's solicitors for the errors. Gerrard said the court "does not adopt a luddite approach" to the use of generative AI, and understood why the complexity of migration law might make using an AI tool attractive. But he warned there was now a "concerning number" of cases where AI had led to citation of fictitious cases. Gerrard said it risked "a good case to be undermined by rank incompetence" and the prevalence of such cases "significantly wastes the time and resources of opposing parties and the court". He said it also risked damage to the legal profession. Gerrard said the lawyer did "not fully comprehend what was required of him" and it was not sufficient to merely check that the cases cited were not fake, but to review those cases thoroughly. "Legal principles are not simply slogans which can be affixed to submissions without context or analysis." There have been at least 20 cases of AI hallucinations reported in Australian courts since generative AI tools exploded in popularity in 2023. Last week, a Victorian supreme court judge criticised lawyers acting for a boy accused of murder for filing misleading information with the courts after failing to check documents created using AI. The documents included references to nonexistent case citations and inaccurate quotes from a parliamentary speech. There have also been similar cases involving lawyers in New South Wales and Victoria in the past year, who were referred to their state's regulatory bodies. However, the spate of cases is not just limited to qualified lawyers. 
In a NSW supreme court decision this month, a self-represented litigant in a trusts case admitted to the chief justice, Andrew Bell, that she had used AI to prepare her speech for the appeal hearing. Bell said in his judgment that he was not criticising the person, who he said was doing her best to represent herself. But he said problems with using AI in preparing submissions were exacerbated when the technology was used by unrepresented litigants "who are not subject to the professional and ethical responsibilities of legal practitioners". He said the use of generative AI tools "may introduce added costs and complexity" to proceedings and "add to the burden of other parties and the court in responding to it". "Notwithstanding the fact that generative AI may contribute to improved access to justice, which is itself an obviously laudable goal, the present case illustrates the need for judicial vigilance in its use, especially but not only, by unrepresented litigants." The Law Council of Australia's president, Juliana Warner, said sophisticated AI tools offered unique opportunities to support the legal profession in administrative tasks, but reliance on AI tools did not diminish the professional judgment a legal practitioner was expected to bring to a client's matter. "Where these tools are utilised by lawyers, this must be done with extreme care," she said. "Lawyers must always keep front of mind their professional and ethical obligations to the court and to their clients." Warner said courts were regarding cases where AI had generated fake citations as a "serious concern", but added that given the widespread use of generative AI, a broadly framed prohibition on its use in legal proceedings would be "neither practical nor proportionate, and risks hindering innovation and access to justice".
[3]
Lawyers File AI Slop in Murder Case
Yet another team of lawyers was found leaving AI slop in court documents. It's the latest example of white-collar professionals outsourcing their work to confidently wrong AI tools -- and this time, it's not just about any old frivolous lawsuit. As The Guardian reports, a pair of Australian lawyers named Rishi Nathwani and Amelia Beech, who are representing a 16-year-old defendant in a murder case, were caught using AI after documents they submitted to prosecutors proved to be riddled with a series of bizarre errors, including made-up citations and a misquoted parliamentary speech. The hallucinations caused a series of mishaps, highlighting how even just one AI hallucination in this setting can have a domino-like effect. Per The Guardian, the prosecution didn't double-check the accuracy of the defense's references, which caused them to draw up arguments based on AI-fabricated misinformation. It was the judge who finally noticed that something was amiss, and when the defense was confronted about the wild array of mistakes in court, they admitted to using generative AI to whip up the documents. Worse yet, that wasn't even the end of the defense's inadmissible behavior. As The Guardian explains, the defense re-submitted purportedly revised documents -- only for those documents to include more AI-generated errors, including completely nonexistent laws. "It is not acceptable for AI to be used unless the product of that use is independently and thoroughly verified," Justice James Elliott told Melbourne's Supreme Court, as quoted by the newspaper, adding that "the manner in which these events have unfolded is unsatisfactory." The stakes are incredibly high in this case. Nathwani and Beech are defending a minor accused of murdering a 41-year-old woman while attempting to steal her car (per the newspaper, the teen was ultimately found not guilty of murder on grounds that he was cognitively impaired at the time of the killing). Elliott expressed concern that the "use of AI without careful oversight of counsel would seriously undermine this court's ability to deliver justice," according to The Guardian, as AI-generated misinformation could stand to "mislead" the legal system. The incident is a worrying indictment of the widespread use of a tech that's still plagued by constant hallucinations. Wielded without sufficient oversight by legal professionals, such tools could stand to alter the course of justice. Real decisions, in other words, could be made based on the nonsensical musings of a hallucinating AI.
[4]
Australian lawyer apologises for AI-generated errors in murder case
The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. A barrister was forced to apologise after AI created fake quotes and made-up judgments in submissions he filed in a murder case in front of an Australian court. Defence lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder in front of the Supreme Court of Victoria state in Melbourne. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who could not find the cases and requested that defence lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained that they had checked the initial citations and assumed the others were also accurate. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Judge Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice." The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines (€4,270) on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P Kevin Castel said they acted in bad faith, but he accepted their apologies and remedial steps in lieu of a harsher sentence. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for US President Donald Trump. Cohen took the blame, saying he did not realise that the Google tool he was using for legal research was also capable of so-called AI hallucinations. UK High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. In a regulatory ruling following dozens of AI-generated fake citations put before courts across several cases in the UK, Sharp said the issue raised "serious implications for the ... public confidence in the justice system if artificial intelligence is misused."
[5]
Australian lawyer sorry for AI errors in murder case, including fake quotes and made up cases
MELBOURNE, Australia -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.
[6]
Australia murder case court filings include fake quotes and nonexistent judgments generated by AI
A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases cited and requested that defense lawyers provide copies, the Australian Broadcasting Corporation reported. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. The use of artificial intelligence is making its way into U.S. courtrooms in other ways.
In April, a man named Jerome Dewald appeared before a New York court and submitted a video that featured an AI-generated avatar to deliver an argument on his behalf. In May, a man who was killed in a road rage incident in Arizona "spoke" during his killer's sentencing hearing after his family used artificial intelligence to create a video of him reading a victim impact statement.
[7]
AI-generated errors set back this murder case in an Australian Supreme Court
According to court documents, defense lawyer Rishi Nathwani, took 'full responsibility' for filing incorrect submissions in a murder case. A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.
[8]
Australian lawyer apologizes for AI-generated errors in murder case
MELBOURNE, Australia (AP) -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[9]
Australian lawyer apologizes for AI-generated errors in murder case
MELBOURNE, Australia -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[10]
Australian Lawyer Apologizes for AI-Generated Errors in Murder Case
MELBOURNE, Australia (AP) -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[11]
Australian lawyer apologises for AI-generated errors in murder case - The Economic Times
MELBOURNE, Australia - A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[12]
Australian lawyer apologizes for AI-generated errors in murder case
MELBOURNE, Australia -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed US$5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.
[13]
Lawyer 'deeply sorry' for submitting fake, AI-generated quotes in...
A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for US President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.
A senior Australian lawyer apologizes for submitting AI-generated fake quotes and nonexistent case citations in a murder trial, sparking debate on AI use in legal systems worldwide.
In a significant development highlighting the risks of artificial intelligence in legal proceedings, a senior Australian lawyer has apologized for submitting AI-generated fake quotes and nonexistent case citations in a high-profile murder trial. The incident, which occurred in the Supreme Court of Victoria, has sparked a broader debate on the use of AI in legal systems worldwide [1].
Defense lawyer Rishi Nathwani, who holds the prestigious title of King's Counsel, took "full responsibility" for filing incorrect information in submissions for a case involving a teenager charged with murder. The AI-generated errors caused a 24-hour delay in resolving the case and led to a stern rebuke from Justice James Elliott [1].
The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. These errors were discovered by Justice Elliott's associates, who couldn't find the cited cases and requested that defense lawyers provide copies. The lawyers subsequently admitted that the citations "do not exist" and that the submission contained "fictitious quotes" [1].
Justice Elliott emphasized the gravity of the situation, stating, "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice" [4]. This incident has raised serious concerns about the potential for AI to undermine the integrity of legal proceedings if not properly verified and monitored.
This case is not isolated; similar incidents have occurred in other jurisdictions:
In the United States, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for submitting fictitious legal research in an aviation injury claim [1].
Lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump, cited fictitious court rulings invented by AI in legal papers [4].
In Western Australia, a lawyer was referred to the state's legal regulator after using AI to prepare court documents containing citations for nonexistent cases in an immigration matter [2].
In response to these incidents, courts and legal bodies are issuing warnings and guidelines:
The Supreme Court of Victoria released guidelines last year on how lawyers should use AI [1].
UK High Court Justice Victoria Sharp warned that providing false material generated by AI could be considered contempt of court or, in extreme cases, perverting the course of justice [4].
The Law Council of Australia emphasized that reliance on AI tools does not diminish the professional judgment expected from legal practitioners [2].
As AI continues to evolve and integrate into various professional fields, this incident serves as a cautionary tale for the legal profession. It underscores the need for rigorous verification processes and ethical guidelines to ensure that AI tools enhance rather than undermine the integrity of legal proceedings. The challenge lies in balancing the potential benefits of AI in improving access to justice with the risks of misinformation and errors that could have far-reaching consequences in the legal system.
Summarized by Navi