9 Sources
[1]
Australian lawyer apologizes for AI-generated errors in murder case
MELBOURNE, Australia (AP) -- A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases and requested that defense lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
[2]
Australian lawyer apologises for AI-generated errors in murder case
A barrister was forced to apologise after AI created fake quotes and made-up judgments in submissions he filed in a murder case in front of an Australian court. Defence lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder in front of the Supreme Court of Victoria state in Melbourne. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who could not find the cases and requested that defence lawyers provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained that they had checked the initial citations and assumed the others were also accurate. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Judge Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice." The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines (€4,270) on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P Kevin Castel said they acted in bad faith, but he accepted their apologies and remedial steps in lieu of a harsher sentence. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for US President Donald Trump. Cohen took the blame, saying he did not realise that the Google tool he was using for legal research was also capable of so-called AI hallucinations. UK High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. In a regulatory ruling following dozens of AI-generated fake citations put before courts across several cases in the UK, Sharp said the issue raised "serious implications for the ... public confidence in the justice system if artificial intelligence is misused."
[3]
Australia murder case court filings include fake quotes and nonexistent judgments generated by AI
A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and nonexistent case judgments generated by artificial intelligence. The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team. The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. Elliott ruled on Thursday that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added. The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court. The errors were discovered by Elliott's associates, who couldn't find the cases cited and requested that defense lawyers provide copies, the Australian Broadcasting Corporation reported. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct. The submissions were also sent to prosecutor Daniel Porceddu, who didn't check their accuracy. The judge noted that the Supreme Court released guidelines last year for how lawyers use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said. The court documents do not identify the generative artificial intelligence system used by the lawyers. In a comparable case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won't again let artificial intelligence tools prompt them to produce fake legal history in their arguments. Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn't realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations. British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison. The use of artificial intelligence is making its way into U.S. courtrooms in other ways.
In April, a man named Jerome Dewald appeared before a New York court and submitted a video that featured an AI-generated avatar to deliver an argument on his behalf. In May, a man who was killed in a road rage incident in Arizona "spoke" during his killer's sentencing hearing after his family used artificial intelligence to create a video of him reading a victim impact statement.
[4]
Australian lawyer apologizes for AI-generated errors in murder case
[5]
Australian lawyer apologizes for AI-generated errors in murder case
[6]
Australian Lawyer Apologizes for AI-Generated Errors in Murder Case
[7]
Australian lawyer apologises for AI-generated errors in murder case - The Economic Times
[8]
Australian lawyer apologizes for AI-generated errors in murder case
[9]
Lawyer 'deeply sorry' for submitting fake, AI-generated quotes in...
A senior Australian lawyer apologizes for submitting AI-generated fake quotes and non-existent case judgments in a murder trial, causing a 24-hour delay and raising concerns about AI use in legal proceedings.
In a significant incident highlighting the risks of artificial intelligence in legal proceedings, a senior Australian lawyer has apologized for submitting AI-generated fake quotes and non-existent case judgments in a murder trial. The blunder, which occurred in the Supreme Court of Victoria state, caused a 24-hour delay in resolving the case and raised serious concerns about the use of AI in the justice system [1].
Defense lawyer Rishi Nathwani, who holds the prestigious title of King's Counsel, took "full responsibility" for filing incorrect information in submissions for a teenager charged with murder. "We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on behalf of the defense team [2].
The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court.
These errors were discovered by Justice Elliott's associates, who couldn't find the cited cases and requested that defense lawyers provide copies. The lawyers subsequently admitted that the citations "do not exist" and that the submission contained "fictitious quotes" [3].
Justice Elliott expressed his dissatisfaction with the incident, stating, "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory." He emphasized the fundamental importance of the court's ability to rely on the accuracy of submissions made by counsel for the due administration of justice [4].
The judge noted that the Supreme Court had released guidelines last year for how lawyers should use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott stated [5].
This incident is not isolated, as similar mishaps involving AI have occurred in justice systems around the world:
In the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim [1].
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump [2].
In the UK, High Court Justice Victoria Sharp warned that providing false material as if it were genuine could be considered contempt of court or, in the most egregious cases, perverting the course of justice, which carries a maximum sentence of life in prison [2].
Despite the AI-generated errors, Justice Elliott ruled that Nathwani's client, a minor who cannot be identified, was not guilty of murder due to mental impairment [3].
This incident has sparked discussions about the need for stricter guidelines and verification processes when using AI in legal proceedings. It also serves as a cautionary tale for legal professionals worldwide about the potential pitfalls of relying on AI-generated content without thorough human oversight and verification.