2 Sources
[1]
Why a US court allowed a dead man to deliver his own victim impact statement - via an AI avatar
In November 2021, in the city of Chandler, Arizona, Chris Pelkey was shot and killed by Gabriel Horcasitas in a road rage altercation. Horcasitas was tried and convicted of reckless manslaughter. When it was time for Horcasitas to be sentenced, Pelkey's family knew they wanted to make a statement - known as a "victim impact statement" - explaining to the judge who Pelkey had been when he was alive. They found they couldn't get the words right. The solution turned out to be letting Pelkey speak for himself: the family created an AI-generated avatar that used his face and voice, allowing him to "talk" directly to the judge. This marked the first time a United States court had allowed an AI-generated victim to make this kind of beyond-the-grave statement, and likely the first time something like this had occurred anywhere in the world.

How was the AI avatar made and received?

The avatar was created by Pelkey's sister Stacey Wales and her husband Tim, with Stacey writing the words "spoken" by Pelkey - words that were not taken from anything he actually said when he was alive but based on what she believed he would have said. The avatar was built from samples of Pelkey's voice in videos recorded before his death and from photos the family had of him - specifically a photo used at his funeral. In the video, Pelkey "says" he believes in forgiveness and "a God who forgives", and that "in another life" he and Horcasitas could have been friends. After the video was played in court, Judge Todd Lang, who had allowed the AI statement to be delivered, said he "loved" the AI, adding that he "heard the forgiveness" contained in it and felt it was "genuine". In the end, Horcasitas was sentenced to the maximum of ten-and-a-half years - more than the nine years the prosecution was seeking, but equal to what Pelkey's family asked for in their own victim impact statements.

Could this happen in Australia?
In general, court rules are similar across Australian states and territories, and it is unlikely these technological advances would be acceptable in Australian sentencing courts. The rules allow victims or their families to read their statement to the court, but this is limited to written statements, usually edited by the prosecution, although victims may include drawings and photos where approved. A victim will generally read their own statement to the court. Where the victim has died, family members can make a statement speaking to their own trauma and loss. Sometimes victims ask the prosecutor to read their statement, or the prosecutor merely hands a written statement to the judge. To date, no Australian court has permitted family members to speak on behalf of the deceased victim, and family members are generally limited to describing harms they have directly suffered. Victims may also be cross-examined by defence counsel on the content of their statements. Creating and editing an AI avatar would be time-consuming and expensive for prosecutors, and cross-examination of an avatar by the defence would be impossible.

Compared to the US, there is generally far less tolerance in Australian courts for dramatic readings of statements or the use of audio-visual materials. In the US, victims enjoy greater freedom to invoke emotion, explore personal narratives and even show videos of the deceased, all to give the court a better sense of the victim as a person. The use of an AI avatar, therefore, is not too far from what is already allowed in most US courts. Despite these allowances, there is still concern that the emotional impact of a more direct statement from an AI victim could be used to manipulate the court by putting words into the victim's virtual mouth. As can be seen in the Arizona sentencing, Judge Lang was clearly affected by the emotions generated by the AI Pelkey.

Changes to Australian law would be needed to ban the use of AI recordings specifically. But even without such changes, Australian sentencing practice is already so restrictive as to essentially preclude such technology. It seems Australia is some way from joining Arizona in allowing an AI avatar of a deceased person to speak from "beyond the grave".
[2]
Why a US court allowed a dead man to deliver his own victim impact statement -- via an AI avatar
In November 2021, in the city of Chandler, Arizona, Chris Pelkey was shot and killed by Gabriel Horcasitas in a road rage altercation. Horcasitas was tried and convicted of reckless manslaughter. When it was time for Horcasitas to be sentenced by a judge, Pelkey's family knew they wanted to make a statement -- known as a "victim impact statement" -- explaining to the judge who Pelkey had been when he was alive. They found they couldn't get the words right. The solution for them turned out to be having Pelkey speak for himself by creating an AI-generated avatar that used his face and voice, allowing him to "talk" directly to the judge. This marked the first time a United States court had allowed an AI-generated victim to make this kind of beyond-the-grave statement, and likely the first time something like this had occurred anywhere in the world.

How was the AI avatar made and received?

The AI avatar was created by Pelkey's sister Stacey Wales and her husband Tim, with Stacey writing the words "spoken" by Pelkey -- words that were not taken from anything he actually said when he was alive but based on what she believed he would have said. The avatar was created by using samples of Pelkey's voice from videos that had been recorded before his death and photos the family had of him -- specifically a photo used at his funeral. In the video, Pelkey "says" he believes in forgiveness and "a God who forgives," and that "in another life" he and Horcasitas could have been friends. After the video was played in court, Judge Todd Lang, who had allowed the AI statement to be delivered, stated he "loved" the AI, adding he "heard the forgiveness" contained in it. He further stated he felt the forgiveness was "genuine." In the end, Horcasitas was sentenced to the maximum of ten-and-a-half years -- more than the nine years the prosecution was seeking but equal to what Pelkey's family asked for in their own victim impact statements.

Could this happen in Australia?
In general, court rules are similar across Australian states and territories, and it is unlikely these technological advances would be acceptable in Australian sentencing courts. The rules allow victims or their families to read their statement to the court, but this is limited to written statements, usually edited by the prosecution, although victims may include drawings and photos where approved. A victim will generally read their own statement to the court. Where the victim has died, family members can make a statement speaking to their own trauma and loss. Sometimes victims ask the prosecutor to read their statement, or the prosecutor merely hands a written statement to the judge. To date, no Australian court has permitted family members to speak on behalf of the deceased victim, and family members are generally limited to describing harms they have directly suffered. Victims may also be cross-examined by defense counsel on the content of their statements. Creating and editing an AI avatar would be time-consuming and expensive for prosecutors, and cross-examination of an avatar by the defense would be impossible.

Compared to the US, there is generally far less tolerance in Australian courts for dramatic readings of statements or the use of audio-visual materials. In the US, victims enjoy greater freedom to invoke emotion, explore personal narratives and even show videos of the deceased, all to give the court a better sense of the victim as a person. The use of an AI avatar, therefore, is not too far from what is already allowed in most US courts. Despite these allowances, there is still concern that the emotional impact of a more direct statement from an AI victim could be used to manipulate the court by putting words into the victim's virtual mouth. As can be seen in the Arizona sentencing, Judge Lang was clearly affected by the emotions generated by the AI Pelkey.

Changes to Australian law would be needed to ban the use of AI recordings specifically. But even without such changes, Australian sentencing practice is already so restrictive as to essentially preclude such technology. It seems Australia is some way from joining Arizona in allowing an AI avatar of a deceased person to speak from "beyond the grave."
In a groundbreaking decision, a US court permitted an AI-generated avatar of a deceased victim to deliver a victim impact statement, raising questions about the use of AI in legal proceedings and its potential implications for the justice system.
In a groundbreaking legal development, a United States court has allowed an AI-generated avatar to deliver a victim impact statement on behalf of a deceased individual. This marks the first instance of such technology being used in a courtroom setting, not only in the US but potentially worldwide [1][2].
The case revolves around Chris Pelkey, who was fatally shot by Gabriel Horcasitas during a road rage incident in Chandler, Arizona, in November 2021. Horcasitas was subsequently convicted of reckless manslaughter [1][2].
Source: The Conversation
Pelkey's sister, Stacey Wales, and her husband Tim took the innovative approach of creating an AI-generated avatar to represent Chris Pelkey during the sentencing phase. The avatar was constructed using samples of Pelkey's voice from pre-existing videos and a photograph used at his funeral [1][2].
Stacey Wales wrote the statement "spoken" by the AI avatar, basing it on what she believed her brother would have said. In the video, the AI Pelkey expressed beliefs in forgiveness and suggested that under different circumstances, he and Horcasitas could have been friends [1][2].
Judge Todd Lang, who permitted the use of the AI statement, responded positively to the presentation. He stated that he "loved" the AI and "heard the forgiveness" in the statement, which he felt was "genuine" [1][2].
The use of AI in this context raises important questions about the potential influence of technology on legal proceedings. Despite concerns about emotional manipulation, the judge's reaction suggests that the AI-generated statement had a significant impact [1][2].
Ultimately, Horcasitas received the maximum sentence of ten-and-a-half years, which exceeded the prosecution's request but aligned with the wishes of Pelkey's family [1][2].
The stark contrast between this US case and the Australian legal system highlights the diverse approaches to victim impact statements globally. Australian courts generally have more restrictive practices regarding such statements [1][2].
In Australia, by contrast, victim impact statements are generally limited to written form, usually edited by the prosecution; family members of a deceased victim may speak only to their own trauma and loss, not on the victim's behalf; statements may be cross-examined by defence counsel; and courts show far less tolerance for dramatic readings or audio-visual material [1][2].
While this case represents a significant milestone in the use of AI within the US legal system, it also prompts a broader discussion about the role of technology in courtrooms worldwide. The emotional impact of AI-generated statements and their potential to influence legal decisions are areas that will likely require careful consideration as this technology evolves [1][2].
As countries grapple with the rapid advancement of AI, legal systems may need to adapt and establish new guidelines to address the use of such technology in court proceedings. The contrast between the US and Australian approaches underscores the need for a nuanced, jurisdiction-specific examination of these issues [1][2].
Thinking Machines Lab, a secretive AI startup founded by former OpenAI CTO Mira Murati, has raised $2 billion in seed funding, valuing the company at $10 billion. The startup's focus remains unclear, but it has attracted significant investor interest.
2 Sources
Startups
21 hrs ago
The ongoing Israel-Iran conflict has unleashed an unprecedented wave of AI-generated disinformation, marking a new phase in digital warfare. Millions of people are being exposed to fabricated images and videos, making it increasingly difficult to distinguish fact from fiction in real-time.
3 Sources
Technology
21 hrs ago
A UN survey unveils a stark contrast in AI trust levels between developing and developed nations, with China showing the highest confidence. The study highlights the complex global attitudes towards AI adoption and its perceived societal benefits.
2 Sources
Technology
21 hrs ago
Reddit is reportedly considering the use of World's iris-scanning orbs for user verification, aiming to balance anonymity with authenticity in response to AI-generated content concerns and regulatory pressures.
3 Sources
Technology
21 hrs ago
Nvidia has reportedly booked all available capacity at Wistron's new server plant in Taiwan through 2026, focusing on the production of Blackwell and Rubin AI servers. This move highlights the increasing demand for AI hardware and Nvidia's strategy to maintain its market leadership.
2 Sources
Business and Economy
13 hrs ago