Curated by THEOUTPOST
On Wed, 20 Nov, 12:10 AM UTC
2 Sources
[1]
AI and criminal justice: How AI can support -- not undermine -- justice
[2]
AI and criminal justice: How AI can support -- not undermine -- justice
University of British Columbia provides funding as a member of The Conversation CA-FR.

Interpol Secretary General Jürgen Stock recently warned that artificial intelligence (AI) is facilitating crime on an "industrial scale" using deepfakes, voice simulation and phony documents. Police around the world are also turning to AI tools such as facial recognition, automated licence plate readers, gunshot detection systems, social media analysis and even police robots. AI use by lawyers is similarly "skyrocketing" as judges adopt new guidelines for using AI.

While AI promises to transform criminal justice by increasing operational efficiency and improving public safety, it also comes with risks related to privacy, accountability, fairness and human rights. Concerns about AI bias and discrimination are well documented. Without safeguards, AI risks undermining the very principles of truth, fairness and accountability that our justice system depends on.

In a recent report from the University of British Columbia's School of Law, Artificial Intelligence & Criminal Justice: A Primer, we highlighted the myriad ways AI is already affecting people in the criminal justice system. Here are a few examples that show the significance of this evolving phenomenon.

The promises and perils of police using AI

In 2020, an investigation by The New York Times exposed the sweeping reach of Clearview AI, an American company that had built a facial recognition database from more than three billion images scraped from the internet, including social media, without users' consent. Policing agencies worldwide that used the program, including several in Canada, faced public backlash. Regulators in multiple countries found the company had violated privacy laws, and it was asked to cease operations in Canada.

Clearview AI continues to operate, citing success stories: helping to exonerate a wrongfully convicted person by identifying a witness at a crime scene, identifying someone who exploited a child (which led to the child's rescue), and even detecting potential Russian soldiers seeking to infiltrate Ukrainian checkpoints.

There are longstanding and persistent concerns, however, that facial recognition is prone to false positives and other errors, particularly when identifying Black and other racialized people, exacerbating systemic racism, bias and discrimination.

Some Canadian law enforcement agencies caught up in the Clearview AI controversy have since responded with new measures, such as the Toronto Police Service's policies on AI use and the RCMP's transparency program. Others, like the Vancouver Police Department, promised to develop policies but have not, while at the same time seeking access to city traffic camera footage. Regulating police use of AI is a pressing concern if we are to navigate its promise and perils safely.
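To see why false positives matter at this scale, a rough back-of-the-envelope calculation helps. The sketch below uses hypothetical false-match rates (they are not measured figures for Clearview AI or any real system); the point is that tiny per-comparison error rates multiply into large numbers of wrong leads when a probe image is searched against billions of records, and that any demographic gap in error rates concentrates those wrong leads on one group.

```python
# Back-of-the-envelope: false matches when searching a huge face database.
# The false-match rates below are hypothetical, chosen only for illustration.

DATABASE_SIZE = 3_000_000_000  # images, per reporting on Clearview AI

# One probe image compared against every record at a 1-in-10,000
# false-match rate still yields an enormous number of wrong "hits":
false_match_rate = 0.0001
print(f"Expected false matches per search: {DATABASE_SIZE * false_match_rate:,.0f}")
# -> 300,000 wrong candidates for a single search, before any human review.

# Audits such as NIST's face recognition vendor tests have found higher
# false-match rates for some demographic groups in some algorithms.
# Even a modest gap concentrates the errors on one group:
hypothetical_rates = {"group A": 0.0001, "group B": 0.0003}
for group, rate in hypothetical_rates.items():
    print(f"{group}: {DATABASE_SIZE * rate:,.0f} expected false matches")
```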
Deepfake evidence in court

Another area where AI presents challenges in the criminal justice system is deepfake evidence, including AI-generated documents, audio, photos and videos. The phenomenon has already led to cases where one party alleges the other party's evidence is a deepfake, casting doubt on it even when it is legitimate. This has been dubbed the "liar's dividend."

A high-profile example of deepfake allegations arose in the case of Joshua Doolin, who faced charges related to the January 6, 2021, insurrection at the U.S. Capitol, for which he was ultimately convicted. Doolin's attorney contended that prosecutors should be required to authenticate video evidence sourced from YouTube, raising concerns about the potential use of deepfakes.

Jurors could be especially prone to doubts about potential deepfakes, given high-profile deepfake incidents involving celebrities or their own use of AI technologies. Judges are also sounding the alarm about the challenge of detecting increasingly sophisticated deepfake evidence admitted in court. There are concerns that a wrongful conviction or acquittal could result.

I have personally heard from a number of legal practitioners, including judges and lawyers, that they are struggling to address this issue. It is a frequent subject at legal seminars and judicial training events. Until appellate courts provide clear guidance on the matter, legal uncertainty will remain.
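One modest, well-established building block for authenticating digital evidence is cryptographic hashing: record a file's digest when the evidence is collected, then re-verify it before trial. Below is a minimal sketch using Python's standard hashlib module; the file path is a hypothetical example. Note what this does and does not prove: a matching digest shows the file has not been altered since collection, but it cannot show that the recording was genuine in the first place, which still requires provenance evidence such as metadata or witness testimony.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute a file's SHA-256 digest, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At collection: record the digest in the chain-of-custody log.
# (The path below is a hypothetical example.)
# collected = sha256_of_file("evidence/video_exhibit_12.mp4")

# Before trial: recompute and confirm the file is byte-for-byte unchanged.
# assert sha256_of_file("evidence/video_exhibit_12.mp4") == collected
```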
Risk assessment algorithms

Imagine that an AI algorithm you couldn't understand deemed you a flight risk or at high risk of re-offending, and that a judge or parole board used that assessment to deny your release from custody. This isn't dystopian fiction, but reality in many parts of the world.

Automated algorithmic decision-making is already used in various countries for decisions on access to government benefits and housing, assessing domestic violence risk, making immigration determinations, and a host of criminal justice applications from bail decisions to sentencing, prison classification and parole outcomes.

People affected by these algorithms typically cannot gain access to the underlying proprietary software. Even if they could, the systems are often "black boxes" that are impossible to penetrate. Worse, research into some algorithms has found serious concerns about racial bias. A key reason is that AI models are trained on data from societies in which systemic racism is already embedded. "Garbage in, garbage out" is the adage commonly used to explain this.

Fostering innovation while safeguarding justice

The need for legal and ethical AI in high-risk criminal justice settings is paramount, and there is undoubtedly a need for new laws, regulations and policies specifically designed to address these challenges. The European Union's AI Act bans certain uses of AI, such as untargeted scraping of images from the internet or CCTV, real-time remote biometric identification in public (with limited exceptions), and assessing recidivism risk based solely on profiling or personality traits.

Canada's laws have not kept pace, and those that have been proposed have shortcomings. At the federal level, Bill C-27 (which includes an Artificial Intelligence and Data Act) has been stuck in committee for over a year and is unlikely to be adopted by this Parliament. Ontario's proposed AI legislation, Bill 194, would exempt police from its application and fails to include provisions ensuring respect for human rights.

Canada should vigorously enforce existing laws and policies that already apply to AI use by public authorities. The Canadian Charter of Rights and Freedoms includes numerous fundamental freedoms, legal rights and equality protections that bear directly on these issues. Likewise, privacy legislation, human rights legislation, consumer protection legislation and tort law all set important standards for AI use.

The potential impact of AI on people in the criminal justice system is immense. Without thoughtful and rigorous oversight, it risks undermining public confidence in the justice system and perpetuating existing problems, with real human consequences. Fortunately, Canada has not gone as far down the road of widespread AI adoption in criminal justice as some other countries. We still have time to get ahead of it. Policymakers, courts and civil society must act swiftly to ensure that AI serves justice rather than undermines it.
An exploration of how AI is impacting the criminal justice system, highlighting both its potential benefits and significant risks, including issues of bias, privacy, and the challenges of deepfake evidence.
Artificial Intelligence (AI) is rapidly transforming the landscape of criminal justice, offering both promising advancements and significant challenges. Interpol Secretary General Jürgen Stock has warned that AI is facilitating crime on an "industrial scale" through deepfakes, voice simulation, and forged documents [1]. This technological revolution is not only empowering criminals but also changing how law enforcement and the justice system operate.
One of the most controversial AI applications in law enforcement is facial recognition technology. The case of Clearview AI exemplifies both the potential and pitfalls of this technology. Clearview AI built a massive facial recognition database by scraping over three billion images from the internet without user consent [1][2].
While Clearview AI claims successes in exonerating the wrongfully convicted and rescuing exploited children, concerns persist about false positives and racial bias. The technology has faced backlash and legal challenges in multiple countries, including Canada, where it was asked to cease operations [1][2].
AI-generated deepfakes are posing new challenges for the justice system. The emergence of sophisticated fake evidence has led to a phenomenon known as the "liar's dividend," where legitimate evidence can be cast into doubt by claims of it being a deepfake [1][2].
A high-profile example is the case of Joshua Doolin, charged in relation to the January 6, 2021, U.S. Capitol insurrection. His attorney argued for the need to authenticate YouTube-sourced video evidence, highlighting the growing concern about potential deepfakes in court [1][2].
Perhaps one of the most concerning applications of AI in criminal justice is its use in risk assessment algorithms. These systems are being employed to make critical decisions about bail, sentencing, prison classification, and parole [1][2].
The opacity of these algorithms, often protected as proprietary software, raises serious questions about transparency and accountability. Moreover, research has uncovered racial biases in some of these systems, attributed to training data that reflects existing societal biases [1][2].
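The "garbage in, garbage out" dynamic can be made concrete with a toy example. The sketch below, with entirely invented numbers, shows how a naive model fit to arrest records inherits the bias of uneven policing: two neighbourhoods with identical underlying offending rates end up with very different risk scores simply because one is patrolled more heavily.

```python
# Toy illustration of "garbage in, garbage out" in risk scoring.
# All numbers are invented; this models no real risk instrument.

# Two neighbourhoods with the same true rate of offending...
TRUE_OFFENDING_RATE = 0.05

# ...but one is patrolled more heavily, so offences there are three
# times as likely to end up recorded as arrests.
detection_rate = {"neighbourhood A": 0.2, "neighbourhood B": 0.6}

# A naive "risk model" fit to arrest records simply learns each
# neighbourhood's historical arrest rate as its risk score.
learned_risk = {n: TRUE_OFFENDING_RATE * d for n, d in detection_rate.items()}

for neighbourhood, score in learned_risk.items():
    print(f"{neighbourhood}: learned risk score {score:.3f}")
# neighbourhood A: learned risk score 0.010
# neighbourhood B: learned risk score 0.030
# Identical underlying behaviour, yet B's residents score three times
# "riskier": the bias in the training data becomes the prediction.
```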
The rapid integration of AI in criminal justice has outpaced regulatory frameworks. While the European Union's AI Act bans certain high-risk AI applications, countries like Canada are struggling to keep up legislatively [1].
There is a growing consensus on the need for new laws, regulations, and policies specifically designed to address the challenges posed by AI in criminal justice. These measures must balance the potential benefits of AI with the fundamental principles of fairness, accountability, and human rights [1][2].
As AI continues to evolve, the criminal justice system faces the complex task of harnessing its potential while safeguarding against its risks. The path forward requires careful consideration, robust regulation, and ongoing dialogue between technologists, legal professionals, and policymakers to ensure that AI supports, rather than undermines, justice.
The American Civil Liberties Union (ACLU) has raised alarm over the increasing use of AI in drafting police reports, highlighting potential threats to civil liberties and the integrity of the justice system.
3 Sources
A critical examination of AI's use in social services, highlighting potential benefits and risks, with a focus on preventing trauma and ensuring responsible implementation.
2 Sources
A study of 170 local governments worldwide reveals widespread AI adoption in public services without proper oversight, leading to potential ethical violations and public unawareness.
2 Sources
As artificial intelligence continues to evolve at an unprecedented pace, some experts tout its potential to revolutionize industries while others warn of an approaching technological singularity. Reports of unusual AI behaviors raise concerns about the widespread adoption of a still largely misunderstood technology.
2 Sources
Some US police departments are experimenting with AI chatbots to write crime reports, aiming to save time and improve efficiency. However, this practice has sparked debates about accuracy, racial bias, and the potential impact on the justice system.
11 Sources