Curated by THEOUTPOST
On Fri, 13 Dec, 12:02 AM UTC
3 Sources
[1]
ACLU condemns police use of AI to draft reports
COMMENT AI use by law enforcement to identify suspects is already problematic enough, but civil liberties groups have a new problem to worry about: the technology being employed to draft police reports.

The American Civil Liberties Union published a report this week detailing its concerns with law enforcement tech provider Axon's Draft One, a ChatGPT-based system that translates body camera recordings into drafts of police reports that officers need only edit and flesh out, ostensibly saving them time spent on desk work.

Given the importance of police reports to investigations and prosecutions, and the unreliability already noted in other forms of law enforcement AI, the ACLU has little faith that Draft One will avoid leading to civil rights violations and civil liberties issues.

"Police reports play a crucial role in our justice system," ACLU speech, privacy, and technology senior policy analyst and report author Jay Stanley wrote. "Concerns include the unreliability and biased nature of AI, evidentiary and memory issues when officers resort to this technology, and issues around transparency."

"In the end, we do not think police departments should use this technology," Stanley concluded.

It's worth pointing out that Axon doesn't have the best reputation when it comes to thinking critically about innovations: most of the company's ethics board resigned in 2022 when Axon announced plans to equip remote-control drones with tasers. Axon later paused the program following public blowback.

Draft One, however, has already been in the hands of US law enforcement agencies since its launch in April. It's not clear how many agencies are using Draft One, and Axon didn't respond to questions for this story.

This vulture can personally attest to the misery that is writing police reports.
In my time as a Military Policeman in the US Army, I spent plenty of shifts writing boring, formulaic, and necessarily granular reports on incidents, and it was easily the worst part of my job. I can definitely sympathize with police in the civilian world, who deal with far worse - and more frequent - crimes than I had to address on small bases in South Korea.

That said, I've also had a chance to play with modern AI and report on many of its shortcomings, and the ACLU seems to be on to something in Stanley's report. After all, if we can't even trust AI to write something as legally low-stakes as news or a bug report, how can we trust it to do decent police work?

That's one of the ACLU's prime concerns, especially given that report drafts are being compiled from body camera recordings that are often low-quality and hard to hear clearly.

"LLMs, while amazingly advanced at imitating human writing, are prone to unpredictable errors [that] may be compounded by transcription errors, including those resulting from garbled or otherwise unclear audio in a body camera video," Stanley noted.

In an ideal world, Stanley added, police would carefully review AI-generated drafts, but that very well may not be the case. The report notes that Draft One includes a feature that can intentionally insert silly sentences into AI-produced drafts as a test to ensure officers are thoroughly reviewing and revising them. However, Axon's CEO mentioned in a video about Draft One that most agencies are choosing not to enable this feature.

The ACLU also points out privacy issues with using a large language model to process body camera footage: that's sensitive police data, so who exactly is going to be handling it?
According to Axon's website, all Draft One data, including camera transcripts and draft reports, is "securely stored and managed within the Axon Network," but there's no indication of what that network entails.

Despite Microsoft's insistence that police aren't allowed to use Azure AI for face recognition, that restriction apparently doesn't extend to letting an AI write police reports: Axon indicated in an April press release that Draft One "was built on top of Microsoft's Azure OpenAI Service platform." Not exactly confidence-inspiring given Microsoft's and Azure's security track record of late.

"When a user (such as Axon) uploads a document or enters a prompt, both of those are transmitted to the LLM's operator (such as OpenAI), and what that operator does with that information is not subject to any legal privacy protections," the ACLU report states.

"Axon claims here that 'no customer [ie, police] data is going to OpenAI,' but normally in order to have an LLM like ChatGPT analyze a block of text such as the transcript of a bodycam video, you normally send that text to the company running the LLM, like OpenAI, so I'm not sure how that would work in the case of Draft One," Stanley told The Register in an emailed statement.

We've asked Axon where data is processed and stored, but again, we haven't heard back. If OpenAI isn't getting access, Microsoft may be, at the very least.

The ACLU is also concerned that using AI to write police reports lacks transparency, especially if the modified version of ChatGPT underlying Draft One has system prompts instructing it to behave in a certain way, which, like most LLM products, it likely does. "That's an example of the kind of element of an AI tool that ought to be public," the ACLU report argued.
"If it's not, a police AI system could well contain an instruction such as, 'Make sure that the narrative is told in a way that doesn't portray the officer as violating the Constitution.'" We've asked Axon for a look at Draft One's system prompts.

The ACLU also raises evidentiary concerns: human memory is malleable, and an officer's independent recollection of an incident is evidence in its own right.

"This elasticity of human memory is why we believe it's vital that officers give their statement about what took place in an incident before they are allowed to see any video or other evidence," the ACLU stated in the report.

Draft One sidesteps that safeguard by generating a draft report primarily from audio captured by body cameras, which officers ideally should not rely on exclusively for their own testimony. If an officer reviewing an AI-generated report notices, for example, that something illegal they did wasn't captured by their camera, they never need to testify to that fact in their report. Conversely, if an officer lacked probable cause to detain or arrest a suspect but their camera picked up audio in the background that justifies the action, such post-hoc probable cause could again disguise police misconduct.

"The body camera video and the police officer's memory are two separate pieces of evidence," Stanley wrote. "But if the police report is just an AI rehash of the body camera video, then you no longer have two separate pieces of evidence - you have one, plus a derivative summary of it."

Along with potentially helping police cover up misconduct or create after-the-fact justifications for illegal actions, the ACLU pointed to another issue identified by American University law professor Andrew Guthrie Ferguson: AI drafting makes officers less accountable for their actions. In a paper published earlier this year covering many of the same concerns, and cited as inspiration for the ACLU's report, Ferguson pointed out that requiring police officers to write reports can serve as a disciplinary check on their use of power.
Police have to justify the use of discretionary power in reports, which Ferguson and the ACLU pointed out serves to remind them of the legal limits of their authority.

"A shift to AI-drafted police reports would sweep away these important internal roles that reports play within police departments and within the minds of officers," the ACLU wrote. "This is an additional reason to be skeptical of this technology."

At the end of the day, some police are using this technology now, though Stanley believes its use is likely confined to only a few agencies around the US. Axon isn't the only company in this space, either: Policereports.ai and Truleo offer similar services.

The ACLU told us it's not aware of any cases in which an AI-drafted police report has been used to prosecute a defendant, so we have yet to see these reports stand up to the legal scrutiny of a courtroom. ®
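The data-flow and system-prompt concerns above can be made concrete with a minimal sketch. Draft One's actual prompts, endpoints, and request format are not public, so everything here is a hypothetical illustration of how a typical chat-completions-style request is assembled: the bodycam transcript is embedded verbatim in the request body next to a hidden system prompt, so whoever operates the model endpoint necessarily receives both.

```python
import json

# Hypothetical system prompt -- Draft One's real instructions are not public.
SYSTEM_PROMPT = (
    "You are a report-writing assistant. Summarize the transcript "
    "as a draft police report."
)

def build_report_request(transcript: str) -> str:
    """Package a bodycam transcript as a chat-completions-style request body.

    The transcript is embedded verbatim in the payload, so it is transmitted
    to whoever hosts the model endpoint (e.g. an Azure OpenAI deployment).
    """
    payload = {
        "messages": [
            # The system prompt steers the model's output but is invisible
            # to the report's readers unless the vendor discloses it.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        "temperature": 0.2,
    }
    return json.dumps(payload)

body = build_report_request("Officer: step out of the vehicle, please...")
sent = json.loads(body)
# Both the hidden instructions and the raw transcript leave the device together.
print(sent["messages"][0]["role"])                                  # system
print("step out of the vehicle" in sent["messages"][1]["content"])  # True
```

This is why the ACLU's transparency point bites: anything in the `system` message shapes every draft, yet nothing in the finished report reveals it was there.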
[2]
ACLU highlights the rise of AI-generated police reports -- what could go wrong?
It says the technology could exacerbate existing problems around bias and transparency.

The American Civil Liberties Union (ACLU) is sounding a warning about the use of AI in creating police reports, saying the tech could produce errors that affect evidence and court cases. The nonprofit highlighted the dangers in a white paper, following news that police departments in California are using a program called Draft One from Axon to transcribe body camera recordings and create a first draft of police reports.

One police department, in Fresno, said that it's using Draft One under a pilot program, but only for misdemeanor reports. "It's nothing more than a template," deputy chief Rob Beckwith told Industry Insider. "It's not designed to have an officer push a button and generate a report." He said that the department hasn't seen any errors with transcriptions and that it consulted with the Fresno County DA's office in training the force.

The ACLU, however, noted four issues with the use of AI. First, it said that AI is "quirky and unreliable and prone to making up fact... [and] is also biased." Second, it said that an officer's memories of an incident should be memorialized "before they are contaminated by an AI's body camera based storytelling." It added that if a police report is just an AI rehash of body camera video, certain facts might be omitted, and officers might even be able to lie about something illegal they did that wasn't captured on camera.

The third point was around transparency: the public needs to understand exactly how these systems work, based on analysis by independent experts, and defendants in criminal cases need to be able to interrogate the evidence, "yet much of the operation of these systems remains mysterious." Finally, the group noted that the use of AI transcriptions might remove accountability around the use of discretionary power.
"For these reasons, the ACLU does not believe police departments should allow officers to use AI to generate draft police reports," it said.
[3]
Rising use of generative AI by police is a threat to Americans' civil liberties, ACLU warns
The ACLU warns that AI-generated police reports are a civil rights black box.

The ACLU is sounding the alarm on the nation's police forces increasingly leaning on artificial intelligence. In a six-page white paper released Dec. 10, the nation's largest civil rights group warns that the implementation of popular generative AI tech, like chatbots and drafting tools, is a technological overstep and poses a threat to American civil liberties. The organization specifically singles out Draft One, a controversial generative AI tool that assists police officers in drafting reports based on body cam audio and relies on OpenAI's GPT-4 model.

Several police departments around the nation have gradually tested AI tools over the last year, including Draft One. That number is on the rise, with cities around the U.S. pitching generative AI features as a solution to budget and staffing constraints. Experts have criticized the implementation of this tech, citing the essential role police reports play across judicial decision making, from investigation and discovery to sentencing.

"The forcing function of writing out a justification, and then swearing to its truth, and publicizing that record to other legal professionals (prosecutors/judges) is a check on police power," wrote law expert Andrew Guthrie Ferguson in one of the first law reviews of AI-drafted police reports. His paper is cited by the ACLU in its white paper.

The ACLU outlines four major areas of concern in its white paper, including the accountability function of human-written reports that Ferguson describes. In its analysis, the organization emphasizes the uncertainty stoked by biases and hallucinations inherent to the tech itself, and questions the transparency of these processes to the public at large - not to mention their data privacy implications.
According to the organization, relying more on the interpretation of generative AI tools than on human memory and a police officer's subjective observations is to the detriment of a fair judicial process. If AI is the next frontier for police officers, the ACLU urges, it should be used only after human memory has been recorded. AI tools could transcribe audio-recorded verbal narratives created by the officers involved, the organization explains, which could be submitted in tandem for review.

But, contrary to the recommendations of civil rights groups, AI's leaders are steadily investing in police and military applications of their technology.

"Police reports play a crucial role in our justice system. They are central to the criminal proceedings that determine people's innocence, guilt, and punishment, and are often the only official account of what took place during a particular incident," the ACLU writes. "AI report-writing technology removes important human elements from police procedures and is too new, too untested, too unreliable, too opaque, and too biased to be inserted into our criminal justice system."
The American Civil Liberties Union (ACLU) has raised alarm over the increasing use of AI in drafting police reports, highlighting potential threats to civil liberties and the integrity of the justice system.
The American Civil Liberties Union (ACLU) has published a report highlighting significant concerns about the use of artificial intelligence (AI) in drafting police reports. This development comes as law enforcement agencies across the United States begin adopting AI tools like Axon's Draft One, which uses ChatGPT to translate body camera recordings into initial report drafts [1][2].
Axon's Draft One, launched in April, has already been deployed by several police departments, including one in Fresno, California. The system aims to save officers time by automating the report-writing process, a task often considered tedious and time-consuming [1][3].
The ACLU's report, authored by senior policy analyst Jay Stanley, outlines several critical issues:
Reliability and Bias: AI systems are prone to unpredictable errors and biases, which could be exacerbated by poor-quality audio from body cameras [1].
Evidentiary and Memory Issues: The ACLU argues that officers' memories should be recorded before being potentially influenced by AI-generated narratives [2].
Transparency Problems: The inner workings of these AI systems remain largely opaque, raising concerns about their scrutiny in legal proceedings [2][3].
Privacy Concerns: Questions arise about the handling and storage of sensitive police data processed by these AI systems [1].
Police reports play a crucial role in criminal proceedings, often serving as the primary official account of incidents. The ACLU warns that AI-generated reports could omit important details or even provide cover for potential misconduct not captured on camera [2][3].
While some police departments, like the one in Fresno, claim to use Draft One only for misdemeanor reports and as a template rather than a final product, concerns persist about the broader implications of this technology [2].
The ACLU strongly advises against the use of AI for generating draft police reports. Instead, they suggest that if AI must be used, it should only transcribe audio-recorded verbal narratives created by officers, which could be submitted alongside human-written reports for review [3].
As AI continues to advance, the debate over its role in law enforcement and the justice system is likely to intensify, with civil liberties groups calling for careful consideration of the technology's impact on transparency, accountability, and fairness in policing.
Some US police departments are experimenting with AI chatbots to write crime reports, aiming to save time and improve efficiency. However, this practice has sparked debates about accuracy, racial bias, and the potential impact on the justice system.
11 Sources
An exploration of how AI is impacting the criminal justice system, highlighting both its potential benefits and significant risks, including issues of bias, privacy, and the challenges of deepfake evidence.
2 Sources
As the volume of police body camera footage grows, human reviewers struggle to keep up. Law enforcement agencies are turning to AI for assistance, sparking debates about privacy and accountability.
2 Sources
A Washington Post investigation reveals widespread use of facial recognition technology by police departments, often without disclosure to defendants, raising concerns about transparency and potential false arrests.
2 Sources
The International Association of Chiefs of Police conference showcases AI's growing role in law enforcement, from virtual reality training to integrated data systems, highlighting the push for widespread adoption.
2 Sources
© 2025 TheOutpost.AI All rights reserved