3 Sources
[1]
Concerns about AI-written police reports spur states to regulate the emerging practice
Andrew Guthrie Ferguson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Police are getting a boost from artificial intelligence, with algorithms now able to draft police reports in minutes. The technology promises to make police reports more accurate and comprehensive, as well as save officers time. The idea is simple: Take the audio transcript from a body camera worn by a police officer and use the predictive text capabilities of large language models to write a formal police report that could become the basis of a criminal prosecution. Mirroring other fields that have allowed ChatGPT-like systems to write on behalf of people, police can now get an AI assist to automate much-dreaded paperwork.

The catch is that instead of writing the first draft of your college English paper, this document can determine someone's liberty in court. An error, omission or hallucination can risk the integrity of a prosecution or, worse, justify a false arrest. While police officers must sign off on the final version, the bulk of the text, structure and formatting is AI-generated.

Who - or what - wrote it?

Up until October 2025, only Utah had required that police even admit they were using an AI assistant to draft their reports. On Oct. 10, that changed when California became the second state to require transparent notice that AI was used to draft a police report. Governor Gavin Newsom signed SB 524 into law, requiring all AI-assisted police reports to be marked as being written with the help of AI. The law also requires law enforcement agencies to maintain an audit trail that identifies the person who used AI to create a report and any video and audio footage used in creating the report.
It also requires agencies to retain the first draft created with AI for as long as the official report is retained, and prohibits a draft created with AI from constituting an officer's official statement. The law is a significant milestone in the regulation of AI in policing, but its passage also signifies that AI is going to become a major part of the criminal justice system.

If you are sitting behind bars based on a police report, you might have some questions. The first question that Utah and California now answer is "Did AI write this?" Basic transparency that an algorithm helped write an arrest report might seem the minimum a state could do before locking someone up. Yet even though leading police technology companies like Axon recommend such disclaimers be included in their reports, they are not required. Police departments in Lafayette, Indiana, and Fort Collins, Colorado, were intentionally turning off the transparency defaults on the AI report generators, according to an investigative news report. Similarly, police chiefs using Axon's Draft One products did not even know which reports were drafted by AI and which were not, because officers were simply cutting and pasting the AI narrative into reports they indicated they wrote themselves. The practice bypassed all AI disclaimers and audit trails.

Many questions

Transparency is only the first step. Understanding the risks of relying on AI for police reports is the second. Technological questions arise about how the AI models were trained and the possible biases baked into a reliance on past police reports. Transcription questions arise about errors, omissions and mistranslations because police stops take place in chaotic, loud and frequently emotional contexts amid a host of languages. Finally, trial questions arise about how an attorney is supposed to cross-examine an AI-generated document, or whether the audit logs need to be retained for expert analysis or turned over to the defense.
Risks and consequences

The significance of the California law is not simply that the public needs to be aware of AI risks, but that California is embracing AI risk in policing. I believe it is likely that people will lose their liberty based on a document that was largely generated by AI, without the hard questions satisfactorily answered. Worse, in a criminal justice system that relies on plea bargaining for more than 95% of cases and is overwhelmingly dominated by misdemeanor offenses, there may never be a chance to check whether the AI report accurately captured the scene. In fact, in many of those lower-level cases, the police report will be the basis of charging decisions, pretrial detention, motions, plea bargains, sentencing and even probation revocations.

I believe that a criminal legal system that relies so heavily on police reports has a responsibility to ensure that police departments are embracing not just transparency but justice. At a minimum, this means more states following Utah and California in passing laws regulating the technology, and police departments following the best practices recommended by the technology companies. But even that may not be enough without critical assessments by courts, legal experts and defense lawyers. The future of AI policing is just starting, but the risks are already here.
[2]
Concerns about AI-written police reports spur states to regulate the emerging practice
This article is republished from The Conversation under a Creative Commons license. Read the original article.
[3]
As AI writes more police reports, states are raising red flags
The idea is simple: Take the audio transcript from a body camera worn by a police officer and use the predictive text capabilities of large language models to write a formal police report that could become the basis of a criminal prosecution. Mirroring other fields that have allowed ChatGPT-like systems to write on behalf of people, police can now get an AI assist to automate much dreaded paperwork. The catch is that instead of writing the first draft of your college English paper, this document can determine someone's liberty in court. An error, omission, or hallucination can risk the integrity of a prosecution or, worse, justify a false arrest. While police officers must sign off on the final version, the bulk of the text, structure, and formatting is AI-generated. Up until October 2025, only Utah had required that police even admit they were using an AI assistant to draft their reports. On Oct. 10, that changed when California became the second state to require transparent notice that AI was used to draft a police report.
California joins Utah in mandating transparency for AI-assisted police reports. The move highlights the increasing use of AI in law enforcement and raises questions about its impact on the criminal justice system.
In a significant development at the intersection of artificial intelligence and law enforcement, police departments are now utilizing AI algorithms to draft police reports in a matter of minutes [1]. This technological advancement promises to enhance the accuracy and comprehensiveness of police reports while saving officers valuable time [2].
Source: Fast Company
The AI-assisted report writing process is straightforward: audio transcripts from body cameras worn by police officers are fed into large language models, which then generate formal police reports [3]. While this automation mirrors similar practices in other fields, the stakes in law enforcement are considerably higher. These AI-generated documents can potentially determine an individual's liberty in court, making the accuracy and reliability of the content crucial [1].
Source: The Conversation
Recognizing the potential risks and the need for transparency, California has become the second state after Utah to regulate the use of AI in police report writing. On October 10, 2025, Governor Gavin Newsom signed SB 524 into law [2]. This legislation mandates that all AI-assisted police reports must be clearly marked as such, ensuring transparency about the involvement of AI in the report generation process [1]. The new law introduces several important requirements [2]: AI-assisted reports must be marked as written with the help of AI; agencies must maintain an audit trail identifying the person who used AI to create a report and any video and audio footage used in creating it; the first draft created with AI must be retained for as long as the official report is retained; and a draft created with AI cannot constitute an officer's official statement.
Despite the potential benefits, the use of AI in police report writing raises several concerns:

Technological biases: Questions arise about how AI models are trained and the potential biases inherent in relying on past police reports [1].

Transcription accuracy: The chaotic and emotional nature of police encounters may lead to errors, omissions, or mistranslations in AI-generated reports [2].

Legal implications: The use of AI-generated reports raises questions about how attorneys can cross-examine these documents and whether audit logs should be made available to the defense [1].
Source: Phys.org
As AI becomes increasingly integrated into the criminal justice system, experts argue that more comprehensive regulation and critical assessment are necessary. With over 95% of criminal cases resolved through plea bargaining, there may be limited opportunities to verify the accuracy of AI-generated reports in many instances [2].

The introduction of AI in police report writing marks a significant shift in law enforcement practices. While it offers potential benefits in efficiency and comprehensiveness, it also introduces new challenges that require careful consideration and regulation to ensure justice and transparency in the criminal legal system.
Summarized by Navi