Federal Judge Exposes ICE Agent's Use of ChatGPT for Use-of-Force Reports

Reviewed by Nidhi Govil


A federal judge revealed that an ICE agent used ChatGPT to write use-of-force reports during immigration raids in Chicago, raising serious concerns about accuracy, credibility, and privacy in law enforcement documentation.

Court Ruling Exposes Problematic AI Use in Law Enforcement

A federal judge's recent ruling has brought to light a concerning practice within Immigration and Customs Enforcement (ICE), revealing that at least one agent used ChatGPT to generate use-of-force reports during immigration operations in Chicago. The discovery, buried in a footnote of a 223-page court opinion, has sparked widespread concern among legal experts and AI researchers about the implications of artificial intelligence use in critical law enforcement documentation [1].

Source: Gizmodo

US District Judge Sara Ellis made the revelation while examining "Operation Midway Blitz," an immigration enforcement operation that resulted in more than 3,300 arrests and over 600 individuals held in ICE custody. The judge's analysis of body camera footage revealed significant discrepancies between what actually occurred and what was documented in official reports, leading her to question the reliability of the documentation process [2].

The Specific AI Misuse Incident

According to Judge Ellis's findings, body camera footage showed an agent asking ChatGPT to "compile a narrative for a report based off of a brief sentence about an encounter and several images." The agent then submitted the AI-generated output as an official use-of-force report, despite providing the system with extremely limited information [3]. This practice particularly troubled the judge, who noted that "to the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the body-worn camera footage" [1].

Expert Reactions and Legal Implications

Law enforcement and AI experts have characterized this incident as representing the worst possible application of artificial intelligence in policing. Ian Adams, an assistant criminology professor at the University of South Carolina who serves on an artificial intelligence task force with the Council on Criminal Justice, described the practice as a "nightmare scenario." Adams emphasized that providing ChatGPT with just "a single sentence and a few pictures" goes "against every bit of advice we have out there" [4].

The legal implications are particularly serious because courts rely on the "objective reasonableness" standard when evaluating use-of-force incidents. This standard requires detailed documentation of the specific officer's perspective and thought process during the encounter. "We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force," Adams explained [5].

Privacy and Security Concerns

Beyond accuracy issues, the incident raises significant privacy and security concerns. Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, pointed out that if the agent used the public version of ChatGPT, he likely "lost control of the images the moment he uploaded them," potentially making sensitive law enforcement materials part of the public domain and accessible to bad actors [2].

This privacy breach is particularly concerning given that the images likely contained evidence from active law enforcement operations and potentially identifiable information about individuals involved in the incidents.

Policy Gaps and Industry Response

The Department of Homeland Security has not responded to requests for comment about whether clear policies exist regarding AI use by agents. While DHS maintains a dedicated page about AI use at the agency and has deployed internal chatbots for routine tasks, the incident suggests these guidelines may not adequately address the use of external AI tools for critical documentation [1].

Kinsey noted that most law enforcement departments are "building the plane as it's being flown" when it comes to AI implementation, often waiting until problems arise before establishing proper guidelines. She advocated for proactive policies similar to those recently implemented in Utah and California, which require AI-generated police reports to be clearly labeled [3].

Contrast with Professional AI Tools

The incident stands in stark contrast to how established technology companies approach AI in law enforcement. Companies like Axon have developed AI components for body cameras that operate on closed systems and primarily use audio rather than visual inputs for report generation. These systems avoid visual analysis because, as experts note, "there are many different ways to describe a color, or a facial expression or any visual component," leading to inconsistent and potentially inaccurate results [4].
