7 Sources
[1]
Judge Says ICE Used ChatGPT to Write Use-of-Force Reports
Last week, a judge handed down a 223-page opinion that lambasted the Department of Homeland Security for how it has carried out raids targeting undocumented immigrants in Chicago. Buried in a footnote were two sentences revealing that at least one member of law enforcement used ChatGPT to write a report meant to document how the officer used force against an individual.

The ruling, written by US District Judge Sara Ellis, took issue with how members of Immigration and Customs Enforcement and other agencies conducted themselves while carrying out the so-called "Operation Midway Blitz," which saw more than 3,300 people arrested and more than 600 held in ICE custody, and which included repeated violent conflicts with protesters and citizens. Those incidents were supposed to be documented by the agencies in use-of-force reports, but Judge Ellis noted that there were often inconsistencies between what appeared on tape from the officers' body-worn cameras and what ended up in the written record, leading her to deem the reports unreliable.

More than that, though, she said at least one report was not even written by an officer. Instead, per her footnote, body camera footage revealed that an agent "asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images." The officer reportedly submitted ChatGPT's output as the report, despite having provided the tool with extremely limited information, leaving it to fill in the rest with assumptions.

"To the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the [body-worn camera] footage," Ellis wrote in the footnote. Per the Associated Press, it is unknown whether the Department of Homeland Security has a clear policy regarding the use of generative AI tools to create reports.
One would assume that, at the very least, it is far from best practice, considering generative AI tools are prone to filling gaps with fabricated details when given too little information to work from. The DHS does have a dedicated page regarding the use of AI at the agency, and it has deployed its own chatbot to help agents complete "day-to-day activities" after test runs with commercially available chatbots, including ChatGPT. But the footnote does not indicate that the agency's internal tool is what the officer used; it suggests the person filling out the report went to ChatGPT directly and uploaded the information to complete the report. No wonder one expert told the Associated Press this is the "worst case scenario" for AI use by law enforcement.
[2]
ICE agents using AI 'may explain the inaccuracy of these reports,' judge writes, noting a body cam video shows an agent asking ChatGPT for help | Fortune
Tucked in a two-sentence footnote in a voluminous court opinion, a federal judge recently called out immigration agents using artificial intelligence to write use-of-force reports, raising concerns that it could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and ensuing protests.

U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines the agents' credibility and "may explain the inaccuracy of these reports." She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief sentence of description and several images. The judge noted factual discrepancies between the official narrative about those law enforcement responses and what body camera footage showed.

But experts say the use of AI to write a report that depends on an officer's specific perspective, without using the officer's actual experience, is the worst possible use of the technology and raises serious concerns about accuracy and privacy. Law enforcement agencies across the country have been grappling with how to create guardrails that allow officers to use increasingly available AI technology while maintaining accuracy, privacy and professionalism. Experts said the example recounted in the opinion didn't meet that challenge.

"What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures -- if that's true, if that's what happened here -- that goes against every bit of advice we have out there. It's a nightmare scenario," said Ian Adams, assistant criminology professor at the University of South Carolina, who serves on a task force on artificial intelligence at the Council on Criminal Justice, a nonpartisan think tank.
The Department of Homeland Security did not respond to requests for comment, and it was unclear if the agency had guidelines or policies on the use of AI by agents. The body camera footage cited in the order has not yet been released.

Adams said few departments have put policies in place, but those that have often prohibit the use of predictive AI when writing reports justifying law enforcement decisions, especially use-of-force reports. Courts have established a standard referred to as objective reasonableness when considering whether a use of force was justified, relying heavily on the perspective of the specific officer in that specific scenario. "We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force," Adams said. "That is the worst case scenario, other than explicitly telling it to make up facts, because you're begging it to make up facts in this high-stakes situation."

Besides raising concerns about an AI-generated report inaccurately characterizing what happened, the use of AI also raises potential privacy concerns. Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, said if the agent in the order was using a public ChatGPT version, he probably didn't understand that he lost control of the images the moment he uploaded them, allowing them to become part of the public domain and potentially be used by bad actors.

Kinsey said that, from a technology standpoint, most departments are building the plane as it's being flown when it comes to AI. She said it's often a pattern in law enforcement to wait until new technologies are already in use -- and, in some cases, until mistakes have been made -- before talking about putting guidelines or policies in place. "You would rather do things the other way around, where you understand the risks and develop guardrails around the risks," Kinsey said.
"Even if they aren't studying best practices, there's some lower-hanging fruit that could help. We can start from transparency." Kinsey said while federal law enforcement considers how the technology should or should not be used, it could adopt a policy like those recently put in place in Utah and California, where police reports or communications written using AI have to be labeled.

The photographs the officer used to generate a narrative also caused accuracy concerns for some experts. Well-known tech companies like Axon have begun offering AI components with their body cameras to assist in writing incident reports. Those AI programs marketed to police operate on a closed system and largely limit themselves to using audio from body cameras to produce narratives, because the companies have said programs that attempt to use visuals are not effective enough for use.

"There are many different ways to describe a color, or a facial expression, or any visual component. You could ask any AI expert and they would tell you prompts return very different results between different AI applications, and that gets complicated with a visual component," said Andrew Guthrie Ferguson, a law professor at George Washington University Law School. "There's also a professionalism question. Are we OK with police officers using predictive analytics?" he added. "It's about what the model thinks should have happened, but might not be what actually happened. You don't want it to be what ends up in court, to justify your actions."
[3]
Illinois judge's footnote on ICE agents using AI raises accuracy and privacy concerns
[4]
An immigration agent's use of ChatGPT for reports is raising alarms. Experts explain why
[5]
Judge's footnote on immigration agents using AI raises accuracy and privacy concerns
[6]
Judge's Footnote on Immigration Agents Using AI Raises Accuracy and Privacy Concerns
[7]
US federal judge pulls up immigration agents for using ChatGPT to write use-of-force reports
A federal judge criticized immigration agents for using AI like ChatGPT to write use-of-force reports, citing potential inaccuracies and privacy concerns. Experts warn this practice, especially with limited input, undermines officer credibility and the legal standard of "objective reasonableness" in use-of-force justifications. Tucked in a two-sentence footnote in a voluminous court opinion, a federal judge recently called out immigration agents using artificial intelligence to write use-of-force reports, raising concerns that it could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and ensuing protests. U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines the agents' credibility and "may explain the inaccuracy of these reports." She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief sentence of description and several images. Also Read| ICE opens door to $281 mn payouts as Trump hands migrant tracking to private contractors The judge noted factual discrepancies between the official narrative about those law enforcement responses and what body camera footage showed. But experts say the use of AI to write a report that depends on an officer's specific perspective without using an officer's actual experience is the worst possible use of the technology and raises serious concerns about accuracy and privacy. An officer's needed perspective Law enforcement agencies across the country have been grappling with how to create guardrails that allow officers to use the increasingly available AI technology while maintaining accuracy, privacy and professionalism. Experts said the example recounted in the opinion didn't meet that challenge. 
"What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures - if that's true, if that's what happened here - that goes against every bit of advice we have out there. It's a nightmare scenario," said Ian Adams, assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council for Criminal Justice, a nonpartisan think tank. The Department of Homeland Security did not respond to requests for comment, and it was unclear if the agency had guidelines or policies on the use of AI by agents. The body camera footage cited in the order has not yet been released. Adams said few departments have put policies in place, but those that have often prohibit the use of predictive AI when writing reports justifying law enforcement decisions, especially use-of-force reports. Courts have established a standard referred to as objective reasonableness when considering whether a use of force was justified, relying heavily on the perspective of the specific officer in that specific scenario. "We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force," Adams said. "That is the worst case scenario, other than explicitly telling it to make up facts, because you're begging it to make up facts in this high-stakes situation." Private information and evidence Besides raising concerns about an AI-generated report inaccurately characterizing what happened, the use of AI also raises potential privacy concerns. Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, said if the agent in the order was using a public ChatGPT version, he probably didn't understand he lost control of the images the moment he uploaded them, allowing them to be part of the public domain and potentially used by bad actors. 
Kinsey said that, from a technology standpoint, most departments are building the plane as it's being flown when it comes to AI. She said it's often a pattern in law enforcement to wait until new technologies are already in use, and in some cases until mistakes have been made, before talking about putting guidelines or policies in place.

"You would rather do things the other way around, where you understand the risks and develop guardrails around the risks," Kinsey said. "Even if they aren't studying best practices, there's some lower hanging fruit that could help. We can start from transparency."

Kinsey said that while federal law enforcement considers how the technology should or should not be used, it could adopt a policy like those recently put in place in Utah or California, where police reports or communications written using AI have to be labeled.

Careful use of new tools

The photographs the officer used to generate a narrative also raised accuracy concerns for some experts. Well-known tech companies like Axon have begun offering AI components with their body cameras to assist in writing incident reports. Those AI programs marketed to police operate on closed systems and largely limit themselves to using audio from body cameras to produce narratives, because the companies have said programs that attempt to use visuals are not effective enough for use.

"There are many different ways to describe a color, or a facial expression or any visual component. You could ask any AI expert and they would tell you prompts return very different results between different AI applications, and that gets complicated with a visual component," said Andrew Guthrie Ferguson, a law professor at George Washington University Law School.

"There's also a professionalism question. Are we OK with police officers using predictive analytics?" he added. "It's about what the model thinks should have happened, but might not be what actually happened. You don't want it to be what ends up in court, to justify your actions."
A federal judge revealed that an ICE agent used ChatGPT to write a use-of-force report during immigration raids in Chicago, raising serious concerns about accuracy, credibility, and privacy in law enforcement documentation.
A federal judge's recent ruling has brought to light a concerning practice within Immigration and Customs Enforcement (ICE), revealing that at least one agent used ChatGPT to generate a use-of-force report during immigration operations in Chicago. The discovery, buried in a footnote of a 223-page court opinion, has sparked widespread concern among legal experts and AI researchers about the implications of artificial intelligence use in critical law enforcement documentation [1].
US District Judge Sara Ellis made the revelation while examining "Operation Midway Blitz," an immigration enforcement operation that resulted in more than 3,300 arrests and over 600 individuals held in ICE custody. The judge's analysis of body camera footage revealed significant discrepancies between what actually occurred and what was documented in official reports, leading her to question the reliability of the documentation process [2].

According to Judge Ellis's findings, body camera footage showed an agent asking ChatGPT to "compile a narrative for a report based off of a brief sentence about an encounter and several images." The agent then submitted the AI-generated output as an official use-of-force report, despite providing the system with extremely limited information [3]. This practice particularly troubled the judge, who noted that "to the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the body-worn camera footage" [1].
Law enforcement and AI experts have characterized this incident as representing the worst possible application of artificial intelligence in policing. Ian Adams, an assistant criminology professor at the University of South Carolina who serves on an AI task force through the Council on Criminal Justice, described the practice as a "nightmare scenario." Adams emphasized that providing ChatGPT with just "a single sentence and a few pictures" goes "against every bit of advice we have out there" [4].

The legal implications are particularly serious because courts rely on the "objective reasonableness" standard when evaluating use-of-force incidents. This standard requires detailed documentation of the specific officer's perspective and thought process during the encounter. "We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force," Adams explained [5].
Beyond accuracy issues, the incident raises significant privacy and security concerns. Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, pointed out that if the agent used the public version of ChatGPT, he likely "lost control of the images the moment he uploaded them," potentially making sensitive law enforcement materials part of the public domain and accessible to bad actors [2].
.This privacy breach is particularly concerning given that the images likely contained evidence from active law enforcement operations and potentially identifiable information about individuals involved in the incidents.
The Department of Homeland Security has not responded to requests for comment about whether clear policies exist regarding AI use by agents. While DHS maintains a dedicated page about AI use at the agency and has deployed internal chatbots for routine tasks, the incident suggests these guidelines may not adequately address the use of external AI tools for critical documentation [1].

Kinsey noted that most law enforcement departments are "building the plane as it's being flown" when it comes to AI implementation, often waiting until problems arise before establishing proper guidelines. She advocated for proactive policies similar to those recently implemented in Utah and California, which require AI-generated police reports to be clearly labeled [3].
The incident stands in stark contrast to how established technology companies approach AI in law enforcement. Companies like Axon have developed AI components for body cameras that operate on closed systems and primarily use audio rather than visual inputs for report generation. These systems avoid visual analysis because, as experts note, "there are many different ways to describe a color, or a facial expression or any visual component," leading to inconsistent and potentially inaccurate results [4].