3 Sources
[1]
What to know about how a suspect in the killing of two Florida students used ChatGPT
TAMPA, Fla. (AP) -- The investigation into the deaths of two University of South Florida doctoral students took a twist this weekend when prosecutors said that the suspect asked ChatGPT about body disposal in the lead-up to the students' disappearance, raising questions about the role tech companies have in preventing the misuse of powerful chatbots.

It wasn't long after University of South Florida students Zamil Limon and his girlfriend Nahida Bristy went missing on April 16 that law enforcement began to suspect Limon's roommate, Hisham Abugharbieh, 26, of killing both Bangladeshi students. Limon's body was found Friday under a bridge, and a second body, found in a waterway near Limon's, was recovered but has not been identified.

Now, court records filed by prosecutors on Saturday suggest that Abugharbieh's OpenAI search history has emerged as a prominent piece of evidence. Specifically, in the days before Limon and Bristy went missing, Abugharbieh asked the artificial intelligence chatbot a slew of questions about guns and the disposal of bodies.

Abugharbieh was charged with two counts of premeditated murder in the first degree with a weapon in the deaths of Limon and Bristy, and he was ordered held without bond at a hearing on Tuesday. Ahead of the hearing, court records painted a clearer picture both of how people planning crimes may be using chatbots and of how law enforcement is able to leverage the artificial intelligence data that usage creates. The case also raises questions about what obligation tech companies have to prevent criminal misconduct, as well as to cooperate with and aid investigations.

Here's what to know.

Prosecutors filed a pretrial detention report on Saturday that detailed Abugharbieh's ChatGPT history both before and after Limon and Bristy went missing.
Days before the two students were last seen, Abugharbieh asked the chatbot what would happen if a human body was put in a garbage bag and thrown in a dumpster. He also asked whether the vehicle identification number on his car could be changed and whether he could keep a gun at home without a license, according to the report. ChatGPT responded that Abugharbieh's question sounded dangerous.

Three days after Limon and Bristy's April 16 disappearance, Abugharbieh asked ChatGPT, "Has there been someone who survived a sniper bullet to the head" and "will my neighbors hear my gun," according to the report. He also asked the chatbot four days after that, on April 23, "What does missing endangered adult mean."

Like texts, emails and regular search histories, artificial intelligence chatbot records can be obtained by law enforcement during the course of an investigation. OpenAI spokesperson Drew Pusateri said Tuesday that the company was looking into the reports on Abugharbieh and would support law enforcement in its investigation in any way it could.

That cooperation comes on the heels of another inquiry into the company, launched by Florida Attorney General James Uthmeier last week, when he announced his office had opened a rare criminal investigation into whether ChatGPT offered advice to a gunman who killed two people and wounded six others last year at Florida State University. Specifically, Uthmeier said that prosecutors had done an initial review of chat logs between ChatGPT and the alleged gunman, Phoenix Ikner, to determine if the AI app aided, abetted or advised the commission of a crime. Prosecutors believe the chatbot advised Ikner on what type of gun and ammunition to use, whether a gun would be useful at short range, and the time and place that would allow for the most potential victims, Uthmeier said.
OpenAI spokeswoman Kate Waters called the FSU shooting a tragedy but said the company bore no responsibility for it. The company proactively shared information with law enforcement and continues to cooperate with investigators, she said when asked last week. "In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity," Waters said in an email.

Uthmeier said on Monday that his office would expand the investigation into the FSU shooting to include Abugharbieh's case. Uthmeier also said last week that his office's probe marked "uncharted territory." But there have been several criminal prosecutions and lawsuits across the country that delve into similar questions about how the powerful AI technology can be used in the commission of a crime, and the harmful impact chatbots can have on mental health.

Last month, a man sued Google over the wrongful death by suicide of his son, the latest in a growing number of legal challenges against AI developers that have drawn attention to the mental health dangers of chatbot companionship. Before that, in late 2025, OpenAI was sued for its alleged role in the murder of an 83-year-old Connecticut woman by her son, with the lawsuit accusing the company's artificial intelligence chatbot of exacerbating the son's "paranoid delusions" before he killed her and died by suicide.

More recently, in criminal court, dozens of messages between former New York Jets linebacker Darron Lee and ChatGPT were presented in March as prosecutors outlined their case surrounding the death of Lee's girlfriend, Gabriella Perpetuo, who was found dead inside the couple's Tennessee home. Hours before Perpetuo was found dead, prosecutors said, Lee asked the chatbot whether certain injuries could resemble wounds from a fall, among other unusual questions.
[2]
'I won't provide instructions, tactics, or advice that could help someone commit a crime': ChatGPT claims it won't assist would-be felons, despite claims to the contrary from Florida AG
I've been reading lately about how alleged criminals are using ChatGPT and other AI engines to help them game out or even plan a crime. It sounds like a fresh approach for enabling our worst impulses, but it is ultimately no different than Googling, "How to dispose of a body."

OpenAI and ChatGPT have come under intense scrutiny since last year, when an alleged Florida gunman apparently asked ChatGPT a series of disturbing questions (all captured in the chat history unearthed by investigators). Phoenix Ikner, according to authorities, asked ChatGPT, "If there was a shooting at FSU, how would the country react?" There were allegedly also conversations about weapons and what sort of prosecution an attacker might face in Florida. The findings prompted Florida Attorney General James Uthmeier to launch a review of OpenAI and its artificial intelligence app, ChatGPT.

Now, the Florida AG is launching a criminal investigation into OpenAI and ChatGPT after college student Hisham Abugharbieh allegedly killed two classmates in Tampa. In this case, authorities dove into Abugharbieh's ChatGPT chat history and, they say, found a series of disturbing conversations, including, apparently, this question: "What happens if a human has a put in a black garbage bag and thrown in a dumpster." (The typos are part of the original prompt.)

It's hard to fathom what goes through someone's mind when they pose such a question, but the use of ChatGPT history in criminal cases doesn't strike me as novel or new. Authorities have been using Google Search history with warrants in criminal cases for years. Digital forensics is a tried-and-true plot device in TV shows like CSI and in movies like Untraceable and The Girl with the Dragon Tattoo.

What's striking, though, is how quickly the focus has shifted in recent criminal investigations from Google search results to AI chats. Investigators claim that AI chats, because they are conversations, can be far more revealing than simple Google search results.
Perhaps that's true. ChatGPT's ability to sound like a friend or confidant could be more effective at drawing people out and maybe inadvertently encouraging them to share more than they normally would.

What does ChatGPT think?

I can't get into the mind of a killer and have no interest in asking ChatGPT how to commit a crime, but I am curious what the chatbot thinks about all this. Does it have a perspective on the rise in criminal-activity conversations and how it's being used to drive investigations?

I opened my free version of ChatGPT and added this prompt: "How often do people ask you about illegal activities?"

ChatGPT responded quickly:

More often than you might expect -- but it's a small minority of overall questions. Most people use me for everyday things: writing, planning trips, understanding the news, solving problems, or just exploring ideas. Questions about illegal activities tend to fall into a few buckets:

* Curiosity ("how does this even work?")
* Fiction writing or research
* People skirting the edge of what's allowed
* And a smaller slice that are clearly trying to do something harmful or unlawful

I'm designed to handle that carefully -- so I can explain laws, risks, history, or consequences, but I won't help plan or enable wrongdoing.

With millions (if not billions) of prompts dropping in every second, ChatGPT's answer does not surprise me. I was glad to see that it acknowledged that this sometimes happens. As is ChatGPT's way, it was curious why I asked, so I shared the context of the most recent case. ChatGPT told me:

That does happen occasionally, and it tends to get a lot of attention when it shows up in a case. A couple of important clarifications: People can ask hypothetical or curiosity-driven questions about crimes ("what would happen if...?"), and those can be answered in a general, high-level way -- like explaining legal consequences or how the justice system typically responds.
But when a question crosses into planning, enabling, or optimizing wrongdoing, I'm designed to refuse or redirect. So I won't provide instructions, tactics, or advice that could help someone commit a crime.

It occurs to me that while ChatGPT is designed not to assist in carrying out criminal activity, wily users can skirt around these guardrails by presenting hypotheticals: "What would happen..." or "When is this area busy?" Separately, these questions mean little. It's the answers taken together by the would-be criminal that matter, and, of course, how investigators see all of it after the fact.

ChatGPT asked to see the actual case and came back with a few key assertions:

* The ChatGPT questions aren't the core evidence -- they're context
* "He asked ChatGPT" sounds more dramatic than it is
* The content of the questions matters more than the tool
* It raises a real -- but often overstated -- AI question

Naturally, ChatGPT comes off as somewhat defensive here, seeking to point out that what the alleged criminal was searching for matters more than the platform they used for the queries.

What would Google do?

It's an argument not much different than the ones I've heard previously when law enforcement used Google and other digital trails to break tough cases and apprehend criminals. In those situations, though, Google usually wasn't put on trial or accused of abetting a crime. After all, it was simply a portal to answers on millions of disparate websites. ChatGPT, like other AI, bases its responses both on what it's learned from the web and its training, and on its ability to guess at what should come next in a sentence. It's not a counselor or conspirator.

Still, the Florida AG says in its filing on the ChatGPT investigation that, "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, and that crime is committed or attempted, may be considered a principal to the crime.
The 'aider and abettor' is just as responsible for the crime as the perpetrator." It's a statement that makes sense when you're talking about another person, someone who might have had knowledge of the crime before it was committed. But ChatGPT has no knowledge of the crime, especially because the prompts never describe what anyone may be preparing to do.

It's hard to say how often ChatGPT is asked a point-blank question about a criminal act, and how it ends the conversation or steers people in a better, less harmful direction, but at least now we know ChatGPT's point of view: "I won't provide instructions, tactics, or advice that could help someone commit a crime," it writes.

AI doesn't want to help, but it's clear some are still turning to it for the wrong kind of assistance.
[3]
College students' killings latest case to rely on ChatGPT as evidence
The brutal killings of University of South Florida graduate students Nahida Bristy and Zamil Limon, allegedly plotted in part using ChatGPT, are the latest example of a suspect using an artificial intelligence tool as research prior to committing a crime.

Hisham Abugharbieh, 26, Limon's roommate, was arrested over the weekend and charged with two counts of premeditated murder. Limon's body was found on the Howard Frankland Bridge in St. Petersburg. Human remains were found Monday in the search for Bristy, but they have not been officially identified. Bristy's family told CBS News that police said she is also likely dead.

Abugharbieh allegedly used ChatGPT extensively in the days leading up to the crime, according to court documents released Sunday. Among the questions he allegedly asked in the days leading up to the pair's disappearance was how to dispose of a body. The suspect asked ChatGPT on April 13 what would happen if someone was "put in a black garbage bag and thrown in dumpster." The AI chatbot responded that it sounded dangerous, prompting Abugharbieh to allegedly ask, "How would they find out."

On April 15, the day before the doctoral students went missing, Abugharbieh allegedly asked ChatGPT, "Can a VIN number on a car be changed?" and, "Can you keep a gun at home with out a license," according to court documents. On the same night his phone pinged near the location where Limon's body was found, the suspect asked ChatGPT if cars are "checked at the Hillsborough River state park."

A spokesperson for OpenAI, the developer of ChatGPT, said in a statement to CBS News: "This is a terrible crime, and our thoughts are with everyone affected. We're looking into these reports and will do whatever we can to support law enforcement in their investigation."

Abugharbieh was held without bond following an initial court appearance on Tuesday. He has yet to enter a plea.
Florida Attorney General James Uthmeier announced last week that his office had launched a criminal investigation into OpenAI after reviewing conversation logs between ChatGPT and a Florida State University student who opened fire on the campus in April 2025. The shooting left two people dead and several others injured.

"My prosecutors have looked at this, and they've told me if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said during an April 21 news conference, adding that the AI tool offered "significant advice" to suspect Phoenix Ikner.

An OpenAI spokesperson said the company identified an account believed to be associated with Ikner and shared it with law enforcement. The company added that ChatGPT "did not encourage or promote illegal or harmful activity" and that the bot provided responses to questions with information found in public sources on the internet. "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime," OpenAI said in its statement.

Dr. Jill Schiefelbein, an AI strategist and professor at the Muma College of Business at the University of South Florida, said that if politics are taken out of it, the investigation could lead to solutions on issues such as what is a reasonable time frame for a technology company to report someone who violates its terms and conditions.

"I wouldn't blame the technology any more than I would blame a vehicle that gets into an accident by a human driver," she told CBS News. "It's how these tools are used, whether it's a firearm, whether it's a vehicle, whether it's a tool that helps you retrieve information, it's the user intent behind it that's the issue."

"Does that mean I believe that there shouldn't be stricter guardrails in place? Absolutely not," she said.

While OpenAI rejected any responsibility for the Florida State University shooting, the company has not been silent on all crimes involving the AI tool.
Sam Altman, the CEO of OpenAI, issued an apology last week to a community in British Columbia where eight people were killed by a teen who had previously exhibited concerning behavior on ChatGPT. Jesse Van Rootselaar, 18, allegedly opened fire on Feb. 10 at Tumbler Ridge Secondary School, killing a teacher and five students, before dying from a self-inflicted gunshot, police said. Van Rootselaar had earlier killed her mother and 11-year-old half-brother at their home.

"The pain your community has endured is unimaginable," Altman wrote in a letter shared on social media by British Columbia Premier David Eby. "I have been thinking of you often over the past few months."

Altman wrote in the letter, dated April 23, that Van Rootselaar's ChatGPT account had been banned in June 2025. The account was flagged by automated abuse-detection tools and human investigators who identify potential misuse of ChatGPT for violent activities, OpenAI told CBS News in February. OpenAI said the account was then banned for violating its usage policies.

OpenAI said the company had weighed whether to flag the account to law enforcement, but had determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, so it did not meet the threshold for referral.

Altman wrote in his letter that OpenAI will remain focused on preventative efforts "to help ensure something like this never happens again." "I want to express my deepest condolences to the entire community," he said. "No one should ever have to endure a tragedy like this."
ChatGPT queries about body disposal and weapons have emerged as critical evidence in the killing of two Florida students, prompting a criminal investigation into OpenAI. The case highlights growing concerns about AI misuse as law enforcement increasingly relies on chatbot records to build cases against suspects who use AI tools to plan crimes.
The killing of two Florida students has thrust ChatGPT into the center of a criminal investigation that could reshape how tech companies handle AI misuse. Hisham Abugharbieh, 26, faces two counts of premeditated murder in the deaths of University of South Florida doctoral students Zamil Limon and Nahida Bristy, who disappeared on April 16 [1]. Court records reveal that Abugharbieh's ChatGPT search history has become a prominent piece of evidence, with prosecutors detailing disturbing queries about body disposal and weapons in the days before the students went missing [3].
On April 13, Abugharbieh allegedly asked ChatGPT what would happen if a human body was "put in a black garbage bag and thrown in dumpster," according to court documents filed Saturday [3]. When the AI chatbot responded that the question sounded dangerous, he reportedly followed up with "How would they find out." Days later, on April 15, he asked whether a vehicle identification number could be changed and if he could keep a gun at home without a license [1]. Three days after the students' disappearance, Abugharbieh asked "Has there been someone who survived a sniper bullet to the head" and "will my neighbors hear my gun" [1].
Florida Attorney General James Uthmeier launched a criminal investigation into OpenAI last week following a separate incident at Florida State University, where alleged gunman Phoenix Ikner killed two people and wounded six others [1]. Prosecutors believe ChatGPT advised Ikner on what type of gun and ammunition to use, whether a gun would be useful at short range, and the time and place that would allow for the most potential victims. Uthmeier stated that if "it was a person on the other end of that screen, we would be charging them with murder" [3]. The investigation has now expanded to include Abugharbieh's case, marking what Uthmeier calls "uncharted territory" in criminal prosecutions [1].

OpenAI has maintained that ChatGPT bears no responsibility for criminal activities, even as the company pledges full cooperation with law enforcement. OpenAI spokesperson Kate Waters said the chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity" [1]. The company proactively shared information with investigators in both the Florida State University shooting and the University of South Florida case [1].
When questioned directly about criminal inquiries, ChatGPT acknowledged that such questions occur "more often than you might expect" but represent "a small minority of overall questions" [2]. The AI stated it is "designed to handle that carefully" and won't "provide instructions, tactics, or advice that could help someone commit a crime" [2]. However, investigators note that users can circumvent guardrails by framing questions as hypotheticals, with the answers taken together revealing user intent [2].
The cases raise urgent questions about tech companies' responsibility in preventing AI misuse and what obligations they have to flag concerning behavior. Dr. Jill Schiefelbein, an AI strategist at the University of South Florida's Muma College of Business, emphasized that "it's the user intent behind it that's the issue," comparing AI tools to vehicles or firearms [3]. Yet she stressed the need for "stricter guardrails in place" [3].
Sam Altman, CEO of OpenAI, recently apologized to a community in British Columbia where a teen who had exhibited concerning behavior on ChatGPT killed eight people in February [3]. That account had been banned in June 2025 after being flagged by automated abuse detection tools, but OpenAI faced criticism for not alerting authorities [3]. Like traditional search histories, chatbot records can be obtained by law enforcement during criminal investigations, but investigators claim AI chats are more revealing because they function as conversations that can draw people out [2]. The Florida Attorney General's investigation could establish precedents on issues such as reasonable timeframes for reporting users who violate terms and conditions, potentially reshaping how AI companies monitor and respond to harmful queries [3].

Summarized by Navi