ChatGPT Used to Plan Cybertruck Explosion in Las Vegas: A First for AI-Assisted Crime in the US

Las Vegas police reveal that ChatGPT was used to plan the explosion of a Tesla Cybertruck outside Trump International Hotel, marking the first known case of AI being used to orchestrate an attack on US soil.

AI-Assisted Attack: ChatGPT Used in Cybertruck Explosion

In a groundbreaking case, Las Vegas police have revealed that ChatGPT, a popular AI language model, was used to plan the explosion of a Tesla Cybertruck outside the Trump International Hotel on New Year's Day. This marks the first known instance of AI being employed to orchestrate an attack on US soil, raising significant concerns about the potential misuse of artificial intelligence for criminal activities.[1][2]

The Incident and the Perpetrator

The explosion was carried out by Matthew Livelsberger, a 37-year-old active-duty US Army Green Beret from Colorado Springs. Livelsberger fatally shot himself just before detonating the Cybertruck, which was loaded with 60 pounds of pyrotechnic material. The blast resulted in minor injuries to seven people.[2][3]

ChatGPT's Role in Planning the Attack

According to Las Vegas Metropolitan Police Department Sheriff Kevin McMahill, Livelsberger used ChatGPT extensively to plan his attack. Over the course of an hour in the days leading up to the incident, he asked the AI more than 17 questions related to his crime.[2] These queries included:

  1. Sourcing explosives for the blast
  2. Relevant laws to be aware of
  3. Where to buy guns in Denver
  4. The legality of fireworks in Arizona
  5. The speed at which a firearm round would need to be fired to ignite the explosives in the truck[2][4]

Crucially, ChatGPT provided information on the specific firing speed required to ignite the chosen explosive, which was instrumental in ensuring the success of the blast.[5]

Implications for AI Safety and Regulation

This incident has reignited debate about the potential dangers of AI and the need for stronger safeguards. Sheriff McMahill described the use of AI in this context as a "game-changer," underscoring long-held fears that generative AI could facilitate crime.[2][4]

OpenAI, the company behind ChatGPT, responded to the incident, stating that they are "committed to seeing AI tools used responsibly" and that their "models are designed to refuse harmful instructions." They also noted that in this case, ChatGPT provided information already publicly available on the internet and included warnings against harmful or illegal activities.[3][4]

Broader Context and Concerns

This case aligns with previous warnings from law enforcement agencies about the potential misuse of AI. In 2023, Europol cautioned about ChatGPT's potential applications in various criminal activities, including phishing, fraud, disinformation, and cybercrime.[2]

The incident has also raised questions about the effectiveness of current safeguards in AI systems. McMahill noted that he was unaware of any mechanisms that would have flagged Livelsberger's suspicious queries to ChatGPT, despite their nature and frequency.[2][5]

As AI grows more capable and more accessible, this case serves as a stark reminder of the urgent need for robust regulations and ethical guidelines to prevent such powerful technologies from being misused for criminal ends.
