AI-Generated Code Hallucinations: A New Frontier in Software Supply Chain Attacks

Curated by THEOUTPOST

On Sun, 13 Apr, 12:01 AM UTC

Researchers uncover a significant security risk in AI-assisted coding: 'package hallucinations', in which AI models suggest non-existent software packages, opening the door to a new type of supply chain attack called 'slopsquatting'.

AI-Generated Code Poses New Security Risks

Researchers from the University of Texas at San Antonio (UTSA) have uncovered a significant security vulnerability in AI-assisted software development. Their study, accepted for publication at the USENIX Security Symposium 2025, reveals that large language models (LLMs) frequently generate insecure code, particularly through a phenomenon known as "package hallucination" 1.

Understanding Package Hallucinations

Package hallucinations occur when an AI model recommends or generates code that includes non-existent third-party software libraries. This seemingly simple error can lead to serious security risks, as explained by Joe Spracklen, the lead researcher:

"It doesn't take a convoluted set of circumstances... It's just typing in one command that most people who work in those programming languages type every day. That's all it takes." 1

Prevalence and Impact

The study's findings are alarming:

  • Up to 97% of software developers incorporate generative AI into their workflow
  • 30% of code written today is AI-generated
  • Across 30 different tests, 440,445 of the 2.23 million package references in the generated code samples pointed to non-existent packages
  • GPT-series models had a 5.2% hallucination rate, while open-source models reached 21.7% 1

The Birth of "Slopsquatting"

This vulnerability has given rise to a new type of supply chain attack dubbed "slopsquatting" by Seth Michael Larson of the Python Software Foundation 2. Malicious actors can exploit these hallucinations by:

  1. Identifying commonly hallucinated package names
  2. Creating malicious packages with these names
  3. Uploading them to popular package registries like PyPI or npm 3 (a defender-side check against such freshly registered names is sketched below)
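
Once an attacker registers a hallucinated name, a plain existence check passes, so defenders need additional signals. One heuristic, sketched here as an illustration rather than as a tool named in the reporting, is to ask PyPI's JSON API how recently a package first appeared and treat very young, sparsely released packages with suspicion:

    # Sketch: flag suspiciously young packages before installing them.
    # Assumes PyPI's JSON API and its upload_time_iso_8601 field.
    import json
    import urllib.request
    from datetime import datetime, timezone

    def first_upload_age_days(name: str) -> int | None:
        """Days since the earliest file upload for `name`, or None if no files."""
        url = f"https://pypi.org/pypi/{name}/json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        uploads = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data.get("releases", {}).values()
            for f in files
        ]
        if not uploads:
            return None
        return (datetime.now(timezone.utc) - min(uploads)).days

    # A package registered only days ago, with a single release, deserves close
    # scrutiny before it goes anywhere near a production dependency list.
    print(first_upload_age_days("requests"))   # long-established package, as a sanity check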

Real-World Implications

The threat is not merely theoretical. Feross Aboukhadijeh, CEO of security firm Socket, warns:

"With AI tools becoming the default assistant for many, 'vibe coding' is happening constantly. Developers prompt the AI, copy the suggestion, and move on. Or worse, the AI agent just goes ahead and installs the recommended packages itself." 2

Amplification by Search Engines

The problem is further exacerbated by search engines. When developers search for these hallucinated package names, they may encounter AI-generated summaries that lend false legitimacy to the non-existent or malicious packages 2.

Mitigation Strategies

To combat this emerging threat, experts recommend:

  1. Manually verifying package names and never assuming AI-generated code snippets are safe
  2. Using dependency scanners, lockfiles, and hash verification (a sketch of the hash check follows this list)
  3. Lowering AI "temperature" settings to reduce hallucinations
  4. Testing AI-generated code in isolated environments before deployment 4
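
On the second recommendation, pip already supports hash-pinned installs via its --require-hashes mode. The underlying check is simple enough to sketch in a few lines of Python; the artifact path and expected digest below are placeholders rather than real values:

    # Sketch of hash verification: refuse any downloaded artifact whose SHA-256
    # digest differs from the one pinned in your lockfile. Placeholder values.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    artifact = Path("downloads/somepackage-1.0.0-py3-none-any.whl")  # placeholder path
    expected = "<sha256 digest recorded in your lockfile>"           # placeholder digest
    if sha256_of(artifact) != expected:
        raise SystemExit(f"Hash mismatch for {artifact} -- refusing to install.")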

Industry Response

The Python Software Foundation and other organizations are working to make package abuse more difficult, but this requires time and resources 2. Meanwhile, the UTSA researchers have disclosed their findings to major AI model providers including OpenAI, Meta, DeepSeek, and Mistral AI 1.

As AI continues to reshape software development practices, the industry must remain vigilant against these new forms of supply chain attacks. The challenge lies in balancing the productivity gains of AI-assisted coding with robust security measures to protect against increasingly sophisticated threats.

