AI-Generated Code Hallucinations: A New Frontier in Software Supply Chain Attacks

Researchers uncover a significant security risk in AI-assisted coding: 'package hallucinations', where AI models suggest non-existent software packages, potentially leading to a new type of supply chain attack called 'slopsquatting'.

AI-Generated Code Poses New Security Risks

Researchers from the University of Texas at San Antonio (UTSA) have uncovered a significant security vulnerability in AI-assisted software development. Their study, accepted for publication at the USENIX Security Symposium 2025, reveals that large language models (LLMs) frequently generate insecure code, particularly through a phenomenon known as "package hallucination" [1].

Understanding Package Hallucinations

Package hallucinations occur when an AI model recommends or generates code that includes non-existent third-party software libraries. This seemingly simple error can lead to serious security risks, as explained by Joe Spracklen, the lead researcher:

"It doesn't take a convoluted set of circumstances... It's just typing in one command that most people who work in those programming languages type every day. That's all it takes."

1
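
For illustration, here is a minimal sketch of what such a hallucination looks like in practice. The package name is hypothetical, invented purely for this example; the point is that nothing in an AI-suggested snippet signals that a dependency is fake.

    # Sketch of a package hallucination (the package name is made up for this example).
    # An assistant suggests "pip install fast-json-schema-validator" plus this import;
    # nothing about the snippet reveals that the package was never published.
    try:
        from fast_json_schema_validator import validate_payload  # hallucinated dependency
    except ImportError:
        # Benign outcome: the package does not exist, so the install and import fail.
        # Dangerous outcome: an attacker has registered the name, the install succeeds,
        # and their code runs on the developer's machine.
        print("Dependency not found on PyPI -- exactly the gap slopsquatting exploits")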

Prevalence and Impact

The study's findings are alarming [1]:

  • Up to 97% of software developers incorporate generative AI into their workflow
  • 30% of code written today is AI-generated
  • Across 30 different tests, 440,445 out of 2.23 million code samples referenced hallucinated packages
  • GPT-series models had a 5.2% hallucination rate, while open-source models reached 21.7%

The Birth of "Slopsquatting"

This vulnerability has given rise to a new type of supply chain attack dubbed "slopsquatting" by Seth Michael Larson of the Python Software Foundation [2]. Malicious actors can exploit these hallucinations by:

  1. Identifying commonly hallucinated package names
  2. Creating malicious packages with these names
  3. Uploading them to popular package registries like PyPI or npm [3] (a defensive existence check is sketched after this list)
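
One practical counter-measure is to confirm that an AI-suggested dependency is actually registered before installing it. The sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json), which returns metadata for registered packages and a 404 for unknown names; the second package name is hypothetical, reused from the earlier example. Note that mere existence is not proof of safety: a slopsquatted name would pass this check, so registration date, maintainers, and download history still deserve scrutiny.

    # Minimal sketch: check whether package names suggested by an AI assistant
    # are registered on PyPI before running "pip install".
    import json
    import urllib.error
    import urllib.request

    def exists_on_pypi(package: str) -> bool:
        """Return True if PyPI serves release metadata for `package`."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                json.load(resp)      # parse the metadata to confirm a real response
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:      # unknown name -- likely hallucinated
                return False
            raise

    for name in ["requests", "fast-json-schema-validator"]:  # second name is hypothetical
        status = "registered on PyPI" if exists_on_pypi(name) else "NOT registered -- do not install blindly"
        print(f"{name}: {status}")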

Real-World Implications

The threat is not merely theoretical. Feross Aboukhadijeh, CEO of security firm Socket, warns:

"With AI tools becoming the default assistant for many, 'vibe coding' is happening constantly. Developers prompt the AI, copy the suggestion, and move on. Or worse, the AI agent just goes ahead and installs the recommended packages itself."

2

Amplification by Search Engines

The problem is further exacerbated by search engines. When developers search for these hallucinated package names, they may encounter AI-generated summaries that lend false legitimacy to the non-existent or malicious packages [2].

Mitigation Strategies

To combat this emerging threat, experts recommend [4]:

  1. Manually verifying package names and never assuming AI-generated code snippets are safe
  2. Using dependency scanners, lockfiles, and hash verification (a minimal hash-check sketch follows this list)
  3. Lowering AI "temperature" settings to reduce hallucinations
  4. Testing AI-generated code in isolated environments before deployment
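
On the second recommendation, pip already supports this workflow natively: pinning each dependency with a "--hash=sha256:..." entry in requirements.txt and installing with "pip install --require-hashes" rejects any artifact whose digest does not match. The sketch below illustrates the underlying idea with Python's standard hashlib; the file name and expected digest are placeholders, not real values.

    # Sketch of hash verification: refuse to use a downloaded artifact whose
    # SHA-256 digest differs from the value pinned in a lockfile.
    import hashlib
    from pathlib import Path

    EXPECTED_SHA256 = "0" * 64  # placeholder; in practice this comes from a lockfile

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()

    artifact = Path("example_pkg-1.0.0-py3-none-any.whl")  # hypothetical downloaded wheel
    if artifact.exists() and sha256_of(artifact) != EXPECTED_SHA256:
        raise SystemExit("Hash mismatch: artifact does not match the pinned digest")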

Industry Response

The Python Software Foundation and other organizations are working to make package abuse more difficult, but this requires time and resources [2]. Meanwhile, the UTSA researchers have disclosed their findings to major AI model providers, including OpenAI, Meta, DeepSeek, and Mistral AI [1].

As AI continues to reshape software development practices, the industry must remain vigilant against these new forms of supply chain attacks. The challenge lies in balancing the productivity gains of AI-assisted coding with robust security measures to protect against increasingly sophisticated threats.
