Curated by THEOUTPOST
On Thu, 24 Apr, 12:04 AM UTC
2 Sources
[1]
Slopsquatting: The worrying AI hallucination bug that could be spreading malware
Software sabotage is rapidly becoming a potent new weapon in the cybercriminal arsenal, augmented by the rising popularity of AI coding. Instead of inserting malware into conventional code, criminals are now using AI-hallucinated software packages and library names to fool unwary programmers.

It works like this: AI models, especially the smaller ones, regularly hallucinate (make up) non-existent components while they are being used for coding. Malicious actors with coding skills study the hallucinated output from these models and then publish malware under the same names. The next time an AI requests the fake package, malware is served instead of an error message. At that point the damage is done, as the malware becomes an integrated part of the final code.

A recent research report, which evaluated 16 popular large language models used for code generation, unveiled a staggering 205,474 unique examples of hallucinated package names. These names are completely fictional, but cybercriminals can use them as a way of inserting malware into Python and JavaScript software projects.

Perhaps unsurprisingly, the most common culprits for these package hallucinations are the smaller open-source models, which are used by professionals and homebrew vibe-coders (those who code via AI prompts) on their local computers rather than in the cloud. CodeLlama, Mistral 7B, and OpenChat 7B were among the models that generated the most hallucinations; the worst, CodeLlama 7B, delivered a whopping 25% hallucination rate when generating code this way.

There is, of course, a long and storied history of inserting malware into everyday software products using what are known as supply chain attacks. This latest iteration follows on from typosquatting, in which misspellings of common names are used to fool coders into pulling in bad code. Programmers on a deadline may mistakenly use libraries, packages, and tools that have been deliberately misspelled and contain a malicious payload. One early example was a misspelled package called 'electorn', a twist on Electron, a popular application framework.

These attacks work because a large percentage of modern application programming involves downloading ready-made components to use in a project. These components, often known as dependencies, can be downloaded and installed with a single command, which makes it trivially easy for a cybercriminal to take advantage of a keyboard slip that requests the wrong name by mistake. Because the integrated malware is extremely subtle, it can go unnoticed in the final product or application. The end result, however, is the same: unwary users triggering malware without understanding or knowing what is under the hood of their application.

What makes the arrival of AI more problematic in this regard is that AI coding tools can and will automatically request dependencies as part of their coding process. It may all sound a little random, because it is, but with the volume of coding now transitioning to the AI arena, this type of opportunist attack is likely to rise significantly.

Security researchers are now focusing their attention on mitigating this kind of attack by improving the fine-tuning of models. New package verification tools are also coming onto the market that can catch this type of hallucination before it enters the public arena. In the meantime, the message is: coders beware.
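The riskiest moment is the install itself, so one simple defence is to confirm that an AI-suggested package even exists on the registry before it goes anywhere near a project. The following is a minimal sketch, not a hardened tool, that queries PyPI's public JSON metadata endpoint and flags names the index has never heard of; a missing project is a strong hint the name was hallucinated, or is about to be slopsquatted.

```python
import json
import sys
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def package_exists(name: str) -> bool:
    """Return True if PyPI knows about this project name."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            json.load(resp)          # valid JSON means the project is registered
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:          # unknown project: likely hallucinated
            return False
        raise                        # other errors (rate limit, outage) need a human

if __name__ == "__main__":
    # Usage: python check_packages.py requests flask some-ai-suggested-lib
    for candidate in sys.argv[1:]:
        if package_exists(candidate):
            print(f"OK       {candidate}")
        else:
            print(f"SUSPECT  {candidate}: not on PyPI, do not install blindly")
```

A lookup like this cannot prove a package is safe, since an attacker may already have registered the hallucinated name, but it turns the silent failure mode described above into an explicit warning.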
[2]
AI Hallucinations & Slopsquatting: A Caution for Blockchain Devs
AI Hallucinations and Slopsquatting: The Hidden Risk in Autocomplete Coding

One of the first topics to enter the mainstream artificial intelligence (AI) debate was AI hallucinations: outputs that sound fluent and convincing but are factually or logically incorrect. When generating text, AI models can hallucinate made-up statistics, misquotes, or fake sources. In autocomplete coding, hallucinations occur when the AI confidently suggests faulty code. Such suggestions may compile without errors and appear technically sound, yet they introduce AI-generated bugs and vulnerabilities, especially in security-critical systems like smart contracts. This article discusses the growing concern behind faulty AI-generated code and why developer vigilance matters more than ever.

What Are AI Hallucinations in Autocomplete Coding?

AI hallucinations in autocomplete coding are outputs created by large language models (LLMs) when they are used for coding assistance. These outputs follow patterns learned during training, so they often look convincing but are wrong, for example non-existent or incorrect package names. A recent case involved fake packages mimicking bitcoinlib, which were used to target crypto wallets through malicious Python libraries.

AI hallucinations happen because the model does not understand facts. It does not think; it follows statistical patterns from its training data to predict what comes next. As a result, a hallucinated code snippet can read quite convincingly and resemble something users expect to see. It might refer to a function that does not exist, misuse an API, or create a logical contradiction. And because it looks polished, it can slip through reviews without anyone noticing.

Slopsquatting Explained: A New AI-Generated Threat

Slopsquatting, a form of typosquatting, is a deliberate attack strategy that exploits the hallucinations produced by code completion tools. The attack works in stages: attackers study the output of coding models and collect the package names the models hallucinate, register those fictional names on public registries with a malicious payload attached, and then wait for an AI assistant or a hurried developer to request the package, at which point malware is installed instead of an error being raised.

Real-World Examples of AI-Caused Coding Bugs

When hallucinations make it into production code, they do not just cause errors; they open the door to full-blown security failures. These are not theoretical risks; they have already happened. A 2025 study found that code LLMs suggested over 200,000 fake packages, with open-source models hallucinating at rates four times higher than commercial ones. The fake bitcoinlib look-alikes mentioned above, malicious Python libraries that targeted crypto wallets, show how closely these fabricated names can mimic trusted projects.

Why Vibe Coding Poses Risks to Blockchain Security

Vibe coding is an emerging approach to software development that leverages AI to generate code from natural language inputs, letting developers focus on high-level design and intent rather than low-level implementation details. It rewards confidence over correctness. Blockchain developers under pressure who accept AI-suggested code because it feels familiar, even when it lacks context, accuracy, or safety, become easy targets for this threat. The devil is in the details: a hallucinated import or a slopsquatted dependency that reaches a deployed smart contract cannot simply be patched out, because on-chain code is immutable once published.

Best Practices To Prevent AI-Generated Coding Vulnerabilities

To avoid damage from AI hallucinations and slopsquatting attacks, developers should verify that every AI-suggested dependency actually exists and comes from a trusted, audited source, pin dependencies to known-good versions, review generated code as critically as hand-written code, and write precise prompts that leave the model less room to improvise; a simple import audit along these lines is sketched below. AI cannot replace developers, but it can be used for support.
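One such validation, making sure generated code only imports packages the team has actually vetted, is straightforward to automate. Below is a minimal sketch that assumes a hypothetical approved_packages.txt allowlist maintained by the team; it uses Python's standard ast module to list every import in a generated file and flag anything outside the allowlist for manual review.

```python
import ast
import sys
from pathlib import Path

def imported_packages(source: str) -> set[str]:
    """Collect the top-level package names imported by a piece of Python source."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

if __name__ == "__main__":
    # Usage: python audit_imports.py generated_contract_client.py
    # approved_packages.txt is a hypothetical one-name-per-line allowlist.
    allowlist = set(Path("approved_packages.txt").read_text().split())
    source_file = Path(sys.argv[1])
    unknown = imported_packages(source_file.read_text()) - allowlist
    for name in sorted(unknown):
        print(f"REVIEW   {name}: not on the approved dependency list")
    sys.exit(1 if unknown else 0)
```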
Support from AI must come with better training data, stricter safeguards, and tools that detect hallucinations before they become threats. As models evolve, security must scale with them. The future of secure coding lies in human oversight, smarter AI tuning, regulation, and shared responsibility across development teams, model providers, and open-source communities.

Conclusion

AI-generated code can significantly accelerate blockchain development, but it also introduces serious security risks. Hallucinated imports, slopsquatted packages, and flawed logic are not theoretical; they are appearing in real-world smart contract projects. Recent research shows that open-source language models hallucinate at alarmingly high rates, producing thousands of fake packages that closely mimic trusted libraries. In the context of blockchain, where immutability and on-chain execution leave little room for error, these risks are amplified. Autocomplete coding may feel like a time-saver, but it is quickly becoming a security blind spot. To build securely with AI tools, developers must enforce strict validations, write precise prompts, and depend only on verified, audited libraries. AI can assist, but secure smart contracts still require vigilant human oversight.
A new cybersecurity threat called slopsquatting is emerging, where AI-generated hallucinations in code are exploited by malicious actors to spread malware and compromise software security.
In the rapidly evolving landscape of artificial intelligence and cybersecurity, a new threat has emerged: slopsquatting. This sophisticated attack strategy exploits AI hallucinations generated by code completion tools, potentially exposing developers and users to malware and other security risks 1.
AI hallucinations occur when large language models (LLMs) produce outputs that appear plausible but are factually or logically incorrect. In the context of code generation, these hallucinations can manifest as non-existent or incorrect package names, function calls, or API usages 2.
A recent study evaluating 16 popular LLMs used for code generation uncovered a staggering 205,474 unique examples of hallucinated package names. The smaller open-source models, such as CodeLlama, Mistral 7B, and OpenChat 7B, were found to be particularly prone to generating these hallucinations 1.
Slopsquatting works by exploiting these AI-generated hallucinations. Cybercriminals study the output from AI models and publish malicious packages under names matching the hallucinated components. When an AI assistant or a developer subsequently requests the fake package, malware is served instead of an error message, potentially compromising the entire software project 1.
This attack vector is particularly concerning because AI coding tools can automatically request dependencies as part of their workflow, dependencies are typically installed with a single command, the injected malware is subtle enough to go unnoticed in the finished application, and the volume of code now being produced with AI assistance keeps growing.
The blockchain and cryptocurrency sectors are not immune to this threat. In fact, the immutable nature of blockchain technology amplifies the potential risks. A recent case involved fake packages mimicking 'bitcoinlib', which were used to target crypto wallets through malicious Python libraries 2.
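Name mimicry of this kind can often be caught mechanically. The sketch below is a minimal illustration, assuming a short hand-maintained list of trusted package names, that uses Python's difflib to flag any suggested dependency whose name sits suspiciously close to a well-known library, the pattern seen in the fake bitcoinlib packages.

```python
import difflib

# Hypothetical allowlist of libraries the team actually trusts and uses.
TRUSTED = ["bitcoinlib", "web3", "requests", "cryptography", "eth-account"]

def flag_lookalikes(candidate: str, cutoff: float = 0.8) -> list[str]:
    """Return trusted names that a candidate package name closely resembles."""
    if candidate in TRUSTED:
        return []  # exact match: the name itself is not suspicious
    return difflib.get_close_matches(candidate, TRUSTED, n=3, cutoff=cutoff)

if __name__ == "__main__":
    for name in ["bitcoinllib", "web3", "requets", "totally-novel-lib"]:
        matches = flag_lookalikes(name)
        if matches:
            print(f"SUSPECT  {name}: looks like {', '.join(matches)}")
        else:
            print(f"OK       {name}")
```

A similarity check like this complements a plain registry lookup: one catches names that do not exist at all, the other catches names that do exist but are imitating something else.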
To combat the risks associated with AI hallucinations and slopsquatting, security researchers and developers are focusing on several key areas: fine-tuning models to reduce the rate of hallucinated package names, building package verification tools that catch fabricated dependencies before they reach a project, and improving the training data and safeguards around code-generation models. One such verification check, based on how recently a package was first published, is sketched below.
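A very young registry entry is a common tell, because a slopsquatter has to register the hallucinated name shortly before anyone installs it. The sketch below is a minimal illustration, assuming dependencies live on PyPI and that a 90-day cutoff (an arbitrary choice) is worth a human look, that reports how long ago a project's first file was uploaded.

```python
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_age_days(name: str) -> float:
    """Days since the earliest file upload recorded for a PyPI project."""
    with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
        data = json.load(resp)
    uploads = [
        datetime.fromisoformat(file["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for file in files
    ]
    if not uploads:
        return 0.0  # a project with no uploaded files at all is itself suspicious
    return (datetime.now(timezone.utc) - min(uploads)).total_seconds() / 86400

if __name__ == "__main__":
    # Usage: python package_age.py requests flask some-ai-suggested-package
    for candidate in sys.argv[1:]:
        try:
            age = package_age_days(candidate)
        except urllib.error.HTTPError:
            print(f"SUSPECT  {candidate}: not found on PyPI")
            continue
        status = "SUSPECT" if age < 90 else "OK"  # 90-day threshold is arbitrary
        print(f"{status:8} {candidate}: first upload about {age:.0f} days ago")
```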
As AI continues to play a larger role in software development, the industry must adapt to these new challenges. The future of secure coding lies in a balance between leveraging AI's capabilities and maintaining human oversight. This includes reviewing AI-generated code as critically as hand-written code, verifying every suggested dependency before it is installed, tuning models to reduce hallucinations, and sharing responsibility for security across development teams, model providers, and open-source communities.
While AI can significantly accelerate development processes, it cannot replace the critical thinking and security-conscious approach of human developers. As the landscape of AI-assisted coding evolves, so too must our strategies for ensuring software security and integrity.