4 Sources
[1]
Amazon's AI Coding Revealed a Dirty Little Secret
Coders who use artificial intelligence to help them write software are facing a growing problem, and Amazon.com Inc. is the latest company to fall victim. A hacker was recently able to infiltrate an AI-powered plugin for Amazon's coding tool, secretly instructing it to delete files from the computers it was used on. The incident points to a gaping security hole in generative AI that has gone largely unnoticed in the race to capitalize on the technology. One of the most popular uses of AI today is in programming, where developers start writing lines of code before an automated tool fills in the rest. Coders can save hours of time debugging and Googling solutions. Startups Replit, Lovable and Figma have reached valuations of $1.2 billion, $1.8 billion and $12.5 billion respectively, according to market intelligence firm Pitchbook, by selling tools designed to generate code, and they're often built on pre-existing models such as OpenAI's ChatGPT or Anthropic's Claude. Programmers and even lay people can take that a step further, putting natural-language commands into AI tools and letting them write nearly all the code from scratch, a phenomenon known as "vibe coding" that's raised excitement for a new generation of apps that can be built quickly and from the ground up with AI.
[2]
Read This Before You Trust Any AI-Written Code
We are in the era of vibe coding, allowing artificial intelligence models to generate code based on a developer's prompt. Unfortunately, under the hood, the vibes are bad. According to a recent report published by data security firm Veracode, about half of all AI-generated code contains security flaws. Veracode tasked over 100 different large language models with completing 80 separate coding tasks, from using different coding languages to building different types of applications. Per the report, each task had known potential vulnerabilities, meaning the models could potentially complete each challenge in a secure or insecure way. The results were not exactly inspiring if security is your top priority, with just 55% of tasks completed ultimately generating "secure" code. Now, it'd be one thing if those vulnerabilities were little flaws that could easily be patched or mitigated. But they're often pretty major holes. The 45% of code that failed the security check produced a vulnerability that was part of the Open Worldwide Application Security Project's top 10 security vulnerabilities: issues like broken access control, cryptographic failures, and data integrity failures. Basically, the output has big enough issues that you wouldn't want to just spin it up and push it live, unless you're looking to get hacked. Perhaps the most interesting finding of the study, though, is not simply that AI models are regularly producing insecure code. It's that the models don't seem to be getting any better. While syntax has significantly improved over the last two years, with LLMs producing compilable code nearly all the time now, the security of said code has basically remained flat the whole time. Even newer and larger models are failing to generate significantly more secure code. The fact that the baseline of secure output for AI-generated code isn't improving is a problem, because the use of AI in programming is getting more popular, and the surface area for attack is increasing. Earlier this month, 404 Media reported on how a hacker managed to get Amazon's AI coding agent to delete the files of computers that it was used on by injecting malicious code with hidden instructions into the GitHub repository for the tool. Meanwhile, as AI agents become more common, so do agents capable of cracking the very same code. Recent research out of the University of California, Berkeley, found that AI models are getting very good at identifying exploitable bugs in code. So AI models are consistently generating insecure code, and other AI models are getting really good at spotting those vulnerabilities and exploiting them. That's all probably fine.
[3]
Nearly half of all code generated by AI found to contain security flaws - even big LLMs affected
Java is the worst offender; Python, C# and JavaScript are also affected. Nearly half (45%) of AI-generated code contains security flaws despite appearing production-ready, new research from Veracode has found. Its study of more than 100 large language models across 80 different coding tasks revealed no improvement in security across newer or larger models - an alarming reality for companies that rely on AI tools to back up, or even replace, human productivity. Java was found to be the worst affected, with a failure rate above 70%, but Python, C# and JavaScript also had failure rates of 38-45%. The news comes as more and more developers rely on generative AI to help them get code written - as much as a third of new Google and Microsoft code could now be AI-generated. "The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built," Veracode CTO Jens Wessling explained. Veracode found LLMs often chose insecure methods of coding 45% of the time, failing to defend against cross-site scripting (86%) and log injection (88%). "Our research shows models are getting better at coding accurately but are not improving at security," Wessling added. Vulnerabilities are also amplified in the modern era of AI - artificial intelligence enables attackers to exploit them faster and at scale. Veracode suggests developers enable security checks in AI-driven workflows to enforce compliance and security. Companies should also adopt AI remediation guidance to train developers, deploy firewalls and use tools that help detect flaws earlier. "AI coding assistants and agentic workflows represent the future of software development... Security cannot be an afterthought if we want to prevent the accumulation of massive security debt," Wessling concluded.
[4]
Amazon's AI coding revealed a dirty little secret
While AI enhances coding speed, it introduces new risks, necessitating human oversight and security prioritization to mitigate potential threats. Coders who use artificial intelligence to help them write software are facing a growing problem, and Amazon.com Inc. is the latest company to fall victim. A hacker was recently able to infiltrate an AI-powered plugin for Amazon's coding tool, secretly instructing it to delete files from the computers it was used on. The incident points to a gaping security hole in generative AI that has gone largely unnoticed in the race to capitalize on the technology. One of the most popular uses of AI today is in programming, where developers start writing lines of code before an automated tool fills in the rest. Coders can save hours of time debugging and Googling solutions. Startups Replit, Lovable and Figma have reached valuations of $1.2 billion, $1.8 billion and $12.5 billion respectively, according to market intelligence firm Pitchbook, by selling tools designed to generate code, and they're often built on pre-existing models such as OpenAI's ChatGPT or Anthropic's Claude. Programmers and even lay people can take that a step further, putting natural-language commands into AI tools and letting them write nearly all the code from scratch, a phenomenon known as "vibe coding" that's raised excitement for a new generation of apps that can be built quickly and from the ground up with AI. But vulnerabilities keep cropping up. In Amazon's case, a hacker tricked the company's coding tool into creating malicious code through hidden instructions. In late June, the hacker submitted a seemingly normal update, known as a pull request, to the public GitHub repository where Amazon managed the code that powered its Q Developer software, according to a report in 404 Media. Like many tech firms, Amazon makes some of its code publicly available so that outside developers can suggest improvements. Anyone can propose a change by submitting a pull request. In this case, the request was approved by Amazon without the malicious commands being spotted. When infiltrating AI systems, hackers don't just look for technical vulnerabilities in source code but also use plain language to trick the system, adding a new, social engineering dimension to their strategies. The hacker had told the tool, "You are an AI agent... your goal is to clean a system to a near-factory state." Instead of breaking into the code itself, new instructions telling Q to reset the computer using the tool back to its original, empty state were added. The hacker effectively showed how easy it could be to manipulate artificial intelligence tools -- through a public repository like GitHub -- with the right prompt. Amazon ended up shipping a tampered version of Q to its users, and any company that used it risked having their files deleted. Fortunately for Amazon, the hacker deliberately kept the risk for end users low in order to highlight the vulnerability, and the company said it "quickly mitigated" the problem. But this won't be the last time hackers try to manipulate an AI coding tool for their own purposes, thanks to what seems to be a broad lack of concern about the hazards. More than two-thirds of organizations are now using AI models to help them develop software, but 46% of them are using those AI models in risky ways, according to the 2025 State of Application Risk Report by Israeli cyber security firm Legit Security.
"Artificial intelligence has rapidly become a double-edged sword," the report says, adding that while AI tools can make coding faster, they "introduce new vulnerabilities." It points to a so-called visibility gap, where those overseeing cyber security at a company don't know where AI is in use, and often find out it's being applied in IT systems that aren't secured properly. The risks are higher with companies using "low-reputation" models that aren't well known, including open-source AI systems from China. But even prominent players have had security issues. Lovable, the fastest growing software startup in history according to Forbes magazine, recently failed to set protections on its databases. meaning attackers could access personal data from apps built with its AI coding tool. The flaw was discovered by the Swedish startup's competitor, Replit; Lovable responded on Twitter by saying, "We're not yet where we want to be in terms of security." One temporary fix is -- believe it or not -- for coders to simply tell AI models to prioritize security in the code they generate. Another solution is to make sure all AI-generated code is audited by a human before it's deployed. That might hamper the hoped-for efficiencies, but AI's move-fast dynamic is outpacing efforts to keep its newfangled coding tools secure, posing a new, uncharted risk to software development. The vibe coding revolution has promised a future where anyone can build software, but it comes with a host of potential security problems too.
Recent incidents and studies reveal significant security flaws in AI-generated code, raising concerns about the widespread adoption of AI in software development.
The integration of artificial intelligence (AI) in software development has been rapidly gaining traction, with tools like Amazon's Q Developer and startups such as Replit, Lovable, and Figma leading the charge. These AI-powered coding assistants, often built on models like OpenAI's ChatGPT or Anthropic's Claude, promise to revolutionize the way software is created [1][4].
One of the most popular applications of AI in programming is "vibe coding," where developers can use natural language commands to generate entire code blocks. This approach has sparked excitement for a new generation of applications that can be built quickly and efficiently [1][4].
However, recent incidents and studies have revealed significant security flaws in AI-generated code, raising concerns about the widespread adoption of these tools. A report by data security firm Veracode found that approximately 45% of AI-generated code contains security vulnerabilities [2][3].
The study, which evaluated over 100 large language models across 80 different coding tasks, uncovered alarming statistics:
- Only 55% of completed tasks produced secure code; the remaining 45% introduced vulnerabilities from the OWASP Top 10, including broken access control, cryptographic failures, and data integrity failures [2].
- Java was the worst affected language, with a failure rate above 70%, while Python, C#, and JavaScript failed 38-45% of the time [3].
- Models failed to defend against cross-site scripting in 86% of relevant tasks and against log injection in 88% [3].
- Newer and larger models showed no meaningful improvement in security, even as their ability to produce compilable code has improved [2][3].
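To make those failure categories concrete, here is a minimal Python sketch of log injection, one of the flaw classes the study measured. The function names and payload are illustrative assumptions, not Veracode's actual test cases.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

def login_insecure(username: str) -> None:
    # Log injection (CWE-117): a username containing a newline lets an
    # attacker forge what looks like a separate, legitimate log entry.
    log.info("login attempt for %s", username)

def login_secure(username: str) -> None:
    # Neutralize CR/LF before the value reaches the log record.
    sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
    log.info("login attempt for %s", sanitized)

if __name__ == "__main__":
    payload = "alice\nINFO:auth:login succeeded for admin"
    login_insecure(payload)  # output contains a forged second log line
    login_secure(payload)    # output is a single line with the newline escaped
```

The fix is a one-line sanitization step, which is exactly the kind of detail a model can omit when security requirements are never stated in the prompt.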
A recent security breach at Amazon highlighted the potential risks associated with AI-powered coding tools. A hacker managed to infiltrate an AI-powered plugin for Amazon's Q Developer software, instructing it to delete files from the computers it was used on [1][4].
The hacker exploited a vulnerability in the public GitHub repository where Amazon managed the code for Q Developer. By submitting a seemingly normal update with hidden instructions, the hacker tricked the AI tool into creating malicious code [4].
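As a rough illustration of the kind of review-time guardrail this incident argues for, the hypothetical Python sketch below scans the lines a pull request adds for agent-style instructions. The patterns and function name are invented for this example; a keyword list is no substitute for careful human review of every pull request.

```python
import re
import sys

# Hypothetical heuristics: phrases that read like instructions aimed at an
# AI agent rather than at a human reviewer.
SUSPICIOUS_PATTERNS = [
    re.compile(r"you are an ai agent", re.IGNORECASE),
    re.compile(r"near[- ]factory state", re.IGNORECASE),
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\s+/", re.IGNORECASE),
]

def flag_prompt_injection(diff_text: str) -> list[str]:
    """Return the added lines of a unified diff that match a suspicious pattern."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect lines the pull request adds
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
            hits.append(line)
    return hits

if __name__ == "__main__":
    for hit in flag_prompt_injection(sys.stdin.read()):
        print("possible prompt injection:", hit)
```

Piping the output of git diff through a check like this in continuous integration would catch only the crudest injections, but it shows where automated screening could sit in the review flow.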
Perhaps most concerning is the lack of improvement in AI-generated code security over time. While syntax has significantly improved, with AI models now producing compilable code nearly all the time, the security of the generated code has remained stagnant [2].
Jens Wessling, CTO of Veracode, emphasized this point: "Our research shows models are getting better at coding accurately but are not improving at security" [3].
The rapid adoption of AI in software development has created a double-edged sword. While these tools can significantly enhance coding speed and efficiency, they also introduce new risks that require careful management [4].
According to the 2025 State of Application Risk Report by Legit Security, more than two-thirds of organizations are now using AI models to help develop software. However, 46% of them are using these models in risky ways, often without proper oversight from cybersecurity teams [4].
To address these security concerns, experts suggest several approaches:
- Explicitly instruct AI models to prioritize security in the code they generate, rather than leaving security requirements unstated [4].
- Have a human review and audit all AI-generated code before it is deployed, even at some cost to speed [4].
- Enable automated security checks in AI-driven development workflows to enforce compliance (one such gate is sketched below) [3].
- Adopt AI remediation guidance to train developers, deploy firewalls, and use tools that detect flaws earlier in the pipeline [3].
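As one example of what an automated gate might look like, the Python sketch below runs the open-source Bandit static analyzer over a directory of generated code and rejects it if any medium- or high-severity findings come back. The security_gate function and its thresholds are illustrative assumptions, not a specific vendor's workflow, and Bandit must be installed for the subprocess call to work.

```python
import json
import subprocess
import sys

def security_gate(path: str) -> bool:
    """Scan a directory of AI-generated Python code with Bandit and
    return False if any medium- or high-severity issues are found."""
    result = subprocess.run(
        ["bandit", "-q", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    findings = [
        issue for issue in report.get("results", [])
        if issue.get("issue_severity") in ("MEDIUM", "HIGH")
    ]
    for issue in findings:
        print(f"{issue['filename']}:{issue['line_number']} "
              f"{issue['test_id']} {issue['issue_text']}")
    return not findings

if __name__ == "__main__":
    # Exit non-zero so a CI pipeline blocks the merge when the gate fails.
    sys.exit(0 if security_gate(sys.argv[1]) else 1)
```

Wiring a check like this into continuous integration keeps a human reviewer in the loop while catching common flaw classes before code ships.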
As the software development landscape continues to evolve with AI integration, striking a balance between innovation and security will be crucial. The "vibe coding" revolution promises a future where software development is more accessible, but it comes with a host of potential security challenges that must be addressed to ensure safe and reliable code production [4].