2 Sources
[1]
AI code assistants improve production of security problems
AI coding assistants allow developers to move fast and break things, which may not be ideal. Application security firm Apiiro says that it analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises, to better understand the impact of AI code assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro.

AI is fixing the typos but creating the timebombs

The firm found that AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues. "Security issues" here doesn't mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations.

As of June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a 10x increase from December 2024, the biz said.

"AI is multiplying not one kind of vulnerability, but all of them at once," said Apiiro product manager Itay Nussbaum in a blog post. "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity."

The AI assistants generating code for the repos in question also tended to pack more code into fewer pull requests, making code reviews more complicated because the proposed changes touch more parts of the codebase. In one instance, Nussbaum said, an AI-driven pull request altered an authorization header across multiple services, and when a downstream service wasn't updated, that created a silent authentication failure.

The AI code helpers aren't entirely without merit. They reduced syntax errors by 76 percent and logic bugs by 60 percent, but at a greater cost - a 322 percent increase in privilege escalation paths and a 153 percent increase in architectural design flaws. "In other words, AI is fixing the typos but creating the timebombs," said Nussbaum.

Apiiro's analysis also found that developers relying on AI help exposed sensitive cloud credentials and keys nearly twice as often as their DIY colleagues.

The firm's findings echo the work of other researchers. For example, in May 2025, computer scientists from the University of San Francisco, the Vector Institute for Artificial Intelligence (Canada), and the University of Massachusetts Boston determined that allowing AI models to iteratively improve code samples degrades security. This shouldn't be surprising, given that AI models ingest vulnerabilities in training data and tend to repeat those flaws when generating code. At the same time, AI models are being used to find zero-day vulnerabilities in Android apps.

Apiiro's observation about AI-assisted developers producing code faster than those without appears to contradict recent research from Model Evaluation & Threat Research (METR) that found AI coding tools made software developers slower. It may be, however, that Apiiro is counting only the time required to generate code, not the time required to iron out the flaws.

Apiiro, based in Israel, wasn't immediately available to respond. ®
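To make the "silent authentication failure" scenario above concrete, here is a minimal, hypothetical Python sketch of how a renamed authorization header can break a call chain without anyone noticing. The service roles, header names, and token values are illustrative assumptions, not details from Apiiro's report.

```python
# Hypothetical sketch of a silent authentication failure after a multi-service
# header rename. All names and values here are invented for illustration.

def downstream_handler(headers: dict) -> dict:
    # This service was never updated: it still validates the legacy header.
    if headers.get("X-Auth-Token") != "expected-service-token":
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": "ok"}

def updated_caller() -> dict:
    # The sweeping change switched every service it touched to a bearer-style header...
    headers = {"Authorization": "Bearer expected-service-token"}
    response = downstream_handler(headers)
    # ...and swallows non-200 responses, so the rejection never surfaces.
    if response["status"] != 200:
        pass  # error is ignored instead of logged or raised
    return response

if __name__ == "__main__":
    print(updated_caller())  # {'status': 401, ...} -- auth quietly fails
```

Nothing crashes here, which is the point: the downstream 401 only becomes visible once someone notices requests being quietly denied.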
[2]
AI is creating code faster - but this also means more potential security issues
AI is 4x quicker than humans, and can also reduce syntax errors and logic bugs

New research has claimed that despite its promised advances in helping code quicker and more effectively, the use of AI tools in coding may be throwing up security issues across the board.

Apiiro has quantified the vulnerabilities that companies could be exposing themselves to by using AI-generated code, finding AI-assisted developers were able to write 3-4x more code than their peers; however, their code introduced 10x more security issues.

The vulnerabilities varied in severity, and while they weren't always exploitable bugs, code quality still saw a noticeable drop. Apiiro found insecure patterns, exposed secrets, new dependencies, and cloud misconfigurations across AI-generated code, as well as the exploitable bugs that pose the biggest risk. By June 2025, AI-generated code was throwing up 10,000 new security findings per month, a tenfold increase in the six months from December 2024.

However, artificial intelligence does have many clear benefits, too. According to the data, syntax errors in AI-written code dropped by 76% and logic bugs fell by over 60%. Conversely, privilege escalation paths surged by 322% and architectural design flaws rose by 153% - issues that reviewers struggle to spot. "AI is fixing the typos but creating the timebombs," the researchers summarized.

Apiiro also noted AI-assisted developers exposed sensitive keys nearly twice as often as their unassisted peers. "Because assistants generate large, multi-file changes, a single credential can be propagated across multiple services or configs before anyone notices," the firm explains.

All of this comes in an era when AI coding is actually being mandated - not just supported - by companies like Coinbase and Citi. Big Tech leaders also indicate that around one-third or more of their new code is AI-generated.

Companies should consider implementing further safeguards whenever they use AI-generated code, and the research serves as a reminder that human oversight, logic, and experience cannot be overlooked.
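One concrete safeguard of the kind the article alludes to is scanning AI-generated changes for hard-coded credentials before they merge. The sketch below is a minimal, hypothetical pre-commit-style check in Python; the two patterns (an AWS-style access key ID and generic key = "value" assignments) are simplified assumptions, and purpose-built secret scanners cover far more cases.

```python
# Minimal, illustrative secret scan for changed files.
# The patterns are deliberately simplistic; real scanners use much larger rule sets.

import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{12,}['\"]"),
]

def scan_file(path: str) -> list[str]:
    """Return a list of 'file:line' findings for suspected hard-coded secrets."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            if any(pattern.search(line) for pattern in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings

if __name__ == "__main__":
    results = [finding for arg in sys.argv[1:] for finding in scan_file(arg)]
    print("\n".join(results) or "no findings")
    sys.exit(1 if results else 0)  # non-zero exit can fail a commit hook or CI job
```

Run against the files touched by a change (for example, the output of git diff --name-only), the non-zero exit code can block a commit or fail a CI job before a propagated credential reaches multiple services or configs.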
A new study by Apiiro reveals that AI-assisted developers produce code faster but introduce significantly more security issues, highlighting the need for enhanced security measures in AI-driven software development.
In a groundbreaking study, application security firm Apiiro has uncovered both the promises and perils of AI-assisted coding. The research, which analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises, reveals a significant boost in productivity coupled with an alarming increase in security risks [1].
AI-assisted developers are producing code at an unprecedented rate, outpacing their unassisted counterparts by three to four times. However, this productivity boost comes at a cost: these developers are also generating ten times more security issues [1]. Itay Nussbaum, Apiiro's product manager, warns, "AI is multiplying not one kind of vulnerability, but all of them at once" [1].

The "security issues" identified in the study encompass a broad spectrum of application risks, including:
- Added open source dependencies
- Insecure code patterns
- Exposed secrets and cloud credentials
- Cloud misconfigurations [1]
By June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a tenfold increase from December 2024 [1][2].

While AI code helpers have shown remarkable improvements in certain areas, they've also introduced new challenges:
Improvements:
- Syntax errors reduced by 76 percent
- Logic bugs reduced by roughly 60 percent

New Challenges:
- Privilege escalation paths up 322 percent
- Architectural design flaws up 153 percent
Nussbaum succinctly summarizes this paradox: "AI is fixing the typos but creating the timebombs" [1][2].
The study also highlights how AI-generated code complicates the review process. AI assistants tend to pack more code into fewer pull requests, making reviews more challenging as proposed changes touch multiple parts of the codebase. In one instance, an AI-driven pull request altered an authorization header across multiple services, leading to a silent authentication failure when a downstream service wasn't updated [1].
As companies like Coinbase and Citi mandate AI coding, and Big Tech leaders report that around one-third or more of their new code is AI-generated, the need for robust security measures becomes paramount. Apiiro's findings suggest that companies embracing AI-assisted coding must also implement AI-driven application security measures to balance productivity gains with potential risks [2].
Nussbaum advises, "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity" [1].

As the software development landscape evolves with AI, it's clear that human oversight, logic, and experience remain crucial in maintaining code quality and security. The challenge now lies in harnessing the productivity benefits of AI while mitigating the associated security risks.
Summarized by Navi