3 Sources
[1]
AI code assistants improve production of security problems
AI coding assistants allow developers to move fast and break things, which may not be ideal. Application security firm Apiiro says it analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises to better understand the impact of AI code assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro.

AI is fixing the typos but creating the timebombs

The firm found that AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues. "Security issues" here doesn't mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations.

As of June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a 10x increase from December 2024, the biz said.

"AI is multiplying not one kind of vulnerability, but all of them at once," said Apiiro product manager Itay Nussbaum in a blog post. "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity."

The AI assistants generating code for the repos in question also tended to pack more code into fewer pull requests, making code reviews more complicated because the proposed changes touch more parts of the codebase. In one instance, Nussbaum said, an AI-driven pull request altered an authorization header across multiple services, and when a downstream service wasn't updated, that created a silent authentication failure.

The AI code helpers aren't entirely without merit. They reduced syntax errors by 76 percent and logic bugs by 60 percent, but at a greater cost: a 322 percent increase in privilege escalation paths and a 153 percent increase in architectural design flaws. "In other words, AI is fixing the typos but creating the timebombs," said Nussbaum.

Apiiro's analysis also found that developers relying on AI help exposed sensitive cloud credentials and keys nearly twice as often as their DIY colleagues.

The firm's findings echo the work of other researchers. For example, in May 2025, computer scientists from the University of San Francisco, the Vector Institute for Artificial Intelligence (Canada), and the University of Massachusetts Boston determined that allowing AI models to iteratively improve code samples degrades security. This shouldn't be surprising given that AI models ingest vulnerabilities in training data and tend to repeat those flaws when generating code. At the same time, AI models are being used to find zero-day vulnerabilities in Android apps.

Apiiro's observation about AI-assisted developers producing code faster than those without appears to contradict recent research from Model Evaluation & Threat Research (METR), which found AI coding tools made software developers slower. It may be, however, that Apiiro is counting only the time required to generate code, not the time required to iron out the flaws. Apiiro, based in Israel, wasn't immediately available to respond. ®
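To make the header-change anecdote above concrete, here is a minimal, hypothetical Python sketch of a "silent authentication failure." The service functions, header names, and token logic are invented for illustration and are not from Apiiro's report: an upstream service is updated to send credentials under a new header name, while a downstream service still reads the old one and treats its absence as "nothing to check."

```python
# Hypothetical sketch of a silent authentication failure after a
# multi-service header rename. All names are invented for illustration.

# Upstream service (updated by the AI-generated pull request):
# it now sends credentials under a new header name.
def build_upstream_request(token: str) -> dict:
    return {"X-Service-Auth": f"Bearer {token}"}  # renamed header

# Downstream service (NOT updated): it still looks for the old header,
# and treats a missing header as "no auth required" instead of rejecting.
def downstream_handler(headers: dict) -> str:
    token = headers.get("Authorization")  # old header name
    if token is None:
        # Bug: the missing header is silently ignored rather than
        # rejected, so the authentication check never runs.
        return "ok (unauthenticated!)"
    return "ok" if token == "Bearer expected-token" else "denied"

if __name__ == "__main__":
    request_headers = build_upstream_request("expected-token")
    # Prints "ok (unauthenticated!)": the request succeeds even though
    # no credential was actually verified.
    print(downstream_handler(request_headers))
```

The failure is "silent" because nothing errors out: every request succeeds, and the broken check only surfaces if someone notices that unauthenticated traffic is being served.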
[2]
AI is creating code faster - but this also means more potential security issues
AI is 4x quicker than humans, and can also reduce syntax errors and logic bugs

New research has claimed that despite its promised advances in helping developers code more quickly and effectively, the use of AI tools in coding may be throwing up security issues across the board.

Apiiro has quantified the vulnerabilities that companies could be exposing themselves to by using AI-generated code, finding AI-assisted developers were able to write 3-4x more code than their peers; however, their code introduced 10x more security issues.

The vulnerabilities varied in severity, and while they weren't always exploitable bugs, code quality still saw a noticeable drop. Apiiro found insecure patterns, exposed secrets, new dependencies, and cloud misconfigurations across AI-generated code, as well as the exploitable bugs that pose the biggest risk.

By June 2025, AI-generated code was throwing up 10,000 new security findings per month, a tenfold increase in the six months from December 2024.

However, artificial intelligence does have many clear benefits, too. According to the data, syntax errors in AI-written code dropped by 76% and logic bugs fell by over 60%. Conversely, privilege escalation paths surged by 322% and architectural design flaws rose by 153% - issues that reviewers struggle to spot. "AI is fixing the typos but creating the timebombs," the researchers summarized.

Apiiro also noted AI-assisted developers exposed sensitive keys nearly twice as often as their unassisted peers. "Because assistants generate large, multi-file changes, a single credential can be propagated across multiple services or configs before anyone notices," the firm explains.

All of this comes in an era when AI coding is actually being mandated - not just supported - by companies like Coinbase and Citi. Big Tech leaders also indicate that around one-third or more of their new code is AI-generated.

Companies should consider implementing further safeguards whenever they use AI-generated code; the research also serves as a reminder that human oversight, logic, and experience cannot be overlooked.
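One safeguard that follows from the credential-propagation problem described above is scanning every file a change touches before it merges. The following is a rough Python sketch of such a pre-merge secret scan, not anything Apiiro describes; the file paths and key patterns are hypothetical examples:

```python
# Minimal sketch of a pre-merge secret scan over changed files.
# The file list and credential patterns below are hypothetical.
import re
from pathlib import Path

# Simplified patterns for common credential formats (illustrative only).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-access-key-ID style string
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]\w{16,}['\"]"),
]

def scan_file(path: Path) -> list[str]:
    """Return secret-like strings found in a single file."""
    findings: list[str] = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings  # unreadable or missing file: nothing to report
    for pattern in SECRET_PATTERNS:
        findings.extend(pattern.findall(text))
    return findings

def scan_changed_files(paths: list[str]) -> dict[str, list[str]]:
    """Scan every file touched by a change. A large multi-file diff is
    exactly where one pasted credential shows up in many places at once."""
    return {p: found for p in paths if (found := scan_file(Path(p)))}

if __name__ == "__main__":
    # In practice the list would come from e.g. `git diff --name-only`.
    changed = ["services/billing/config.py", "services/auth/config.py"]
    for filename, hits in scan_changed_files(changed).items():
        print(f"{filename}: possible secret(s) {hits}")
```

Real deployments would use a maintained scanner with far more robust detection, but the point stands: the check has to run per changed file, because assistant-generated diffs spread a single leaked key across many of them.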
[3]
Programmers Using AI Create Way More Glaring Security Issues, Data Shows
As a security firm called Apiiro found in new research, developers who used AI produce ten times more security problems than their counterparts who don't use the technology.

Looking at code from thousands of developers and tens of thousands of repositories, Apiiro found that AI-assisted devs were indeed producing three or four times more code -- and as the firm's product manager Itay Nussbaum suggested, that breakneck pace seems to be causing the security gaps. "AI is multiplying not one kind of vulnerability," Nussbaum wrote, "but all of them at once."

Ironically, some of the "benefits" of AI coding appear to be the vehicles for these issues. Apiiro found that syntax errors fell 76 percent and logic bugs -- faulty code that causes a program to operate incorrectly -- were down 60 percent. The tradeoff has, however, been severe: privilege escalation, or code that allows an attacker to gain higher access to a system than they should, increased by a staggering 322 percent. Architectural design problems, meanwhile, were up 153 percent.

"In other words," Nussbaum wrote, "AI is fixing the typos but creating the timebombs."

It's not exactly a huge surprise that AI coding produces security risks. As flagged by The Register, researchers from the University of San Francisco, Canada's Vector Institute for Artificial Intelligence, and the University of Massachusetts Boston found in a recent but not-yet-peer-reviewed study that AI coding "improvements" end up majorly degrading security overall.

Still, this new data shows just how massive the problem is. With companies like Coinbase, Shopify, and Duolingo now mandating AI use for their workers, this issue is not only creating way more security vulnerabilities, but also more work for those tasked with fixing them.

It's pretty clear that AI's biggest "transformation" in the workplace so far has been in creating more fixes for remaining human workers to sift through. Still, the integration of AI into coding -- and writing, and audio, and video -- does not appear to be slowing down, which means the problem will likely get much worse before it gets better.
A new study by Apiiro reveals that AI-assisted developers produce more code but introduce significantly more security issues, raising concerns about the widespread adoption of AI coding tools in the tech industry.
In a groundbreaking study, application security firm Apiiro has uncovered startling insights into the impact of AI coding assistants on software development. The research, which analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises, reveals a significant increase in both productivity and security risks associated with AI-assisted coding [1].
Apiiro's findings show that developers using AI assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro produce three to four times more code than their unassisted counterparts. However, this increased productivity comes with a concerning caveat: AI-assisted developers generate ten times more security issues [2].

The security issues identified in the study encompass a broad range of application risks, including:
- Insecure code patterns
- Exposed secrets and credentials
- Newly added open source dependencies
- Cloud misconfigurations
By June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a tenfold increase from December 2024 [1].
Interestingly, AI coding assistants demonstrate both positive and negative impacts on code quality:
- Syntax errors dropped by 76 percent
- Logic bugs fell by roughly 60 percent
- Privilege escalation paths surged by 322 percent
- Architectural design flaws rose by 153 percent

Itay Nussbaum, Apiiro's product manager, succinctly summarized this paradox: "AI is fixing the typos but creating the timebombs" [3].
The study also highlighted additional challenges posed by AI-assisted coding:
- AI assistants pack more code into fewer pull requests, making reviews harder because the changes touch more parts of the codebase
- Large, multi-file changes can propagate a single exposed credential across multiple services or configs before anyone notices
- AI-assisted developers exposed sensitive cloud credentials and keys nearly twice as often as their unassisted peers [1]
With companies like Coinbase, Shopify, and Duolingo mandating AI use for their developers, the security risks identified in this study have far-reaching implications for the tech industry [3].

Nussbaum emphasizes the need for a balanced approach: "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity" [1].
The findings align with other recent studies, including research from the University of San Francisco, the Vector Institute for Artificial Intelligence, and the University of Massachusetts Boston, which found that iterative AI code improvements can degrade security [1].
.As AI integration in coding continues to accelerate, the tech industry faces the challenge of harnessing AI's productivity benefits while mitigating the associated security risks. This balancing act will likely shape the future of software development practices and security protocols in the coming years.
Summarized by Navi