AI-Assisted Coding Boosts Productivity but Multiplies Security Risks, Study Finds

Reviewed by Nidhi Govil


A new study by Apiiro reveals that AI-assisted developers produce more code but introduce significantly more security issues, raising concerns about the widespread adoption of AI coding tools in the tech industry.

AI Coding Assistants: A Double-Edged Sword for Developers

In a groundbreaking study, application security firm Apiiro has uncovered startling insights into the impact of AI coding assistants on software development. The research, which analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises, reveals a significant increase in both productivity and security risks associated with AI-assisted coding [1].

Source: TechRadar

Productivity Boost Comes at a Security Cost

Apiiro's findings show that developers using AI assistants like Anthropic's Claude Code, OpenAI's GPT-5, and Google's Gemini 2.5 Pro produce three to four times more code than their unassisted counterparts. However, this increased productivity comes with a concerning caveat: AI-assisted developers generate ten times more security issues [2].

The Nature of Security Risks

The security issues identified in the study encompass a broad range of application risks, including:

  1. Added open source dependencies
  2. Insecure code patterns
  3. Exposed secrets
  4. Cloud misconfigurations
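
To illustrate the "exposed secrets" category above, here is a minimal sketch of the kind of hardcoded-credential pattern that secret scanners flag, next to the conventional fix of reading the credential from the environment. The regex, the placeholder key, and the helper function are illustrative assumptions for this sketch, not Apiiro's actual detection rules.

```python
import re

# Illustrative rule resembling an AWS access key ID ("AKIA" followed by
# 16 uppercase alphanumerics); real scanners maintain many such patterns.
SECRET_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_exposed_secrets(source: str) -> list[str]:
    """Return substrings of the source text that look like hardcoded keys."""
    return SECRET_PATTERN.findall(source)

# Risky: a credential committed directly in source code (placeholder key).
risky_snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'

# Safer: the credential is supplied via the environment at runtime.
safe_snippet = 'aws_key = os.environ.get("AWS_ACCESS_KEY_ID")'

print(find_exposed_secrets(risky_snippet))  # the hardcoded key is flagged
print(find_exposed_secrets(safe_snippet))   # nothing to flag
```

The same principle applies to the other categories: the risk is not exotic code, but routine patterns (a pasted key, a permissive config) repeated at the higher volume AI assistants make possible.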

By June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a tenfold increase from December 2024 [1].

Source: Futurism

The Paradox of AI-Assisted Coding

Interestingly, AI coding assistants demonstrate both positive and negative impacts on code quality:

  • Reduced syntax errors by 76%
  • Decreased logic bugs by 60%
  • Increased privilege escalation paths by 322%
  • Increased architectural design flaws by 153%

Itay Nussbaum, Apiiro's product manager, succinctly summarized this paradox: "AI is fixing the typos but creating the timebombs" [3].

Challenges in Code Review and Cloud Security

The study also highlighted additional challenges posed by AI-assisted coding:

  1. AI tends to pack more code into fewer pull requests, complicating code reviews.
  2. AI-assisted developers exposed sensitive cloud credentials and keys nearly twice as often as their unassisted peers [1].

Industry Implications and Recommendations

Source: The Register

With companies like Coinbase, Shopify, and Duolingo mandating AI use for their developers, the security risks identified in this study have far-reaching implications for the tech industry [3].

Nussbaum emphasizes the need for a balanced approach: "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity" [1].

Corroborating Research and Future Outlook

The findings align with other recent studies, including research from the University of San Francisco, the Vector Institute for Artificial Intelligence, and the University of Massachusetts Boston, which found that iterative AI code improvements can degrade security [1].

As AI integration in coding continues to accelerate, the tech industry faces the challenge of harnessing AI's productivity benefits while mitigating the associated security risks. This balancing act will likely shape the future of software development practices and security protocols in the coming years.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited