AI coding tools boost productivity 10x, but create massive security and review challenges

AI coding tools have supercharged developer productivity, with some companies seeing output jump from 25,000 to 250,000 lines of code per month. But this acceleration comes with a hidden cost: massive backlogs of unreviewed code, heightened security vulnerabilities, and a shortage of application security engineers to manage the flood of AI-generated software.

AI Coding Tools Drive Unprecedented Output Surge

AI coding has transformed software development at a pace few anticipated. According to The New York Times [1], one financial services company using Cursor, an AI coding tool, saw its monthly output explode from 25,000 to 250,000 lines of code. This tenfold increase in developer productivity sounds impressive on paper, but it created a backlog of one million lines of unreviewed code that the company struggles to manage [1].

The phenomenon extends well beyond a single company. AI-assisted coding—sometimes called "vibe coding"—now enables people without significant coding experience to build websites and apps by simply instructing chatbots [2]. Inside leading AI companies like Anthropic and OpenAI, most coding is performed by AI systems themselves. Boris Cherny, Anthropic's head of Claude Code, revealed that AI has written 100% of his code since at least December, noting "I don't even make small edits by hand" [2].

The Hidden Cost of AI-Assisted Coding Creates Review Bottlenecks

This acceleration has exposed a critical weakness in the software development pipeline: code review. "The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can't keep up with," said Joni Klippert, CEO of StackHawk, a security startup working with the affected financial firm [1]. The problem has spread across Silicon Valley, where companies now produce more code than they have people to review, creating a growing gap that poses increased security risks.

Application security engineers—the professionals responsible for catching errors in AI-generated code—are in critically short supply. "There are not enough application security engineers on the planet to satisfy what just American companies need," said Joe Sullivan, an adviser to Costanoa Ventures [1]. Jack Cable, CEO and co-founder of cybersecurity consulting firm Corridor, emphasized the magnitude of the issue: "Even if [a large language model] is better at writing code line by line, if it's writing 20 times as much code as a human would be, there is significantly more code to be reviewed" [2].

Error-Prone Code and Software Security Vulnerabilities Mount

The quality issues extend beyond sheer volume. David Loker, head of AI for CodeRabbit, a company that helps software engineers review and improve code quality, explained that "AI systems don't make typos in the way we make typos, but they make a lot of mistakes across the board, with readability and maintainability of the code chief among them" [2]. AI coding systems often fail to understand entire repositories as fully as experienced human developers do, leading to duplicated functionality and inconsistent business logic across different parts of applications [2].

The consequences of heightened security vulnerabilities can be severe. Recently, AI-generated code caused an Amazon outage that resulted in over 100,000 lost orders and 1.6 million errors [1]. In January, developer Matt Schlicht used AI coding systems to create Moltbook, a social network for AI systems now owned by Meta, but security researchers soon identified critical vulnerabilities that exposed human users' credentials [2].

Cable warned that AI coding agents are producing "an explosion in complexity," adding that "if there's one thing we know about software, it's that with increased complexity comes increased attack surface and vulnerability" [2].

AI Slop Phenomenon Spreads to Software Development

The concept of AI slop—originally popularized in 2024 to describe volumes of low-quality, unwanted AI outputs—has now infiltrated software development [2]. Leading developer Daniel Stenberg was forced to shutter a popular effort to find bugs in open-source software in late January because "the never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk" [2]. Though Stenberg later noted the flood "has transitioned from an AI slop tsunami into more of a ... plain security report tsunami," the volume remains overwhelming [2].

Source: NBC

Companies are discovering that boosted quantity does not automatically increase quality—often the opposite proves true. "The initial push of all this was developer productivity," Loker explained. "It was about increasing the throughput in terms of feature generation, the ability to build fast and ship things" [2]. Yet this focus on speed has created downstream problems that threaten code maintainability and application security.

Industry Turns to AI Code Reviewers Despite Skepticism

Predictably, Silicon Valley believes more AI is the solution. Companies like Anthropic, OpenAI, and Cursor are building AI-powered review tools to catch errors in AI-generated code. Cursor acquired a code-reviewing startup to integrate this capability into its product. As Cursor's head of engineering put it, "The software development factory kind of broke. We're trying to rearrange the parts in some sense" [1].

Yet skepticism remains about whether AI code reviewers can fully address the problem. While AI systems are getting better at reviewing their own code and finding security vulnerabilities, human review will likely remain necessary before releasing software to production [1]. The stakes are high: companies must review code from functionality, quality, and cybersecurity perspectives to avoid catastrophic failures [2].

Another complication: AI coding tools work better on personal laptops than on secure company servers, so software engineers are downloading entire codebases onto personal devices. If a laptop goes missing, so does sensitive data [1]. This practice compounds the security risks already inherent in rapid AI-assisted development.
