AI coding tools boost output tenfold but create massive backlogs and security vulnerabilities

Reviewed by Nidhi Govil


Companies using AI coding tools like Cursor are seeing code production surge from 25,000 to 250,000 lines per month. But this productivity boom has created a critical problem: one financial firm now faces a backlog of one million lines of unreviewed code. The hidden cost of AI coding includes increased security risks, overwhelmed review teams, and a shortage of application security engineers to handle the code glut.


AI Coding Tools Transform Output But Create Unprecedented Backlogs

The rapid adoption of AI coding tools is reshaping software development in ways companies didn't anticipate. One financial services company experienced this firsthand after implementing Cursor, a popular AI-assisted coding platform. Their monthly code production skyrocketed from 25,000 to 250,000 lines, a tenfold increase that sounds impressive on paper [2]. The reality, however, tells a different story. The surge created a backlog of one million lines of unreviewed code, leaving the company struggling to manage the sheer volume [1].

"The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can't keep up with," explained Joni Klippert, CEO of StackHawk, a security startup working with the financial firm [2]. The accelerated output created "a lot of stress" in other departments, including sales and marketing support [1]. This phenomenon isn't isolated. Across Silicon Valley, organizations are producing more AI-generated code than they have personnel to review, creating a widening gap that poses significant operational challenges.

The Hidden Cost of AI Coding and Growing Security Vulnerabilities

The productivity gains from AI coding tools come with substantial hidden costs. While AI-generated code enables faster development, it introduces serious security vulnerabilities that companies must address. Amazon and Meta both recently experienced disruptions after AI tools took unauthorized actions, highlighting the real-world consequences [1]. In Amazon's case, an AI code error resulted in more than 100,000 lost orders and 1.6 million errors [2].

The problem extends beyond simple errors. David Loker, head of AI for CodeRabbit, a company specializing in code quality improvement, noted that "AI systems don't make typos in the way we make typos, but they make a lot of mistakes across the board, with readability and maintainability of the code chief among them" [3]. AI coding systems often fail to understand entire code repositories as comprehensively as experienced developers do, leading to duplicated functionality and inconsistent business logic [3].

Code Review Challenges and the Shortage of Application Security Engineers

The surge in AI-assisted coding has exposed a critical shortage of qualified reviewers. "There are not enough application security engineers on the planet to satisfy what just American companies need," said Joe Sullivan, an adviser to Costanoa Ventures [1]. This staffing crisis compounds code review challenges as organizations struggle to validate the mounting volume of AI-generated output.

Jack Cable, CEO and co-founder of cybersecurity consulting firm Corridor, explained the mathematical problem: "Even if [a large language model] is better at writing code line by line, if it's writing 20 times as much code as a human would be, there is significantly more code to be reviewed" [3]. Companies must review this code from functionality, code quality, and security perspectives, a task that only gets harder as volume grows. The situation creates what Cable describes as "an explosion in complexity," and "with increased complexity comes increased attack surface and vulnerability" [3].

AI Slop Meets Software Development

The code glut represents software development's encounter with AI slop, the flood of low-quality, unwanted AI outputs that plagued other domains in 2024. The concept gained prominence as AI systems became capable enough to churn out volumes of subpar content, from AI-generated photos to unhelpful search results. Now the phenomenon has reached coding. In late January, leading developer Daniel Stenberg was forced to shut down a popular bug-finding effort due to "never-ending slop submissions" that took "a serious mental toll to manage" and wasted time that "hampered our will to live" [3].

The rise of low-quality software extends beyond individual frustration. Security researchers identified critical vulnerabilities in Moltbook, a social network created using AI coding systems, that exposed human users' credentials [3]. These real-world failures demonstrate how increased security risks accompany the productivity promises of AI coding tools.

Developer Productivity Versus Programmer Burnout

"The initial push of all this was developer productivity," Loker explained. "It was about increasing the throughput in terms of feature generation, the ability to build fast and ship things" [3]. Yet this productivity boost has created an unexpected paradox. Software engineers report that being expected to produce more code while constantly supervising AI tools is accelerating programmer burnout, a phenomenon documented in emerging research dubbed AI "brain fry" [1].

The workplace impact extends to employment itself. AI has been cited in announcements of more than 54,000 layoffs last year, with major tech companies such as Jack Dorsey's fintech firm Block and software giant Atlassian eliminating thousands of positions while touting pivots to AI [1]. Meanwhile, AI simultaneously creates more work requiring human review, leaving it unclear who should handle the increased workload.

Companies Respond with Human Review Requirements and More AI

Organizations are adopting varied approaches to manage the challenge. Sachin Kamdar of AI agent startup Elvix takes a hardline stance: all code must undergo human review, because fixing problems later becomes harder if no one understands what the AI created. "It's just going to break something, and they're not going to know why it broke," he warned [1].

Others are deploying AI to solve AI-created problems. Anthropic and OpenAI have released AI agents designed specifically for code review, and in December Cursor acquired Graphite, a startup building an AI code-reviewing platform [1]. As Cursor's head of engineering put it, "The software development factory kind of broke. We're trying to rearrange the parts in some sense" [2]. Whether AI-powered review tools can adequately catch errors in AI-generated code remains uncertain, particularly as code maintainability and long-term system stability hang in the balance.
