Study Reveals AI Coding Tools Slow Down Experienced Developers in Complex Projects

Reviewed by Nidhi Govil

A new study by METR finds that AI coding tools unexpectedly increased task completion time by 19% for experienced open-source developers, contradicting expectations of increased efficiency.

AI Coding Tools Unexpectedly Slow Down Experienced Developers

A groundbreaking study conducted by Model Evaluation and Threat Research (METR) has revealed that AI coding tools, contrary to popular belief, can significantly slow down experienced software developers working on complex projects. The research, which involved 16 seasoned open-source developers completing 246 real-world tasks, found that using AI tools increased task completion time by 19% [1].

Source: Inc. Magazine

Study Methodology and Surprising Results

The randomized controlled trial focused on developers with multiple years of experience working on specific open-source repositories. Tasks included bug fixes, feature implementations, and refactoring work. Half of the tasks were completed using AI tools like Cursor Pro or Anthropic's Claude, while the other half were done without AI assistance [2].

Developers initially expected the AI tools to reduce their task completion time by 24%, and even after completing the tasks they believed the AI had made them 20% faster. The measured results, however, showed a 19% increase in completion time when AI tools were used [3].

Factors Contributing to the Slowdown

Source: TIME

The study identified several factors contributing to the unexpected slowdown:

  1. Time spent reviewing AI outputs: Developers accepted less than 44% of AI-generated code without modification, spending significant time reviewing and correcting suggestions [4].

  2. Prompting and waiting: Considerable time was spent crafting prompts for AI systems and waiting for responses.

  3. Complexity of existing codebases: The AI tools struggled with large, mature repositories averaging 10 years of age and over 1 million lines of code [1].

  4. High quality standards: The repositories had very high quality bars for code contributions, limiting AI's effectiveness [5].

  5. Lack of contextual understanding: AI tools couldn't utilize important tacit knowledge or context about the codebase that human developers possessed [1].

Implications and Future Outlook

The study's findings challenge the widespread belief that AI tools universally enhance coding productivity. However, the researchers caution against broad generalizations, noting that the results may not apply to all software development scenarios [2].

Source: Ars Technica

The authors remain optimistic about the future of AI in coding. They suggest that improvements in reliability, latency, and output relevance could lead to efficiency gains. There is already preliminary evidence that newer AI models, such as Claude 3.7, show promise in correctly implementing core functionality for some of the studied repositories [1].

Broader Context and Industry Impact

This study adds to a growing body of research examining the real-world impact of AI tools on productivity. While some studies have shown significant gains in coding efficiency, others have found that AI can introduce mistakes and even security vulnerabilities [2].

The findings may have implications for companies investing heavily in AI-powered coding tools and for predictions about AI's impact on the job market. It is worth noting, however, that the study examined a specific scenario: experienced developers working on complex, established codebases [5].
