Claude AI faces mounting complaints over performance degradation as users report quality decline

Reviewed by Nidhi Govil


Anthropic's Claude AI is experiencing a surge in user complaints about declining quality and performance. Developers report that Claude Opus 4.6 and Claude Code are making more errors, abandoning tasks mid-stream, and burning through usage quotas faster than before. The backlash intensified after a senior AI director at AMD published a detailed analysis showing reduced reasoning capabilities, while Anthropic attributes the changes to deliberate product adjustments rather than model degradation.

Claude AI Performance Issues Trigger Wave of Developer Complaints

Anthropic is confronting a mounting crisis of confidence as developers and enterprise users increasingly report that Claude AI has suffered significant performance degradation in recent months. The complaints, which have proliferated across GitHub, Reddit, and social media, center on claims that Claude Opus 4.6 and Claude Code are delivering less reliable results, making more mistakes on complex workflows, and consuming usage quotas at unprecedented rates [1][3]. The growing customer dissatisfaction comes at a critical moment for Anthropic, which recently announced that its annualized recurring revenue has surged to $30 billion, up from $9 billion at the end of 2025 [4].

Source: The Register


The situation escalated following a major outage on Monday that affected Claude.ai and Claude Code from 15:31 to 16:19 UTC, with elevated error rates compounding existing frustrations [1]. When The Register asked Claude AI itself to analyze quality complaints in the Claude Code GitHub repository since January 2026, the model concluded that "quality complaints have escalated sharply," with April already recording more than 20 quality issues in just 13 days, putting it on pace to exceed March's 18 complaints, a 3.5× jump over the January-February baseline [1].
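The pace arithmetic quoted above can be checked with a quick extrapolation (the figures come from the article; the January-February baseline is implied rather than stated):

```python
# Extrapolate April's complaint rate from its first 13 days (figures quoted above).
april_issues, april_days = 20, 13
projected_april = april_issues / april_days * 30  # roughly 46 issues if the pace holds

march_issues = 18
print(f"projected April total: {projected_april:.0f}")  # comfortably above March's 18
```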

AMD Senior Director Documents Reduced Reasoning Capabilities

One of the most detailed public complaints came from Stella Laurenzo, whose LinkedIn profile identifies her as a Senior Director in AMD's AI group. In an April 2 GitHub issue, Laurenzo presented a comprehensive analysis of 6,852 Claude Code session files, 17,871 thinking blocks, and 234,760 tool calls, arguing that Claude Code had regressed to the point where it could not be trusted for complex engineering work [3]. Her analysis documented a sharp decline in estimated reasoning depth starting in February, accompanied by more premature stopping, more "simplest fix" behavior, more reasoning loops, and a measurable shift from research-first to edit-first behavior [3].

The post quickly went viral after being amplified on social media, giving the "Claude is getting worse" narrative concrete data from a senior AI leader at a major chip company rather than merely anecdotal frustration [3]. Some users have framed the issue as "AI shrinkflation," the idea that customers are paying the same price for a weaker product, while others have suggested Anthropic may be "nerfing" the model, deliberately throttling performance during periods of heavy demand [3].

Source: VentureBeat


Anthropic Attributes Changes to Product Adjustments, Not Model Degradation

Boris Cherny, Claude Code's lead at Anthropic, responded by disputing the conclusion that the underlying model has degraded. In a pinned GitHub response, Cherny explained that the recent changes were deliberate product and interface choices rather than secret downgrades [3]. He identified two key changes: Claude Opus 4.6's move to adaptive thinking by default on February 9, and a March 3 shift to medium effort (effort level 85) as the default, which Anthropic viewed as the best balance of intelligence, latency, and cost for most users [3].

Cherny emphasized that users who want more extended reasoning can manually switch to a higher effort level from within Claude Code terminal sessions [3]. However, many users have complained that Anthropic did not adequately highlight these changes, undermining the company's reputation for transparency, a cornerstone of its brand identity [4].

Caching Issue and Usage Quota Depletion Compound User Frustration

Adding to the controversy, around March 7 Anthropic reduced the prompt cache TTL (time to live) for Claude Code from one hour to five minutes for many requests, following a brief period during which it had been extended to one hour [2]. User Sean Swanson documented the change in a detailed bug report, arguing that "the 5m TTL is disproportionately punishing for the long-session, high-context use case that defines Claude Code usage" [2].
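The mechanics behind that complaint are simple to sketch: a cached prompt prefix is reusable only if it was touched within the TTL, so any pause longer than the TTL forces a full-context cache miss. A toy model, where only the two TTL values come from the article and the pause duration is hypothetical:

```python
# Toy model of prompt-cache expiry. TTL values from the article; timeline invented.
def cache_hit(seconds_since_last_use: float, ttl_seconds: float) -> bool:
    """A cached prompt prefix is reusable only if reused within its TTL."""
    return seconds_since_last_use <= ttl_seconds

# A developer pauses 10 minutes between two Claude Code requests:
pause = 10 * 60
print(cache_hit(pause, ttl_seconds=60 * 60))  # True  (old 1-hour TTL)
print(cache_hit(pause, ttl_seconds=5 * 60))   # False (new 5-minute TTL)
```

Under the shorter TTL, ordinary gaps in a coding session (reading code, running tests) are enough to evict the cache, which is exactly the long-session pattern Swanson describes.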

While Jarred Sumner of Anthropic said the change made Claude Code cheaper for one-shot calls, where cached context is used once and not revisited, Swanson reported that as a $200-per-month subscriber of more than six months he had never hit a quota limit until March [2]. The situation has deteriorated to the point where Pro users paying $20 per month may get as few as two prompts in a five-hour window [2]. The one-million-token context window available on paid plans with the Claude Opus 4.6 and Sonnet 4.6 models further increases costs, especially when cache misses force the full context to be reprocessed [2].
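To see why cache misses on a million-token context sting, compare the cost of a cache hit against a miss under placeholder prices. The per-token rates below are assumptions chosen only to illustrate the ratio, not Anthropic's actual pricing; the key real-world fact is that cache reads are billed far cheaper than fresh input:

```python
# Rough cost comparison for one request carrying a large cached context.
# Prices are placeholders (assumed 10x discount for cache reads), not real rates.
PRICE_INPUT = 10.0 / 1_000_000   # $ per uncached input token (assumed)
PRICE_CACHED = 1.0 / 1_000_000   # $ per cache-read token (assumed)

context_tokens = 1_000_000       # the large context window mentioned above
new_tokens = 2_000               # fresh tokens added this turn (hypothetical)

cost_hit = context_tokens * PRICE_CACHED + new_tokens * PRICE_INPUT
cost_miss = (context_tokens + new_tokens) * PRICE_INPUT

print(f"cache hit:  ${cost_hit:.2f}")   # $1.02
print(f"cache miss: ${cost_miss:.2f}")  # $10.02
```

Whatever the exact rates, every TTL expiry multiplies the cost of the next request by roughly the hit/miss ratio, which is how long sessions burn through quotas.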

Speculation About Compute Capacity Shortages Intensifies

The user backlash has fueled widespread speculation that Anthropic is experiencing compute capacity shortages as adoption of its products has soared. The company has had to take steps to reduce usage during peak hours to balance capacity against demand, and has introduced stricter usage limits [1][4]. In an internal memo reported by CNBC, OpenAI's revenue chief claimed that Anthropic had made a "strategic misstep" by not securing enough compute capacity and was "operating on a meaningfully smaller curve" than its competitors [4].

Anthropic also announced last week that it had trained a new model called Mythos that is significantly more capable than Opus but also larger and more expensive to run, raising questions about whether the company has sufficient compute capacity to support a broad Mythos rollout. While an Anthropic spokesperson has publicly denied that the company degrades models to manage demand, the combination of increased errors, rapid quota depletion, and quality complaints has created a credibility challenge for a company that has built its reputation on transparency and user alignment [4].

Despite the complaints, data from Margin Lab suggests that Claude Opus 4.6 has maintained its score on the SWE-Bench-Pro benchmark, with assessments conducted since February showing some variation but no substantive change [1]. The discrepancy between benchmark results and user experience highlights the gap between what standardized tests measure and what developers encounter in real-world, complex engineering workflows. The situation poses significant risks for Anthropic as it potentially prepares for an IPO, with persistent anger over Claude performance issues threatening to derail the company's momentum just as it has emerged as a leader in enterprise AI [4].
