4 Sources
[1]
Anthropic tweaks Claude usage limits to manage capacity
AI biz makes some Claude conversations more costly to manage capacity

Anthropic on Wednesday adjusted its opaque usage limits for Claude customers, reducing how far a subscriber's usage allowance stretches during times of peak demand in an effort to balance demand against its capacity to deliver service.

In a social media post, Thariq Shihipar, a member of Anthropic's technical team, wrote: "To manage growing demand for Claude we're adjusting our five hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged."

The change means that during peak hours - 05:00 to 11:00 PT, or 13:00 to 19:00 GMT - Claude users can burn through five hours' worth of usage time in under five hours. At other times of day, a five-hour access allowance will allow five hours of access. That odd definition of timed use is possible because Anthropic ties hourly use to token consumption - without revealing exactly how many tokens correspond to a given amount of time.

According to Shihipar, "~7 percent of users will hit session limits they wouldn't have before, particularly for pro tiers. If you run token-intensive background jobs, shifting them to off-peak hours will stretch your session limits further."

Anthropic has expanded capacity during other times of day when demand is lower, so there's no net loss in terms of usage limits. "Overall weekly limits stay the same, just how they're distributed across the week is changing," Shihipar explained. "I know this was frustrating. We're continuing to invest in scaling efficiently. I'll keep you posted on progress."

Anthropic sells its AI services in two forms: an API and subscriptions. API customers pay a published rate for various forms of token usage - Base Input Tokens, 5m Cache Writes, 1h Cache Writes, Cache Hits & Refreshes, and Output Tokens. Subscription customers - Free, Pro ($20/month), Max 5x ($100/month), and Max 20x ($200/month) - can use Claude subject to unpublished usage limits.
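Those API token categories make a bill a straightforward sum of per-category usage. The Python sketch below illustrates the arithmetic; the rates are invented placeholders, not Anthropic's published prices.

```python
# Hypothetical per-million-token rates, illustrative only; these are NOT
# Anthropic's published prices.
RATES_PER_MTOK = {
    "base_input": 3.00,      # Base Input Tokens
    "cache_write_5m": 3.75,  # 5m Cache Writes
    "cache_write_1h": 6.00,  # 1h Cache Writes
    "cache_hit": 0.30,       # Cache Hits & Refreshes
    "output": 15.00,         # Output Tokens
}

def api_cost(usage_tokens: dict) -> float:
    """Estimate a bill in dollars from per-category token counts."""
    return sum(
        usage_tokens.get(category, 0) / 1_000_000 * rate
        for category, rate in RATES_PER_MTOK.items()
    )

# Example: 2M input tokens, 500k cache hits, 100k output tokens.
bill = api_cost({"base_input": 2_000_000, "cache_hit": 500_000, "output": 100_000})
print(f"${bill:.2f}")  # 2*3.00 + 0.5*0.30 + 0.1*15.00 = $7.65
```

Subscription customers get no such published ledger, which is why the unpublished session limits below cause so much confusion.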
Anthropic does not specify exactly how it calculates those limits, and users have no way to plan for token usage. "Your usage is affected by several factors, including the length and complexity of your conversations, the features you use, and which Claude model you're chatting with," the company explains in its documentation. "Different subscription plans (Pro, Max, Team, etc.) have different usage allowances, with paid plans offering higher limits."

Claude customers can access a dashboard that shows their progress towards consuming their five-hour session limits and weekly usage limits. If users exceed the limits, Claude locks them out ... unless they pay for extra usage.

Under this new token allocation regime, developers can expect to get more done during off hours and less at other times. Although, really, what kind of Californian is awake and pounding code at 5 a.m. anyway? ®
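Shihipar's suggestion to shift token-intensive background jobs to off-peak hours boils down to a time-window check against the 05:00-11:00 PT peak period quoted above. A minimal sketch, assuming that window; the deferral helper is a hypothetical illustration, not an Anthropic API:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# Peak window quoted in the article: 05:00-11:00 Pacific Time.
PACIFIC = ZoneInfo("America/Los_Angeles")
PEAK_START = time(5, 0)
PEAK_END = time(11, 0)

def is_peak(now: datetime) -> bool:
    """True if the given moment falls inside the peak window (evaluated in PT)."""
    return PEAK_START <= now.astimezone(PACIFIC).time() < PEAK_END

def next_off_peak(now: datetime) -> datetime:
    """Earliest moment at or after `now` that is outside the peak window."""
    if not is_peak(now):
        return now
    local = now.astimezone(PACIFIC)
    return local.replace(hour=PEAK_END.hour, minute=0, second=0, microsecond=0)

# A token-heavy batch job queued at 08:30 PT would be deferred to 11:00 PT.
job_time = datetime(2026, 4, 6, 8, 30, tzinfo=PACIFIC)
deferred = next_off_peak(job_time)
```

Scheduling by wall-clock window is crude, but it matches the incentive Anthropic is offering: the same tokens simply count for less against the session limit outside peak hours.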
[2]
Claude Code users hitting usage limits 'way faster than expected'
Anthropic says it is looking to resolve an issue that is blocking users of its AI coding tool.

Claude Code, the AI-powered helper for writing computer code, has become popular in recent months. The company announced on Reddit that it was investigating an issue where usage limits were being hit faster than expected. Customers buy tokens to use AI services - but the number of tokens needed for each task is sometimes opaque. Anthropic said fixing this was the "top priority" for the team.

Claude users commented under the post on Reddit, with one saying they hit the token limit "much later" on their free account compared to their $100 (£75) a month paid account. Another user, describing bugs that can appear in generated code, commented: "One session in a loop can drain your daily budget in minutes". And another comment stated that the impact wasn't limited to Claude Code: "A simple one sentence reply to a conversation just took me from 59% usage to 100%. How??"

Just last week, Anthropic introduced peak-hour throttling of its services on Claude, meaning that tokens get consumed more quickly when demand for the service is higher. Software developers can use Claude Code and other similar applications as part of their daily workflow to help with specific tasks, so any issues with the service can disrupt their work.

A Claude Pro subscription costs $20 a month. Higher-usage tiers can cost $100 or $200 per month, and the company also offers business pricing for larger organisations.

Anthropic recently accidentally released part of its internal source code for Claude Code due to "human error". An internal file with 500,000 lines of code was published on GitHub, a popular platform for developers. An Anthropic spokesperson said the release was caused by "human error, not a security breach," and that "no sensitive customer data or credentials were exposed or involved".
Claude Code's source code was already partially known, as it had previously been reverse-engineered by independent developers. An earlier version of the source code had also been leaked in February 2025. Anthropic is currently in a legal battle with the US government over how its tools can be used by the Department of Defense.
[3]
Anthropic confirms it's been 'adjusting' Claude usage limits
The changes impact approximately 7% of users across Free, Pro, and Max tiers, who now hit session limits faster during high-demand periods.

If it feels like you've been hitting your Claude usage limits much more quickly over the past week, you're absolutely right. Anthropic has confirmed that it has been "adjusting" the five-hour usage limits for Claude Free, Pro, and Max users during the peak hours of 5 a.m. to 11 a.m. on weekdays, while leaving overall weekly limits unchanged.

The news comes via a Reddit post from an Anthropic representative. I reached out to Anthropic and confirmed the post is authentic. The Anthropic post doesn't specify when the usage limit adjustment took place, but my understanding is the new rate limits kicked in on Monday.

Claude users have been complaining bitterly about how quickly they've been hitting their usage limits over the past week or so, and many had suspected a silent reduction in their five-hour usage allotments. Turns out they were right. Anthropic says it imposed the adjusted usage limits to "manage growing demand for Claude." "We've landed a lot of efficiency wins to offset this, but ~7% of users will hit session limits they wouldn't have had before, particularly in Pro tiers," the Anthropic post continues. "If you run token-intensive background jobs, shifting them to off-peak hours will stretch your session limits further."

Anthropic acknowledged that the limit adjustment "was frustrating" and that it is "continuing to invest in scaling efficiently." Word of Anthropic throttling peak usage limits comes amid a surge of interest in Claude following its legal standoff with the Defense Department, which has sought to tag Anthropic as a "supply chain risk" after the company balked at signing a military contract. A judge recently stayed the Pentagon's move to apply the "supply chain risk" label.
Anthropic's move to adjust its five-hour usage limits speaks to a bigger issue: how the big AI providers treat subscribers on flat-rate plans. In the past, AI users on "plus," "pro," or "max" plans (which cost anywhere from $10-250 a month, depending on the provider) rarely hit usage limits because they were simply chatting with models in a web-based chatbox. But with the rise of agentic AI functionality such as vibe-coding applications and "computer use" abilities, flat-rate AI subscribers are burning far more tokens than ever before, and the big AI providers are struggling to keep up with the demand. The problem is exacerbated by Claude's enormous one-million token context window, which was rolled out earlier this month. What's happening now is that Anthropic and other AI companies are hitting the brakes on flat-rate usage, sometimes silently. And no, it's not cool.
[4]
Claude's new limits are frustrating its most devoted users
Claude chatbot users are getting fed up with the recent rollout of new usage limits as Anthropic, the developer behind Claude, struggles to keep up with surging demand for its AI systems.

Starting in late March, users of Claude Code -- Anthropic's AI system for programming and coding tasks -- have taken to social media to share complaints that they are beginning to hit usage limits much more quickly than before for seemingly simple tasks. "Does anyone else feel like their Claude usage gets drained within minutes?" a user wrote Friday on X. "I was literally using it for less than 30 minutes and hit my limit." Other users said they have had similar experiences, one saying in a reply, "wait it's not just me?" The complaints accumulated hundreds of thousands of views on X and Anthropic-focused subreddits.

Programmers have embraced the Claude family of AI systems because of the models' top-tier performance on coding and agentic tasks. Last week, Anthropic released a new method for remotely controlling Claude Code running on specific computers, letting users reach its AI assistant even when they are away from the machines running it. But Claude's recent growth in popularity in the face of limited computing resources led Anthropic to implement more stringent usage caps last week that have drastically slowed projects for many who rely on Claude's coding capabilities.

America's ongoing AI boom has caused Anthropic and other AI companies, including OpenAI and Google, to commit trillions of dollars over the coming years to build the data centers that power AI systems. Compared with competitors like OpenAI, however, Anthropic has taken a more cautious approach to expanding its AI chip supply. Anthropic currently relies on millions of chips from Google, Amazon and other companies to meet its computing needs.
"Even if the technology goes as fast as I suspect that it will, we don't know exactly how fast it's going to drive revenue," Anthropic CEO Dario Amodei said in an interview with podcaster Dwarkesh Patel in February. "We know it's coming, but with the way you buy these data centers, if you're off by a couple years, that can be ruinous," Amodei said, referring to the possibility that misjudged demand could bankrupt the company.

Anthropic has a finite number of computer chips available to process user requests at any given time, so it sets limits on how many and what types of requests customers can submit to its AI systems. Anthropic must also juggle customer demand for computing resources with its own internal computing needs, since training new AI models often requires access to the same scarce computer chips.

Anthropic establishes usage limits in five-hour periods, so users must wait for their five-hour windows to reset if they exceed their usage limits in that period. Anthropic has said many factors can contribute to a user's usage limit, including the length and complexity of user messages. For example, asking Claude to improve a sentence's grammar will use far fewer resources than asking it to design a custom website from scratch.

A user's exact quota depends on the type of subscription they have, which can cost up to $200 a month. Users with more expensive paid subscriptions gain access to higher usage quotas. Users can combine the plans with pay-as-you-go credits to process more tokens, the small chunks of data that AI companies track to measure customers' usage. More advanced Claude models, like the latest Opus 4.6, use more tokens than Anthropic's other models, like Sonnet 4.6 and Haiku 4.5.

Anthropic has surged in popularity in the past year, because of both the advanced capabilities of its coding models and increased marketing -- whether in bus stations or Super Bowl ads.
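The five-hour session mechanic described above - token spend accrues against an allowance that resets each window - can be modeled with a small tracker. The one-million-token allowance below is invented for illustration; Anthropic does not publish its actual per-session budgets.

```python
from dataclasses import dataclass

WINDOW_SECONDS = 5 * 60 * 60  # session limits reset every five hours

@dataclass
class SessionBudget:
    """Track token spend against a hypothetical per-session allowance."""
    allowance: int       # tokens permitted per five-hour window (invented figure)
    window_start: float  # epoch seconds when the current window opened
    spent: int = 0

    def record(self, tokens: int, now: float) -> bool:
        """Charge `tokens` at time `now`; return False once over budget."""
        if now - self.window_start >= WINDOW_SECONDS:
            # The previous window elapsed, so the counter resets.
            self.window_start = now
            self.spent = 0
        self.spent += tokens
        return self.spent <= self.allowance

budget = SessionBudget(allowance=1_000_000, window_start=0.0)
assert budget.record(600_000, now=100)                # within budget
assert not budget.record(500_000, now=200)            # 1.1M tokens: locked out
assert budget.record(10_000, now=WINDOW_SECONDS + 1)  # a fresh window opens
```

The model also shows why one runaway agentic loop hurts so much: a single burst of spend can exhaust the whole window, and nothing short of waiting out the reset (or paying for extra usage) restores access.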
The company attracted further national attention this year after its spat with the Trump administration, in which the Defense Department designated Anthropic as a threat to national security and President Donald Trump moved to bar its products from federal agencies and contractors. According to a ranking by OpenRouter, an online LLM marketplace, Anthropic's Sonnet 4.6 and Opus 4.6 were in the top 10 most popular AI models on the platform last month.

But experts say AI companies like Anthropic will need to continue to scale out their data centers to meet growing demand. In November, Anthropic announced that it would invest $50 billion to build data centers in Texas and New York. "Last year was all about 'Oh, no, we're building out too many AI data centers,' and what you actually see right now is the shortage," said Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania who specializes in AI. "That there are not enough tokens to go around, and so that means prices go higher."

To address rapidly growing demand, Anthropic has tried to better distribute usage access during peak hours of user activity. Thariq Shihipar, a member of Anthropic's Technical Staff, posted on X last Thursday, after the backlash had begun, that to "manage growing demand for Claude," Anthropic would be "adjusting our 5 hour session limits for free/Pro/Max subs during peak hours." Shihipar added in his post that around "7% of users will hit session limits they wouldn't have before, particularly for pro tiers," suggesting that if users "run token-intensive background jobs, shifting them to off-peak hours will stretch your session limits further." Shihipar didn't respond to a request for comment. Anthropic echoed Shihipar's post, writing in a statement to NBC News that "weekly limits remain unchanged" and that it was "continuing to invest in scaling capacity."
One of the users questioning the adjustments was Pranit Garg, a former chief marketing officer of a blockchain infrastructure company called Fermah, who started using Claude Code in November. After using the system for months and becoming enthralled with its capabilities, he began to notice changes when he went to use the AI assistant. "The limits just started running out all of a sudden. And I know my workflow hasn't changed," he said.

He decided to voice his complaints on social media. "I pay $200/mo for Claude Max," Garg wrote in a March 15 X post that accumulated over a million views, referring to the top tier of Anthropic's commercial subscription plans. "My limits have been noticeably worse this past week. Now they announce 2x off-peak usage for two weeks." "But here's what actually happens: limits quietly drop, a temporary 2x makes the reduced limit feel normal, the promo ends, and you're left at a baseline lower than where you started," he added, referring to a temporary announcement on Claude's support page that the system would double usage limits for users outside the hours of 8 a.m. to 2 p.m. ET from March 13 to 28.

Shihipar pushed back and maintained there was no correlation between the temporary bonus and the adjustments in usage limits, writing in a reply on X: "nah it's just a bonus 2x, it's not that deep." An Anthropic spokesperson told NBC News that the temporary bonus and the changes to usage limits weren't connected in any way, adding that the change was made to manage growing demand.

Shreya Gupta, who works as a software engineering intern at a fintech startup, says she has used Claude Code for over a year and has it running "almost 24/7" on Anthropic's $20 monthly plan. But in the last month, she started noticing it wasn't functioning as it used to. "I have to save my usage because I need to use it the entire day," she said. "I cannot trade off money for accuracy," she added.
"I might even shift on to some other model because of how poorly it's treating its users right now."

Lydia Hallie, a member of Anthropic's Technical Staff, said Monday on X that Anthropic was "aware people are hitting usage limits in Claude Code way faster than expected," adding that the issue was being actively investigated.

Jordan Nanos, a member of the Technical Staff at SemiAnalysis -- a research company specializing in semiconductors and AI infrastructure -- said he thinks users are most concerned that they might not be getting what they paid for. "They're just complaining that they paid money for something, and they think it's not actually being delivered to them," he said. "[Anthropic] only [has] so much compute, they can only produce so many tokens, and they're making the choice to cut off the users who are consuming the most tokens, as opposed to not accepting new users or not accepting new sign-ups."

As complaints about Claude have continued to build up in the past few days, rival OpenAI appeared to jump on the opportunity to promote its own competing AI coding assistant, Codex, which has also seen soaring usage. "We have reset Codex usage limits across all plans to let everyone experiment with the magnificent plugins we just launched, and because it had been a while," Thibault Sottiaux, the engineering lead for Codex, said last Thursday on X. "Growth of codex continues to outpace our prediction (maybe we are not super good at that) and we almost ran out of capacity three days in a row," Sottiaux said Saturday on X. "We have ramped up more significantly ahead of next week." However, on Wednesday, Sottiaux posted that he had again reset usage limits after OpenAI users similarly complained about hitting their limits at a higher rate.

Despite OpenAI's moves to be more flexible with its usage limits than Anthropic, Nanos said he doesn't believe many loyal Claude Code users will switch over to Codex.
"I don't think that's going to be a winning strategy for me and for most people that I work with. You want the absolute most intelligent model," Nanos told NBC News. "But there are many people who explore the trade-off of cheaper models for slightly less intelligence or worse performance."

It is unclear whether Anthropic's recent usage limit adjustments will be permanent. But Mollick said he doesn't foresee demand going down anytime soon: "If AI keeps getting better, then it's very possible that demand will continue to grow faster than supply." He believes users will most likely switch between platforms. "There's not enough supply to go around," Mollick said. "Different companies have different ways of allocating supply, though. If somebody has extra compute available, they can lure people away."

"Companies have to make decisions about how much effort they're allocating towards building new models versus serving the customer demand," he added. "Things are going to keep getting very complicated."
Anthropic has confirmed it's adjusting Claude usage limits during peak hours to manage surging demand, affecting approximately 7% of users across Free, Pro, and Max tiers. The changes mean subscribers are hitting usage limits faster than before, particularly for token-intensive tasks like Claude Code. While weekly limits remain unchanged, the new peak hour throttling has sparked frustration among developers who rely on the AI coding tool.
Anthropic has confirmed it is adjusting usage limits for Claude subscribers during peak hours to manage growing demand for its AI chatbot services. The company announced the changes through a Reddit post and social media, revealing that approximately 7% of users across Free, Pro, and Max tiers will now hit session limits faster during high-demand periods[1][3]. Thariq Shihipar, a member of Anthropic's technical team, explained that the company is "adjusting our five hour session limits for free/Pro/Max subs during peak hours" while keeping weekly limits unchanged[1].
Source: BBC
The new rate limits kicked in during late March, with peak hours defined as 5:00-11:00 PT (13:00-19:00 GMT) on weekdays[3]. During these periods, Claude users can burn through a five-hour usage allotment in well under five hours because Anthropic ties session time to token consumption[1]. The company has expanded capacity during off-peak hours when demand is lower, so there is no net loss in overall weekly limits[1].

The impact has been particularly severe for users of Claude Code, Anthropic's AI coding tool, which has surged in popularity among software developers. The company announced on Reddit that it was investigating an issue where usage limits were being hit "way faster than expected," calling the fix a "top priority" for the team[2]. Developers have taken to social media to complain about hitting usage limits much more quickly than before for seemingly simple tasks. "Does anyone else feel like their Claude usage gets drained within minutes? I was literally using it for less than 30 minutes and hit my limit," one user wrote on X[4].
Source: The Register
One Claude user on Reddit reported hitting the token limit "much later" on their free account than on their $100-per-month paid account[2]. Another commented that "one session in a loop can drain your daily budget in minutes" when bugs appear in code generated by Claude Code[2]. The complaints accumulated hundreds of thousands of views on X and Anthropic-focused subreddits, with many subscribers expressing frustration at the lack of transparency around how token usage is calculated[4].

The adjusted usage limits reflect Anthropic's struggle to balance surging demand against finite computing resources. Unlike competitors such as OpenAI, Anthropic has taken a more cautious approach to expanding its AI chip supply, currently relying on millions of chips from Google, Amazon, and other companies to meet its computing needs[4]. Dario Amodei, Anthropic's CEO, explained in a February interview that the company must carefully manage data center investments: "Even if the technology goes as fast as I suspect that it will, we don't know exactly how fast it's going to drive revenue. We know it's coming, but with the way you buy these data centers, if you're off by a couple years, that can be ruinous"[4].

The problem is exacerbated by Claude's enormous one-million-token context window, which rolled out earlier in March[3]. With the rise of agentic AI functionality and computer-use abilities, flat-rate AI subscribers are burning far more tokens than ever before, and AI providers are struggling to keep up with demand[3]. Anthropic must also juggle customer demand for computing resources with its own internal computing needs, since training new AI models often requires access to the same scarce computer chips[4].
Anthropic offers subscription plans ranging from Free to Max 20x at $200 per month, but the company does not specify exactly how it calculates usage limits, leaving subscribers with no way to plan their token consumption[1]. "Your usage is affected by several factors, including the length and complexity of your conversations, the features you use, and which Claude model you're chatting with," Anthropic's documentation explains[1]. More advanced models like Opus 4.6 use more tokens than other models like Sonnet 4.6 and Haiku 4.5[4].
Source: NBC
Users can access a dashboard showing their progress toward their five-hour session limits and weekly usage limits; if they exceed a limit, Claude locks them out unless they pay for extra usage[1]. Shihipar noted that users running token-intensive background jobs should shift them to off-peak hours to stretch their session limits further[1][3].

Experts say AI companies like Anthropic will need to continue scaling out their data centers to meet growing demand. Ethan Mollick, an associate professor at the Wharton School who specializes in AI, observed: "Last year was all about 'Oh, no, we're building out too many AI data centers,' and what you actually see right now is the shortage. That there are not enough tokens to go around, and so that means prices go higher"[4]. In November, Anthropic announced it would invest $50 billion to build data centers in Texas and New York[4].

According to OpenRouter, an online LLM marketplace, Anthropic's Sonnet 4.6 and Opus 4.6 were among the top 10 most popular AI models on the platform last month[4]. The company has surged in popularity due to both the advanced capabilities of its coding models and increased marketing, from bus station posters to Super Bowl ads[4]. Shihipar acknowledged the peak-hour throttling "was frustrating" and said Anthropic is "continuing to invest in scaling efficiently"[1][3].

Summarized by Navi