4 Sources
[1]
Anthropic tosses agents into the API billing pool
Anthropic has further restricted access to its Claude model family while framing the limitation as responsive customer service. "We've heard your questions about SDK and claude -p usage sharing your subscription rate limits with Claude Code and chat," the company said in a social media post. "Starting June 15, programmatic usage gets its own dedicated budget instead. Your subscription limits don't change, they're now reserved for interactive use."

Subscription usage now only covers interactive use of Claude Code, Claude Cowork, and Claude.ai - that is, a user typing a prompt and receiving a response, with a human in the loop. Programmatic interaction, whether via Anthropic's own Agent SDK, headless mode, or a third-party tool, will be counted against a separate usage pool funded by a credit equal to the customer's subscription fee.

So a Pro subscriber paying $20 per month will have two token supply chains - one for interactive usage and one for programmatic usage, which the subscriber must claim to obtain. The catch is that programmatic usage gets billed at costlier API rates, so the credit buys fewer tokens than the subscription does. And if the credit is exhausted, spillover programmatic tokens get billed at (occasionally discounted) API rates through "extra usage," a separate allotment that, if enabled, exists mainly to avoid a sudden service cutoff and to set a ceiling on spending.

The questions from users arose because Anthropic's prior efforts to prevent customers from gorging on tokens at the all-you-can-eat subscription trough haven't been comprehensive. The AI biz, mindful that it will need to show a profit eventually, has been trying to push customers toward its metered API and to constrain consumption of flat-rate subscription tokens. Microsoft's GitHub Copilot has embarked on a similar transition. Anthropic initially did so by disallowing the use of Claude subscriptions with third-party harnesses - applications like OpenCode that coordinate communication with the backend model.
That policy dates back to February 2024, but Anthropic seldom enforced it until earlier this year, when demand for AI inference began to outpace the company's Claude supply. In February this year, growing interest in OpenClaw, an open source agent platform that encourages long-running, token-burning tasks, prompted Anthropic to get serious about its ban on using third-party harnesses with Claude subscriptions. But customers wondered about third-party applications built with Anthropic's own Agent SDK, which hadn't been explicitly disallowed, and about the use of headless mode (claude -p), a way to have Claude work on a task without interaction. They now have their answer.

It's worth noting that any unspent portion of the programmatic credit doesn't roll over. It gets lost - or, you might say, Anthropic reclaims it. The company refers to the credit using a dollar sign, but it's not redeemable currency; it has already been spent. So customers seeking to get the full value from the new arrangement need to calibrate their programmatic usage to consume the full credit every month, no more and no less.

Anthropic's recently announced deal with SpaceX to obtain the compute capacity of its Colossus 1 datacenter, along with its removal of peak-hours usage restrictions, raised hopes among developers that more tolerant usage policies might return. This latest subscription limitation shows that's not happening. ®
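The credit-and-spillover mechanics described above can be sketched as back-of-the-envelope arithmetic. Note that the function name, the per-million-token API rate, and the token counts below are illustrative assumptions for a Pro-tier subscriber, not Anthropic's actual prices:

```python
# Toy model of the programmatic credit pool: usage is drawn from a monthly
# credit at API rates; once the credit is gone, traffic either spills over
# into "extra usage" billing or is cut off. All rates are hypothetical.

def programmatic_cost(tokens_millions: float,
                      api_rate_per_mtok: float = 15.0,   # assumed $/M tokens
                      monthly_credit: float = 20.0,      # Pro-tier credit
                      extra_usage_enabled: bool = False):
    """Return (credit_spent, overage_billed, cut_off) for one month."""
    cost = tokens_millions * api_rate_per_mtok
    if cost <= monthly_credit:
        return cost, 0.0, False
    if extra_usage_enabled:
        # Spillover tokens are billed separately as "extra usage."
        return monthly_credit, cost - monthly_credit, False
    # Without extra usage enabled, programmatic work simply stops.
    return monthly_credit, 0.0, True

# A Pro subscriber running 2M tokens of agent traffic at the assumed rate:
print(programmatic_cost(2.0))                            # credit gone, cut off
print(programmatic_cost(2.0, extra_usage_enabled=True))  # $10 billed as overage
```

The same arithmetic shows why the "no rollover" detail matters: a month at 1M tokens leaves $5 of the credit unspent, and under the new rules that $5 simply vanishes.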
[2]
Anthropic reinstates OpenClaw and third-party agent usage on Claude subscriptions -- with a catch
Good news, OpenClaw fans -- you can once again use your Claude AI subscription to power the hit open source autonomous AI agentic harness! But there's a big catch in how it's being enacted.

A few hours ago, Anthropic announced via its official developer communications account on X, @ClaudeDevs, that it is changing its Claude paid subscription tiers, introducing a new subcategory of "Agent SDK" credits for all paid subscribers, which they can now allocate specifically for "programmatic" uses, including external, third-party agents such as OpenClaw.

The move is a major reversal of the policy Anthropic introduced in early April 2026 that expressly prohibited its AI subscriptions from being used to power these kinds of non-Anthropic agents and harnesses, after Anthropic said they caused capacity and service issues. The problem was that some Claude subscribers were paying $20 to $200 per month under Anthropic's Claude Pro and Max subscriptions, but consuming hundreds, even thousands, of dollars of tokens (units of information) above those prices through their OpenClaw (and similar autonomous) agents. This was an unsustainable position for Anthropic's finances and its limited compute infrastructure for serving model inference to end users.

To be clear, even when it enacted the old prohibition against OpenClaw and similar agents last month, Anthropic never fully cut off the capability for Claude to be used in OpenClaw. Rather, it redirected users to pay through the company's application programming interface (API), which is billed by usage (priced per million tokens, rather than the flat monthly rate the subscriptions offer), or to pay for extra usage credits atop their subscriptions. Now, Anthropic is giving Claude subscribers another way to use their subscription bill to pay for third-party agents.
However, the restoration comes with a significant catch: programmatic usage is no longer subsidized by the general subscription pool but is instead restricted to a fixed, non-rollover monthly credit. In other words, if you allocate your subscription credits for usage in third-party agents but don't end up using them, they simply expire at the end of the month.

Why did Anthropic block Claude subscriptions from OpenClaw (and other third-party agentic AI harnesses) in the first place? To understand why this restoration matters, one must look at the technical friction that led to the initial ban on April 4, 2026. Anthropic's first-party tools, such as Claude Code and Claude Cowork, are engineered to maximize "prompt cache hit rates" -- a method of reusing previously processed text to save on expensive compute cycles. Third-party tools like OpenClaw, which allow users to run autonomous agents through external services like Discord or Telegram, were often unoptimized for these efficiencies. Boris Cherny, Head of Claude Code, noted that these third-party services were "really hard for us to do sustainably" because they bypassed the caching mechanisms that allow Anthropic to offer flat-rate subscriptions. The sheer volume of data being re-processed by inefficient agents was threatening the stability of the system for the broader user base.

Even with Anthropic's massive expansion into new hardware -- including access to the 300MW Colossus 1 data center and its 220,000+ GPUs -- the demand for agentic workflows was outpacing sustainable supply. The new "Agent SDK credit" system solves this technical bottleneck by shifting the cost of inefficiency back to the user. By providing a dedicated dollar-amount credit, Anthropic no longer has to "eat the difference" on unoptimized third-party code. If an agent is inefficient and burns through tokens, it simply drains the user's $20 to $200 credit faster, rather than impacting the general compute pool.
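The caching economics Cherny alludes to can be illustrated with toy numbers. The rates below are hypothetical (providers commonly discount cached-prefix reads steeply relative to freshly processed input tokens); only the general shape of the arithmetic is the point:

```python
# Why prompt cache hit rates matter for flat-rate pricing: an agent that
# re-sends its whole context every turn pays the fresh-input rate on all
# of it, while a cache-optimized harness pays a much lower rate on the
# cached prefix. Prices here are illustrative assumptions.

INPUT_RATE = 3.00       # assumed $ per million fresh input tokens
CACHE_READ_RATE = 0.30  # assumed $ per million cached tokens (10% of fresh)

def turn_cost(context_mtok: float, cache_hit_rate: float) -> float:
    """Cost of one agent turn re-sending `context_mtok` million tokens."""
    cached = context_mtok * cache_hit_rate
    fresh = context_mtok - cached
    return cached * CACHE_READ_RATE + fresh * INPUT_RATE

# An agent looping 1,000 turns over a 0.1M-token context:
optimized = 1000 * turn_cost(0.1, cache_hit_rate=0.9)  # cache-friendly harness
naive     = 1000 * turn_cost(0.1, cache_hit_rate=0.0)  # no caching at all
print(f"optimized ~${optimized:.0f} vs naive ~${naive:.0f}")
```

Under these assumed rates the uncached agent costs roughly five times as much to serve, which is the gap Anthropic was "eating" on flat-rate subscriptions.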
Anthropic's new programmatic credit system

The restoration of third-party access is segmented across Anthropic's billing tiers, creating a new hierarchy of "programmatic power." Here's how much Anthropic is giving each user in terms of the new Agent SDK credits, matching each tier's monthly price:

* Pro ($20/month): $20 in monthly Agent SDK credits
* Max 5X ($100/month): $100 in monthly Agent SDK credits
* Max 20X ($200/month): $200 in monthly Agent SDK credits

This system introduces a sharp divide between "interactive" and "programmatic" workflows. If you are chatting with Claude in a browser or using Claude Code in a terminal to write code interactively, you are still drawing from your standard, high-capacity subscription limits. However, the moment you use the claude -p command for non-interactive tasks, run a GitHub Action, or connect a third-party tool like OpenClaw, the system switches to the dedicated Agent SDK credit. Once the Agent SDK credit limit ($20 for Pro plans, $100 for Max 5X, etc.) is exhausted, programmatic usage stops unless the user has enabled "extra usage" billing, which is charged at standard, pay-as-you-go API rates. Crucially, for those who found the original subscription model to be an infinite resource, this is a hard cap. Credits do not roll over, meaning the "use it or lose it" nature of the system forces a monthly reset of the developer's budget.

Strategic implications

The licensing implications of this move are profound for the "agentic" ecosystem. By explicitly allowing third-party apps like Conductor and OpenClaw to authenticate via the Agent SDK, Anthropic is legitimizing a workflow it had previously attempted to block. However, in doing so, it has ended the era of "compute arbitrage." In the early part of 2026, a $20 Pro subscription could be leveraged via OpenClaw to run agents that would cost hundreds of dollars on a standard API key. By moving to a metered credit, Anthropic is aligning its subscription model with its Developer Platform (API). While it offers a "free" buffer for subscribers, it ensures that high-volume, production-level automation is moved to predictable, token-based billing.
This protects the company's margins while still offering a "sandbox" for developers to experiment without the immediate overhead of an API-first account.

Community reactions are perhaps unsurprisingly negative

While Anthropic executives framed the update as a "simplification," the developer community has largely branded it as a significant reduction in the value of their subscriptions. The backlash focuses on the sharp disparity between the previous effective usage and the new, metered reality.

Popular AI YouTuber and developer Theo Browne (@theo) of T3.gg warned developers that this change constitutes a massive devaluation for those using external tools. "If you use any of the following with your Claude sub, your usage just got cut by 25x," Theo stated, listing T3 Code, Conductor, Zed, and Jean as affected platforms. He concluded with a sharp warning: "They're disguising this as 'free credits'. Don't fall for it".

Kun Chen, a solo builder and former L8 engineer at Meta, Microsoft, and Atlassian, interpreted the move as a full surrender of Anthropic's market lead. "it's official. Anthropic pulled the plug on ALL programmatic use of claude subscription," Chen posted, adding that he had found himself "increasingly bullish about OpenAI" as a result. Chen argued that "Anthropic's only lead was on coding, and gpt 5.5 has flipped that already," signaling a potential migration of elite developer talent.

Other builders questioned the practical utility of the credits offered. Ben Hylak, co-founder and chief technology officer at AI agent observability and governance startup Raindrop.ai, voiced concern over the sustainability of Anthropic's infrastructure. "this is either really silly, or shows how bad of a spot anthropic is in re: gpus," Hylak noted, before bluntly asking users to "guess how many turns $20 in API credits last". The frustration extended to the marketing of the change. EverNever, creator of inkstone.uk, expressed disbelief at the framing of the policy.
"Wait what?! You take away more ways to utilize the subscription I am paying for?! And you dare to make it look like a win?". This sentiment highlights a growing rift between Anthropic and its power-user base, who feel that previously inclusive features are being rescinded under the guise of an "upgrade."

The bottom line for Anthropic subscribers and AI builders

Anthropic's "restoration" is a tactical move to retain developers while strictly managing the physical limits of compute. By June 15, the "agentic" era for Claude subscribers will be a metered one. The company has successfully reclaimed control over its margins, even if it has cost it some of the goodwill of its most vocal power users. For the individual developer or enterprise AI builder relying on Anthropic models for OpenClaw, however, it's clearly an improvement over the blanket ban of last month.
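The interactive/programmatic split described above can be summarized as a small routing rule. The tier figures match those quoted in the article; the function name and the surface categories are illustrative, not part of any Anthropic API:

```python
# Sketch of the usage-routing rule: interactive surfaces draw on the
# standard subscription limits, while anything programmatic draws on the
# new Agent SDK credit. Tier amounts follow the article's figures;
# names below are illustrative assumptions.

AGENT_SDK_CREDIT = {"Pro": 20, "Max 5X": 100, "Max 20X": 200}  # $/month

INTERACTIVE = {"claude.ai chat", "Claude Code terminal", "Claude Cowork"}
PROGRAMMATIC = {"claude -p", "Agent SDK", "GitHub Action", "OpenClaw"}

def billing_pool(surface: str, tier: str) -> str:
    """Which pool a given usage surface bills against, per the new rules."""
    if surface in INTERACTIVE:
        return "subscription limits"
    if surface in PROGRAMMATIC:
        return f"Agent SDK credit (${AGENT_SDK_CREDIT[tier]}/mo, no rollover)"
    raise ValueError(f"unknown surface: {surface}")

print(billing_pool("Claude Code terminal", "Pro"))  # subscription limits
print(billing_pool("OpenClaw", "Max 20X"))          # Agent SDK credit, $200/mo
```

The point of the sketch is that the same model, and even the same tool (Claude Code), bills differently depending purely on how it is invoked.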
[3]
Anthropic tightens limits on Claude subscriptions
Why it matters: The fight shows that "all-you-can-eat" AI subscriptions may not survive the agent era, where software can burn through computing resources far faster than humans ever could.

Driving the news: Anthropic announced that it's bringing back support for outside agent tools on paid Claude plans. But it is putting that usage behind a separate credit meter.

* Subscribers will now get a new monthly credit that they can use with third-party harnesses like OpenClaw.
* Anthropic says the new changes should support the ways that the majority of people use Claude.

What they're saying: Anthropic's changes didn't go over well.

* Claude Code Product Manager Noah Zweben's X post about the new rules was riddled with critical replies calling the changes "gaslighting" and claiming to be switching to Codex.

The intrigue: OpenAI is taking the opposite tack, at least for now. Sam Altman announced on X that OpenAI is giving new business customers two months of free Codex usage.

Zoom in: The industry appears to be rediscovering a lesson from earlier eras of computing: humans have built-in limits to how much data they can consume, while automated workloads can explode usage.

* A human might send dozens or perhaps hundreds of prompts a day, while an autonomous coding agent can generate thousands of requests, run tests continuously, browse the web, and recursively call models.

The other side: Businesses are finding that AI agents can lead to hefty bills.

* ServiceNow and Uber are among the companies that have already burned through their AI token budgets for the entire year, per The Information's Laura Bratton.

What we're watching: Everyone is facing the same economics and will eventually need to move away from unlimited use.
[4]
Anthropic announces 'programmatic credit pool' as agentic tool use rises - SiliconANGLE
Anthropic PBC, the developer and provider of the Claude artificial intelligence model family, said it's offering a special credit pool for users who want to use agentic tools with its large language models. The move Wednesday comes as centralized AI vendors come to terms with how they want to handle agentic AI for coding and personal use as adoption rises. Examples include open-source projects such as OpenClaw and Hermes, which were built to provide frameworks for AI agents that can act as 24/7 personal assistants and connect to LLMs such as OpenAI Group PBC's GPT models or Claude.

Wednesday, on the ClaudeDevs X account, Anthropic said that every paid tier, beginning June 15, will gain a special "programmatic credit pool" that will refill every month and allow users to draw token credits for agentic use. Starting at $20 per month for the lowest tier and going up to $200 for Max 20x, this will allow users to connect their third-party agentic tools to Claude and use it as a "brain" until their credits run out. Afterward, they can pay for extra usage to continue this "programmatic" use; users who do not have extra usage enabled will have their access cut off. It also separates the agentic use case from the subscription chat credits, which power everyday usage such as talking to the AI models, summarizing documents, and using Cowork to sort through emails and other business or personal tasks on the platform.

Originally, AI agents such as OpenClaw connected through the main subscription and did not need credits. However, last month, Boris Cherny, head of Claude Code at Anthropic, announced that the company was cutting off access to third-party agent frameworks via the subscription and directed users to the Anthropic API, which bills by usage. This move led to steeper monthly costs for users.
At the time, Cherny said that the rise of OpenClaw usage had put strain on Anthropic's servers because of greatly increased demand, and that subscription Claude accounts were never permitted to use AI agents in the first place. That change was the first attempt by Anthropic to isolate "programmatic" uses from subscription use. Shortly thereafter, the company experimented with removing Claude Code, the company's agentic coding capability, from a small percentage of new Pro signups, a test that the company quickly pulled back from after headlines and consumer backlash.

Although the credits change opens Claude back up to OpenClaw users, not all users find Anthropic's move benevolent. Many have painted the change as a regression that will make the subscription model worse for existing users. "For everyone running real automation, this is a downgrade dressed up as a feature," wrote arckollect on X. In particular, users noted that those using agentic tools would burn quickly through the new cap and that the credits would not roll over month to month when unused. This means that the power users the cap is aimed at could simply hit it within days, while those whose agentic use varies month to month could run out suddenly one month and find credits completely wasted in another.

Anthropic is not alone in trying to put a meter around agentic work. GitHub recently announced it's moving Copilot to AI Credits on June 1, and infrastructure providers such as Fireworks AI Inc. and Together Computer Inc. already price serverless inference by token usage rather than by flat subscription. Fireworks, though, has been exploring a flat-rate subscription called Fire Pass, with a monthly token offering within it, to scoop up those "prosumer" agentic users. The difference is that Anthropic is making this shift inside a consumer-facing subscription product, where power users had already built habits, tools and expectations around Claude as a persistent agentic "brain."
The reason behind this is that agentic usage represents a heavy computational draw: multiple rapid tool calls and high reasoning demand. The number of tokens used escalates quickly while an agent "thinks" through a problem, making flat rates less lucrative for providers.

For enterprises, the shift also signals where agentic AI costs are likely headed. Personal subscriptions may be useful for experimentation, but production agents that run continuously, call tools and reason across long workflows may increasingly need to live inside rationed environments with budget controls, monitoring and predictable limits. Anthropic's move puts a clearer wall between casual use and automated compute, and that wall is likely to become more common as AI model and inference suppliers figure out how to price agents that never sleep.
Anthropic is separating Claude subscriptions into interactive and programmatic usage pools starting June 15, with dedicated credits for AI agents like OpenClaw. The move addresses unsustainable resource consumption but introduces non-rollover credits that expire monthly, drawing user backlash as the industry grapples with how to price autonomous AI workloads.
Anthropic has announced a major restructuring of its Claude subscriptions, creating a separate programmatic credit pool for AI agents and third-party agent tools starting June 15 [1][2]. The credit-based system marks a significant shift in how the company handles agentic AI use, separating it from standard interactive usage where users type prompts and receive responses. Pro subscribers paying $20 per month will now have two distinct token supply chains: one for interactive use with Claude Code, Claude Cowork, and Claude.ai, and another for programmatic usage through Anthropic's Agent SDK, headless mode, or platforms like OpenClaw [1]. The programmatic pool receives a monthly credit equal to the subscription fee ($20 for Pro, $100 for Max 5X, and up to $200 for Max 20X), but these credits expire at month's end without rolling over [2][4].
Source: The Register
The change addresses a fundamental economic challenge: AI agents consume computational resources at rates that dwarf human usage. While a person might send dozens or hundreds of prompts daily, autonomous coding agents can generate thousands of requests, run continuous tests, and recursively call models [3]. This unsustainable resource consumption became acute when OpenClaw users paying $20 to $200 monthly consumed hundreds or thousands of dollars' worth of tokens through their autonomous agents [2]. Boris Cherny, Head of Claude Code, explained that third-party services were "really hard for us to do sustainably" because they bypassed caching mechanisms that enable flat-rate subscriptions [2]. The token demand from inefficient agents threatened system stability even as Anthropic expanded into massive infrastructure, including the 300MW Colossus 1 datacenter with 220,000+ GPUs [2].

The announcement triggered significant user backlash, with critical replies flooding Claude Code Product Manager Noah Zweben's post, calling the changes "gaslighting" and threatening to switch to Codex [3]. Power users identified a core problem: the non-rollover credit structure means those running real automation could burn through caps within days, while users with variable monthly needs might waste unused credits or suddenly run out [4]. Once the Agent SDK credit is exhausted, programmatic usage gets billed at costlier API rates through "extra usage" allotments, essentially pushing users toward metered API usage [1]. Customers seeking full value must now calibrate their programmatic usage to consume the entire credit monthly: no more, no less [1].
Source: Axios
Anthropic isn't alone in confronting these economics. GitHub recently announced it's moving Copilot to AI Credits on June 1, while companies like ServiceNow and Uber have already burned through their AI token budgets for the entire year [3]. Infrastructure providers such as Fireworks AI and Together Computer already price serverless inference by token usage rather than flat subscriptions, though Fireworks is exploring a hybrid "Fire Pass" subscription model [4]. Interestingly, OpenAI is taking the opposite approach for now, with Sam Altman announcing two months of free Codex usage for new business customers [3]. However, the industry appears to be rediscovering a fundamental lesson: humans have built-in limits to data consumption, while automated workloads can explode usage exponentially [3].

For enterprises, this shift signals where agentic AI costs are headed. Personal subscriptions may serve experimentation, but production agents that run continuously, call tools, and reason across long workflows will increasingly require rationed environments with budget controls and predictable limits [4]. Anthropic's move creates a clearer boundary between casual use and automated compute, a wall that inference suppliers will likely adopt as they determine how to price agents that never sleep [4]. Despite Anthropic's recent deal with SpaceX for Colossus 1 datacenter capacity and removal of peak-hours restrictions, more tolerant usage policies aren't returning [1]. The fight demonstrates that "all-you-can-eat" AI subscriptions may not survive the agent era, where software burns through computing resources far faster than humans ever could [3].

Summarized by Navi