3 Sources
[1]
AI bills can be as big as a postdoc salary. Is the cost worth it?
James Zou has spent "well over US$100,000" on artificial intelligence in the past year. "These models are very useful for researchers, for coding, for analysis, for literature summaries," says Zou, a biomedical-data scientist who leads the AI for Science Laboratory at Stanford University in California. In his view, the fees, which are in the same ballpark as the cost of supporting a postdoctoral fellow at Stanford, are worth it. He says "we're entering into a new golden age of science with AI assistance" that enables fundamental scientific advances because of the "increasing capabilities of these AI scientist agents".

But AI assistance is starting to look more expensive for researchers. AI providers have struggled to make the economics work for them on subscription plans, so are hiking up prices and tightening usage limits. In January 2025, Sam Altman, chief executive of the California-based company OpenAI, posted on social-media site X that the firm was losing money on its $200-a-month ChatGPT Pro subscriptions because people were using the chatbot more than the company expected, driving up OpenAI's use of computing power and electricity.

GitHub, a platform that allows developers to store and share their code, is the latest provider to change its pricing policy. On 27 April, it announced that it would move GitHub Copilot, its AI tool that helps users to write code, from a subscription-based service to usage-based billing from 1 June, citing the higher demands of agentic AI.

Those changes are having an impact on researchers who have become increasingly dependent on AI. Attila Gáspár, an economist at Central European University in Vienna, has been using AI to extract data from historical documents. For around 18 months, Gáspár did everything he wanted with his university-paid subscription to the AI chatbot Claude, but in late April, he encountered a problem. "It said, 'You have hit your limit,'" he explains.
Matteo Niccoli, a geoscientist who uses AI for technical research, upgraded from a Claude Pro to a Max subscription and still hits limits on heavy workdays. Serious scientific projects, he says, involve "multi-session" work, with repeated back and forth between coding, reasoning and analysis. And he's finding that even those subscriptions "don't cut it", meaning he has to work by hand -- taking more time and limiting the possibilities of big-data analysis.

The decision of whether AI is worth paying for isn't only about price and usage limits. The errors AI introduces into workflows can add work to a researcher's load that outweighs the technology's benefits. Niccoli says the bottleneck he encounters now is the "thinking and the discussion" around the work: checking the model's outputs, noticing when its context has become overloaded and knowing when its answers are beginning to drift. "It's all on you to figure out how to reliably use them," he says. That makes AI useful -- but not necessarily a labour-saver.

AI firms tend to price 'inference', or usage, in two ways: flat monthly subscriptions with a capped number of tokens -- the chunks of text that make up a model's input and output (one token is roughly equivalent to two-thirds of a word) -- or per-token charges for using the application programming interface (API).

GitHub's decision to change its pricing to a usage-based system will affect students working with Sebastian Baltes, a software-engineering researcher at Heidelberg University in Germany. Research published in a March preprint suggests that users who trade down to inferior models could end up paying more by consuming more tokens, even if each unit is less costly. "Such changes mean the gap between institutions, groups and individuals that can pay for premium subscriptions and those that can't will widen," he fears. The inequality issue also worries Jeremy Howard, honorary professor at the University of Queensland and founding researcher at Fast.ai.
"The idea of being blocked by funding from having access to tools that other people use would be crap," he says. AI models have been a game changer for students in parts of the world where disposable income is low, he says, giving them the sense that "for the first time, they have the same access to the kind of intelligence that rich people do".
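The two pricing schemes described above -- flat monthly subscriptions with a token cap, and per-token API billing -- and the preprint's warning that cheaper models can end up costing more can be sketched in a few lines. All prices, token counts and function names here are hypothetical, chosen purely for illustration; they are not any vendor's real rates.

```python
# Illustrative comparison of the two pricing schemes described above.
# Every number below is invented for illustration, not a real vendor rate.

def flat_plan_cost(monthly_fee: float, tokens_used: int, token_cap: int,
                   overage_price_per_1k: float = 0.0) -> float:
    """Flat subscription: fixed fee up to a token cap, plus optional overage."""
    overage = max(0, tokens_used - token_cap)
    return monthly_fee + (overage / 1000) * overage_price_per_1k

def usage_cost(tokens_used: int, price_per_1k: float) -> float:
    """Pure usage-based (API) billing: pay per token consumed."""
    return (tokens_used / 1000) * price_per_1k

# A premium model vs a cheaper model that needs more tokens
# (more retries, longer outputs) to finish the same task.
premium = usage_cost(tokens_used=2_000_000, price_per_1k=0.015)  # ~$30
cheap = usage_cost(tokens_used=5_000_000, price_per_1k=0.008)    # ~$40
assert cheap > premium  # the cheaper per-token model costs more overall
```

The last two lines capture the preprint's point: a lower per-token price does not guarantee a lower bill once the token volume needed for the same job goes up.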
[2]
Your AI Is Getting Dumber -- and More Expensive. Here's Why
In recent weeks, AI companies have either tightened up or hiked prices on their subscription services -- or throttled access to the systems they offer. GitHub paused new Copilot Pro signups in April while announcing it would move to a usage-based system from June 1. OpenAI has introduced a wider range of pricing tiers in order to better capture the different ways its products are used. And last month, Anthropic was forced to admit it had accidentally been serving users a cut-down version of Claude compared with the one they thought they were accessing, because of an error introduced while trying to make its models more efficient.

It's all part of the challenge AI labs face in keeping up with demand, and in weaning users off what has long been considered a heavily subsidized product. Nvidia, which develops the lion's share of the chips powering the AI revolution, says that AI can now cost more than human workers.

User Habits Are Changing

"One of the challenges for the providers is to be not necessarily more transparent, but somehow align what their limits are with how developers use the tools," says Trisha Gee, a software developer tracking the way in which AI coding tools are being adopted across industry. A big part of the problem is that how end users utilize the tools has changed over time.
[3]
Riding the AI buck: Rising API, usage-based costs force a rethink - The Economic Times
There is a paradox happening in AI right now. Demand for AI productivity tools such as Claude Code is higher than ever as companies drive adoption. But that demand is pushing AI costs above the cost of human capital. There have been multiple instances of employees exhausting a year's worth of AI budget within months. According to technology news publication The Information, Uber chief technology officer Praveen Neppalli Naga recently stated in an internal memo that the company burnt its 2026 AI budget in four months. This cost is likely to increase as large AI enterprises such as Anthropic and GitHub move from the current flat subscription structure to usage-based pricing. As a result, some companies are hiring people instead, building guardrails to keep AI costs in check and optimising their AI usage.

Rising AI usage

Unlike the software-as-a-service era, where the software was meant for humans, seat-based subscriptions do not make sense with AI agents becoming mainstream in enterprises, said startup founders ET spoke with. AI agents process instructions and generate responses by consuming tokens, and more tokens mean more computing cost for the AI companies. To manage the cost, these companies are moving towards usage- or outcome-based pricing. GitHub said in a blog post that, starting June 1, it will transition to usage-based billing, as absorbing the escalating inference, or computing, cost is no longer sustainable. Anthropic has changed the billing framework for its Claude model for enterprise customers and has started billing based on usage rather than a fixed fee, The Information reported in April. The founder of a US-based AI services startup confirmed this to ET. Emails sent to Anthropic did not elicit any response till press time. This means increased AI costs for startups and enterprises.
For Latentforce, an AI modernisation platform, spending on tokens has skyrocketed in the last six months, though Claude's usage-based model is yet to take effect for the company, said cofounder Aravind Jayendran. "Six months ago, our token cost was ₹20,000 per month, but now it is already moving to ₹2-3 lakh per month," he said. While the return on investment justifies the cost for Latentforce, he said the same cannot be said for several enterprises that the company is helping adopt AI. Jayendran said his clients in the knowledge services sector are hiring people rather than deploying AI, as the former is currently cheaper. This includes jobs such as data entry and documentation processing. Another founder, who optimises usage of large language models for enterprises, told ET on the condition of anonymity that many Indian enterprises still rely more on human capital than on AI, given the high costs of running and maintaining the systems. In a post on Reddit, a founder said that with AI getting expensive, his company cancelled five of its AI subscriptions and hired two mid-level developers instead.

Optimise, tweak

Companies for which AI tools are a mainstay, meanwhile, are optimising internal AI usage and tweaking their revenue models. Latentforce is exploring a mixture of open-source and frontier models to optimise its AI usage. The founder of a Bengaluru-based unicorn said his company has put agentic guardrails in place to ensure that tokens are used only for approved projects and to flag usage that hits upper limits. "This is to ensure that they are used right, when we are spending so much," he added. Vijay Rayapati, cofounder of Atomicwork, which helps clients automate employee requests and IT service workflows, told ET that his company offers hybrid pricing -- a flat rate with limits on tokens and an option for additional usage with extra payment. "Enterprises need visibility on budget and fixed cost helps with that. However, they will need to pay for their consumption as the AI companies have also increased the cost of APIs for new models," he said. Vivek Khandelwal, cofounder of agentic AI application CogniSwitch, said that in the last few months his company has also shifted towards API pricing and token usage.
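The token-budget guardrails described above -- approving spend per project and flagging when usage nears an upper limit -- can be sketched as a small accounting class. The class name, thresholds and numbers below are invented for illustration; they are not from any of the companies mentioned.

```python
# A minimal sketch of per-project token guardrails: track spend against an
# approved budget, warn as usage nears the cap, and block once it is exceeded.
# All names and limits here are hypothetical.

from dataclasses import dataclass

@dataclass
class TokenGuardrail:
    monthly_cap: int          # approved token budget for the project
    warn_fraction: float = 0.8  # flag at 80% of the cap
    used: int = 0

    def record(self, tokens: int) -> str:
        """Account for a batch of tokens and report the budget status."""
        self.used += tokens
        if self.used > self.monthly_cap:
            return "BLOCKED: over budget"
        if self.used >= self.warn_fraction * self.monthly_cap:
            return "WARN: approaching cap"
        return "OK"

g = TokenGuardrail(monthly_cap=1_000_000)
print(g.record(500_000))  # OK (500k of 1M used)
print(g.record(350_000))  # WARN: approaching cap (850k >= 800k threshold)
print(g.record(200_000))  # BLOCKED: over budget (1.05M > 1M cap)
```

The same accounting also underpins hybrid pricing of the kind Atomicwork describes: the flat fee covers usage up to the cap, and anything recorded beyond it is billed as additional consumption.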
Major AI providers including GitHub, OpenAI, and Anthropic are abandoning flat subscription models for usage-based pricing, driving AI costs higher than human capital in some cases. Researchers face financial burdens while enterprises burn through annual budgets in months, prompting some to hire people instead of deploying AI tools.
James Zou, a biomedical-data scientist at Stanford University, has spent over $100,000 on AI tools in the past year, roughly equivalent to funding a postdoctoral fellow [1]. While Zou believes the investment is justified for coding, analysis, and literature summaries, the high cost of AI is becoming a critical issue across research institutions and enterprises. The financial burden on researchers is mounting as AI providers fundamentally restructure how they charge for their services [1].
Source: Nature
OpenAI CEO Sam Altman revealed in January 2025 that the company was losing money on its $200-per-month ChatGPT Pro subscriptions because users consumed more computing power and electricity than anticipated [1]. This admission signals a broader industry challenge: the economics of flat subscription models simply don't work for AI companies facing skyrocketing infrastructure demands.

GitHub announced on April 27 that it would transition GitHub Copilot from a subscription-based service to usage-based pricing starting June 1, citing the higher demands of agentic AI [1]. The company paused new Copilot Pro signups in April while preparing for this transition, acknowledging that absorbing escalating inference costs is no longer sustainable [2][3].

Anthropic has similarly changed its billing framework for the Claude model, moving enterprise customers from fixed fees to usage-based charges [3]. OpenAI has introduced a wider range of pricing tiers to better capture different usage patterns [2]. Unlike the software-as-a-service era, when seat-based subscriptions made sense, AI agents process instructions by consuming tokens, and more tokens mean more computing costs for providers [3].

The impact of these pricing changes is already visible in corporate spending patterns. According to The Information, Uber CTO Praveen Neppalli Naga stated in an internal memo that the company burned through its 2026 AI budget in just four months [3]. Multiple instances show employees exhausting a year's worth of AI budget within months as AI services become more expensive across the board [3].
Source: ET
For Latentforce, an AI modernization platform, token costs have skyrocketed from ₹20,000 per month six months ago to ₹2-3 lakh per month currently, even before Claude's usage-based model fully took effect [3]. Nvidia, which develops the majority of chips powering the AI revolution, now says that AI can cost more than human workers [2].

The cost pressures are forcing companies to reconsider their AI adoption strategies. Aravind Jayendran, cofounder of Latentforce, reports that clients in the knowledge services sector are hiring people rather than deploying AI for tasks such as data entry and documentation processing, because human labour is currently cheaper [3]. One founder posted on Reddit that his company cancelled five AI subscriptions and hired two mid-level developers instead [3].

Many Indian enterprises still rely more on human capital than on AI, given the high costs of running and maintaining the systems, according to a founder who optimizes large language model usage for enterprises [3]. Research published in a March preprint suggests that users who trade down to cheaper models could end up paying more by consuming more tokens, even if each unit costs less [1].
Companies for which AI tools remain essential are implementing strategies to manage spending. A Bengaluru-based unicorn founder revealed that his company has put agentic guardrails in place to ensure tokens are used only for approved projects and to flag when usage hits upper limits [3]. Latentforce is exploring optimization through a mixture of open-source and frontier models [3].

Atomicwork, which helps clients automate employee requests and IT service workflows, now offers hybrid pricing: a flat rate with token limits and an option for additional usage at extra cost. "Enterprises need visibility on budget and fixed cost helps with that. However, they will need to pay for their consumption as the AI companies have also increased the cost of APIs for new models," said cofounder Vijay Rayapati [3].

For researchers, the pricing changes create significant challenges. Attila Gáspár, an economist at Central European University who uses AI to extract data from historical documents, encountered usage limits on his university-paid Claude subscription after 18 months of unrestricted access [1]. Geoscientist Matteo Niccoli upgraded from Claude Pro to Max and still hits limits on heavy workdays during multi-session work involving repeated back-and-forth between coding, reasoning, and analysis [1].

Sebastian Baltes, a software-engineering researcher at Heidelberg University, fears that "such changes mean the gap between institutions, groups and individuals that can pay for premium subscriptions and those that can't will widen" [1]. Jeremy Howard, honorary professor at the University of Queensland, worries about inequality, noting that AI models have given students in parts of the world with low disposable income "the same access to the kind of intelligence that rich people do" [1].

The return on investment for AI remains under scrutiny as researchers grapple with the added overhead. Niccoli describes the bottleneck as the "thinking and the discussion" around checking model outputs, noticing when context becomes overloaded, and knowing when answers drift, making AI useful but not necessarily a labour-saver [1]. As usage-based pricing becomes the norm, organizations must carefully evaluate whether the benefits of AI productivity tools justify their escalating costs.