5 Sources
[1]
Google Antigravity falls to Earth under compute burden
Company tries to curb strain by banning customer accounts for 'malicious' usage

Google customers paying $250 per month for AI Ultra subscriptions, along with less extravagant spenders, have been surprised to find their accounts suspended for using the company's Antigravity agent development app and Gemini services with third-party agent tools like OpenClaw and OpenCode.

Over the past few weeks, Google has been cutting off customers for terms of service violations, often with no warning. The problem for Google is that when third-party agent wrappers or harnesses rely on Google AI under the hood, the company's software isn't necessarily priced or provisioned for heavy autonomous usage. Many of the software developers who subscribed to these AI services have expressed surprise in various discussion threads, claiming they were unaware their actions violated contractual agreements they're unlikely to have actually read.

Varun Mohan, co-founder of Windsurf and now a DeepMind engineer, said the account bans followed from malicious usage, though customers argue that's a mischaracterization of what they were doing. "We've been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users," he explained in a social media post on Sunday. "We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users."

Mohan went on to clarify that Google has blocked usage of Antigravity but not other Google services. "It is not intended to use the Antigravity backend as a proxy for other products and users in these groups have overwhelmed our compute," he explained. "We are going to make sure we bring people back on but needed to act fast to make sure we deliver a good experience for people using the product."

The Register asked Google to provide examples of malicious usage. We've not heard back.

Dissatisfied customers have expressed skepticism that using Antigravity and Gemini CLI with a third-party wrapper or harness like OpenClaw can legitimately be characterized as malicious. "Users paid for quota, used quota within limits, got banned," said AI engineer Mohan Prakash in a social media post. "That's not malicious, that's using the product you sold them. The real issue is the [Terms of Service] doesn't explicitly ban OpenClaw integration, so users assumed it was allowed. If you don't want harness integration, return an error like Anthropic does - this usage pattern is not permitted. Banning paying customers without warning is how you lose trust faster than you lose capacity."

Google's account purge follows similar measures taken by Anthropic to prevent users from engaging in token price arbitrage by connecting subscription accounts, instead of its higher-cost API, to third-party services.

It appears that Google made machine learning model tokens too readily available through free tier accounts and non-specific quotas. As a result, the company wasn't prepared for the compute demand or the cost incurred when developers ran its AI services through third-party software. This suggests what many have already observed - that AI companies are selling tokens at far below cost to gain market share, in the hope they can outspend the competition until the competition goes away and they can raise prices. ®
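Prakash's suggested alternative, rejecting disallowed traffic with an explicit error rather than a retroactive ban, can be sketched as a simple server-side policy check. Everything below is hypothetical: the client names, quota figures, status codes, and messages are illustrative and do not reflect Google's actual API.

```python
# Hypothetical sketch of the policy Prakash describes: refuse requests from
# unapproved harnesses with an explicit, actionable error instead of banning
# the account after the fact. All identifiers here are invented for
# illustration; none correspond to a real Google endpoint or client list.

ALLOWED_CLIENTS = {"antigravity-ide", "gemini-cli"}  # hypothetical first-party clients

def check_request(client_id: str, monthly_tokens_used: int, quota: int) -> dict:
    """Return an HTTP-style status and message for an incoming model request."""
    if client_id not in ALLOWED_CLIENTS:
        # The up-front refusal users asked for, rather than a silent ban.
        return {"status": 403,
                "body": "this usage pattern is not permitted: "
                        f"client '{client_id}' is not an approved interface"}
    if monthly_tokens_used >= quota:
        # Quota exhaustion is a rate-limit error, not a account-level offense.
        return {"status": 429, "body": "quota exhausted for this billing period"}
    return {"status": 200, "body": "ok"}
```

Under a scheme like this, a harness such as OpenClaw would receive an immediate 403 on its first request, while paying subscribers within quota would be unaffected.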
[2]
What's behind the OpenClaw ban wave
OpenAI currently isn't banning ChatGPT users for OpenClaw usage after recently hiring the tool's creator, highlighting inconsistent industry responses.

If you've been using your flat-rate Claude or Gemini account to feed OpenClaw and its eye-popping AI abilities, get ready to be banned. Specifically, those who've used their Claude and Google OAuth credentials for OpenClaw, the viral AI sensation that can gobble up millions of AI tokens in a single afternoon, have seen both Anthropic and Google ban their AI accounts. The bans frequently come without warning, and many who've tried to find out what happened or get their accounts reinstated have been met with silence.

Of course, you can always dodge the banhammer by creating a new Claude or Google account, and it's worth noting that most banned Google AI users are still able to access their Gmail, Google Drive, and other core Google services. Still, it's not fun to get locked out of the $200-a-month Claude or $250-a-month Google AI account you've been actively using for OpenClaw and other tools, including Google's popular Antigravity coding tool, particularly given the lack of refunds.

"Pretty draconian from Google," wrote OpenClaw creator Peter Steinberger on X, adding that he would likely remove support for using Google's Antigravity OAuth credentials to power the viral AI agent. "Be careful out there if you use Antigravity."

Responding to the hubbub on X, Google DeepMind engineer Varun Mohan said the company has "been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users."
While both Anthropic and Google are cracking down on the use of flat-rate OAuth credentials for OpenClaw, ChatGPT isn't wielding the banhammer yet, perhaps because OpenClaw creator Steinberger is now among OpenAI's newest employees. OpenClaw itself is still an open-source tool, albeit with the backing of OpenAI.

To understand the hubbub over Anthropic and Google's bans for OpenClaw use via OAuth credentials, you must know the difference between OAuth and API access to Claude and Gemini. Even if you don't know what OAuth access is, you likely use it all the time. Whenever you log into a third-party service using one of those "Login with Google," "Login with Facebook," or "Login with Apple" buttons, that's OAuth at work.

Now, the Claude and Google bans for OAuth users connecting to OpenClaw aren't about OAuth per se; instead, the issue is that OAuth credentials are used for authenticating flat-rate Claude and Gemini plans. Generally speaking, you're not supposed to use your Claude or Google AI OAuth credentials to power third-party AI tools and services, which typically don't enforce rate-limiting policies; instead, you're supposed to use the Claude or Gemini APIs, which charge by the token rather than a flat monthly fee. (In AI lingo, a token is a tiny unit of data used by AI models to process your prompts and create content such as text, images, and video; the more you interact with an AI, the more tokens you burn.)

Still, just because you're not supposed to use your Claude, ChatGPT, or Google OAuth credentials for an unsanctioned third-party service doesn't mean you can't. For OpenClaw users to authenticate their flat-rate Claude accounts with the bot, they can grab an OAuth token (not to be confused with an LLM token) by running the Claude Code setup process, then using that token for OpenClaw rather than Claude Code. For Gemini users, a Google Antigravity plug-in for OpenClaw does the trick.
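At the wire level, the OAuth-versus-API distinction described above comes down to which credential a request carries. Here is a minimal sketch assuming generic conventions: OAuth 2.0 bearer tokens in an `Authorization` header versus a provider API key in a separate header. The header names follow common industry practice, not any particular vendor's documented API.

```python
# Illustrative only: shows why the two billing models are distinguishable
# server-side. An OAuth access token identifies a *user* on a flat-rate
# plan; an API key identifies a *billing account* charged per token.
# Header names here are conventional placeholders, not a vendor's real API.

def build_headers(auth_mode: str, credential: str) -> dict:
    """Build HTTP headers for a model request under either billing model."""
    if auth_mode == "oauth":
        # Flat-rate subscription: usage draws down the monthly plan.
        return {"Authorization": f"Bearer {credential}"}
    if auth_mode == "api_key":
        # Pay-as-you-go: usage is metered and billed per token processed.
        return {"x-api-key": credential}
    raise ValueError(f"unknown auth mode: {auth_mode}")
```

Because the credential type is visible on every request, a provider can tell at a glance whether traffic is riding on a subscription seat or a metered account, which is what makes this kind of crackdown technically straightforward.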
(The process for connecting ChatGPT to OpenClaw via OAuth is far easier, but remember, OpenAI just hired OpenClaw's creator and is thus more invested in OpenClaw's success.)

But while big AI providers like Anthropic and Google may have tolerated such "hacky" OAuth authentication methods in the past, they're now cracking down in earnest, and the explosive popularity of OpenClaw has accelerated the process. Unlike previous third-party AI tools that more or less flew under the radar, OpenClaw went from obscurity to everywhere-all-at-once status in a matter of weeks, far outstripping the user base of previous tools.

Another difference is that OpenClaw's jaw-dropping agentic AI abilities come at the expense of tokens, a lot of tokens, with some users astonished to find they'd used millions of tokens in a single afternoon. Simply asking your OpenClaw agent "how are you?" could chew through 30,000 tokens (or way more, if you have a lengthy OpenClaw session going), versus a couple thousand tokens for ChatGPT on the web.

From a business perspective, Anthropic and Google probably saw money going down the drain as OpenClaw users let it rip with their OAuth-enabled flat-rate AI accounts. And to the point of Google DeepMind's Varun Mohan, the OpenClaw-driven surge in OAuth usage may have taken a toll on non-OpenClaw users on the same flat-rate plans. (Indeed, I noticed frequent "attempting to reach Gemini 3 Flash" warnings when vibe coding with Gemini CLI over the weekend. Were those connectivity issues due to OpenClaw usage? Good question.)

To repeat, Anthropic and Google are more than happy to fund your OpenClaw habit via a pay-as-you-go API key, and indeed, some API-using OpenClaw users have run up some eye-watering bills. But as far as using your flat-rate, OAuth-enabled Claude or Google account for OpenClaw, the party appears to be over.
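The token figures above make the arbitrage economics easy to sketch. The per-million-token price below is an assumption chosen purely for illustration; real provider pricing varies by model and changes frequently.

```python
# Back-of-the-envelope arithmetic for the economics described above.
# ASSUMED_PRICE_PER_MTOK is a made-up figure for illustration; real API
# pricing differs by provider and model. The flat rate matches the $250/mo
# AI Ultra plan mentioned in the article.

FLAT_RATE_PER_MONTH = 250.00     # flat subscription, per the article
ASSUMED_PRICE_PER_MTOK = 10.00   # hypothetical dollars per million tokens

def api_cost(tokens: int) -> float:
    """Metered API cost in dollars for a given token count."""
    return tokens / 1_000_000 * ASSUMED_PRICE_PER_MTOK

# "Millions of tokens in a single afternoon" of agentic use:
afternoon = api_cost(5_000_000)
month = afternoon * 30  # the same habit sustained daily

print(f"afternoon: ${afternoon:.2f}, month: ${month:.2f}, "
      f"flat rate: ${FLAT_RATE_PER_MONTH:.2f}")
```

Even at a modest assumed price, a heavy daily agent workload routed through the metered API would cost several times the flat monthly fee, which is exactly the gap the subscription-OAuth route let users exploit.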
[3]
Google cuts access to Antigravity for some OpenClaw users citing malicious usage
Google caused controversy among some developers this weekend and today, Monday, February 23rd, after restricting their usage of its new Antigravity "vibe coding" platform, alleging "malicious usage."

Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts. According to Google, these users had been using Antigravity to access a large number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers. This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw.

The timing of Google's crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its "next generation of personal agents." While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google's primary rival. By cutting off OpenClaw's access to Antigravity, Google isn't just protecting its server load; it is effectively severing a pipeline that allows an OpenAI-adjacent tool to leverage Google's most advanced Gemini models.

Google DeepMind engineer and former Windsurf founder and CEO Varun Mohan said in an X post that the company noticed "malicious usage" that led to service degradation. "We've been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended.
We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users," the post said.

A Google DeepMind spokesperson told VentureBeat that the move is not intended to permanently ban the use of Antigravity to access third-party platforms, but to align its use with the platform's terms of service. Unsurprisingly, Google's move has caused a furor among OpenClaw users, including OpenClaw creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result.

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users. But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google's move was not framed as a security issue but rather as one of access and runtime, further showing that there is still significant uncertainty when users want to bring something like OpenClaw into their workflow. This is not the first time developers and power users of agentic AI have found their access curtailed. Last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7.

What this does highlight is the disconnect between companies like Google and OpenClaw users. OpenClaw offers many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits.
Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen.

Affected users

Several users said on both the Y Combinator chat boards and X that they no longer had access to their Google accounts after running OpenClaw instances for certain Google products. Google's move mirrors a broader industry shift toward "walled garden" agent ecosystems. Earlier this year, Anthropic introduced "client fingerprinting" to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw.

For developers, the message is clear: the era of "bring your own agent" to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.

Some have said they will no longer use Google or Gemini for their projects. Right now, people who still want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems "fair." Google DeepMind reiterated that it had only cut access to Antigravity, not to other Google applications.

Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the "Antigravity Ban" serves as a definitive case study in the risks of agentic dependency.
As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

* Platform fragility is the new normal: The sudden lockout of $250/month "Ultra" users proves that even high-paying enterprise customers have little leverage when a provider decides to change its "fair use" definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.
* The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google/Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run "local-first" or within VPCs. The "token loophole" that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.
* Account portability as a requirement: The fact that users "lost access to their Google accounts" underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible to avoid a single ToS violation paralyzing an entire team's communications.

Ultimately, the Antigravity incident marks the end of the "Wild West" for AI agents. As Google and OpenAI stake their claims, the enterprise must choose between the stability of the walled garden or the complexity (and cost) of truly independent, self-hosted infrastructure.
[4]
Google Restricts Gemini AI Ultra Accounts Over OpenClaw OAuth Access
Account Restrictions Hit Gemini AI Ultra Users Using OpenClaw OAuth

Users on the Google AI Ultra plan, priced at $249.99 per month, said they suddenly lost access after connecting Gemini through Antigravity OAuth tokens. Several users reported 403 "Terms of Service"-style errors and described account restrictions that arrived without advance warning. A long-running thread on the Google AI Developers Forum links the restrictions to "OpenClaw OAuth" and describes repeated attempts to reach support. Users also warned that Google's account-based design can widen the impact beyond a single AI tool, because many products share the same sign-in.
[5]
OpenClaw users lost access to Gemini AI services, Google Antigravity lead Varun Mohan explains why
The issue raises concerns about relying on AI services controlled by big tech companies.

Google has recently restricted access to its Antigravity AI tool, powered by Gemini, for some users. The affected users were those who used OpenClaw, an open-source AI assistant that helps people write code and complete tasks without requiring advanced technical knowledge. The move has drawn attention because it underscores how much control tech companies have over who can use their AI services.

Users reported that the restrictions came without warning, leaving them unable to access Antigravity and, in some cases, other Google services such as Gmail. OpenClaw creator Peter Steinberger criticised Google's decision as harsh and cautioned other developers to be careful when using the platform.

Varun Mohan, head of Antigravity, explained that the backend had seen a rise in malicious activity, which affected service quality for regular users. He said the company needed a way to quickly block these accounts while still allowing legitimate users to regain access later. Mohan acknowledged that some users might not have realised they were violating Google's terms, but emphasised that the company has limited capacity to manage exceptions fairly.

This statement came after some Gemini AI Ultra subscribers reported on the Google AI Developer Forum that they couldn't access the Gemini 2.5 Pro model and, in some cases, other connected services like Gmail and Google Workspace. Furthermore, Google has also stated that the decision was made because some users were not using Antigravity as intended.

OpenClaw is a widely used service that connects AI models, such as Google's Gemini and Anthropic's Claude, to automate tasks like email management, flight check-in, and more.
However, AI providers are now being more careful about third-party services that go against their official usage guidelines. For instance, Anthropic has recently revised its guidelines to disallow the use of OAuth tokens from its Claude AI service in third-party services.

The controversy also comes as OpenClaw is being integrated with OpenAI, a rival to Google and Anthropic. This might be why tech firms are now being more careful about third-party services that work with their platforms. For consumers, it is cause for concern about reliance on AI services managed by private corporations.
Google has suspended customer accounts for using OpenClaw with its Gemini AI services and Antigravity platform, citing overwhelming compute demand. Users paying $250 per month for AI Ultra subscriptions lost access without warning, sparking debate over whether the usage was truly malicious or simply unexpected. The crackdown reveals how AI companies struggle to balance flat-rate pricing with actual infrastructure costs.
Google has begun suspending customer accounts for using OpenClaw with Google Gemini and its Antigravity agent development platform, marking a significant shift in how AI service providers manage third-party agent tools [1]. Users paying $250 per month for AI Ultra subscriptions discovered their access abruptly terminated, often without advance warning, after connecting the viral open-source AI agent to their Google services through OpenClaw OAuth access [4].
Source: Analytics Insight
The account bans have affected developers who used OAuth credentials from Google Antigravity to power OpenClaw, a tool that automates complex workflows by connecting to various LLM providers [2]. Several users reported receiving 403 errors citing terms of service violations, with some losing access not only to AI tools but also to connected services like Gmail and Google Workspace [3]. The restrictions came without refunds, leaving paying customers locked out of services they had actively subscribed to.

Varun Mohan, a DeepMind engineer and former Windsurf co-founder who now leads Antigravity, explained that Google detected "a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service" [1]. The company needed to quickly shut off access to users not using the product as intended, though Mohan acknowledged that some users were unaware their actions violated the terms of service [5].
Source: Digit
The core issue stems from excessive token consumption. OpenClaw's agentic AI capabilities consume millions of tokens in single sessions: a simple "how are you?" query can burn through 30,000 tokens or more, compared to a few thousand for standard ChatGPT interactions [2]. When users authenticate flat-rate subscription accounts rather than pay-per-token API access, they effectively bypass rate-limiting policies designed to manage compute demand [2].

Mohan clarified that Google blocked Antigravity usage specifically, stating it "is not intended to use the Antigravity backend as a proxy for other products" as users in these groups "overwhelmed our compute" [1]. The situation reveals how AI companies price tokens far below cost to capture market share, leaving them unprepared when developers leverage flat-rate accounts through third-party agent tools [1].

Google's enforcement mirrors actions by Anthropic, which has also banned users for connecting Claude subscriptions to OpenClaw rather than using its higher-cost API [1]. Anthropic introduced client fingerprinting to ensure Claude Code remains the exclusive interface, effectively blocking open-source AI tools like OpenClaw [3].
Source: PCWorld
OpenAI presents a notable exception: the company isn't suspending customer accounts for OpenClaw usage, likely because it recently hired OpenClaw creator Peter Steinberger to lead its "next generation of personal agents" [2][3]. While OpenClaw remains an open-source project, it now operates with OpenAI's financial backing and strategic guidance, creating competitive tensions [3].

Steinberger criticized Google's approach as "pretty draconian" and announced plans to remove support for Google Antigravity OAuth credentials from OpenClaw [2]. The timing appears significant: by cutting OpenClaw's access to Antigravity just one week after Steinberger joined OpenAI, Google effectively severed a pipeline allowing an OpenAI-adjacent tool to leverage its most advanced Gemini models [3].
AI engineer Mohan Prakash challenged the characterization of malicious usage, arguing that "users paid for quota, used quota within limits, got banned" [1]. He noted that the terms of service don't explicitly prohibit OpenClaw integration and suggested that if Google wanted to prevent such usage, it should return errors like Anthropic does rather than suspending customer accounts retroactively. "Banning paying customers without warning is how you lose trust faster than you lose capacity," Prakash wrote [1].

The controversy highlights platform fragility and infrastructure challenges facing AI agents. While OpenClaw enables users to run shell commands and automate workflows efficiently, continually evolving AI agents may inadvertently trigger rate limits or terms violations [3]. Companies are now building governance layers for enterprise customers to access OpenClaw securely, though solutions remain nascent [3].

For developers, the shift signals the end of the interoperability that defined early LLM development. AI service providers now prioritize walled garden ecosystems where they control telemetry and subscription revenue completely, often at the expense of open-source AI tools [3]. Google stated it's working to restore access for banned users, though whether this involves amending terms of service or establishing secure connections between OpenClaw and Antigravity models remains unclear [3]. The situation raises fundamental questions about relying on AI services controlled by major tech companies and what usage patterns will be permitted as compute costs continue climbing [5].

Summarized by Navi