Google Cloud API Keys Expose Gemini AI Access After Generative AI Rollout Transforms Security Risk

Reviewed by Nidhi Govil


Security researchers discovered nearly 3,000 Google Cloud API keys embedded in public websites that can now authenticate to Gemini AI endpoints without warning. The issue emerged because enabling the Generative Language API on an existing project retroactively grants sensitive permissions to keys once considered harmless billing tokens, exposing organizations to quota theft and unexpected AI billing.

Google Cloud API Keys Gain Unexpected Access to Gemini AI

A critical API security flaw has emerged from Google's Generative AI rollout, transforming thousands of public Google Cloud API keys into potential entry points for unauthorized access. Security firm Truffle Security discovered nearly 3,000 live Google API keys embedded in client-side code across websites that can now authenticate to Gemini AI endpoints, despite never being intended for such use [1]. These keys, identifiable by their AIza prefix, were originally deployed as billing tokens for services like embedded maps and Firebase, with Google's guidance indicating they posed minimal security risk when exposed publicly.

The vulnerability surfaces when users enable the Generative Language API on a Google Cloud project. Enabling the API silently grants existing API keys in that project access to Gemini AI endpoints, without any warning or notice to developers [1]. Security researcher Joe Leon from Truffle Security explained that with a valid key, attackers can access uploaded files and cached data and charge LLM usage to victim accounts. The problem extends beyond Gemini AI: Quokka's separate analysis of 250,000 Android apps uncovered over 35,000 unique Google API keys embedded in mobile applications, demonstrating the scale of public exposure across platforms [1].

How the Vulnerability Enables Quota Theft and Data Exposure

The security gap allows attackers who scrape websites to harvest these exposed keys and exploit them for quota theft and data breaches. Malicious actors can access sensitive information via the /files and /cachedContents endpoints, make unauthorized API calls to Gemini, and run up substantial unexpected AI billing charges for victims [1]. One Reddit user reported that a compromised Google Cloud API key resulted in $82,314.44 in charges between February 11 and 12, 2026, a dramatic spike from their regular monthly spend of $180 [1].
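A defender can reproduce the attacker's first step to triage a suspected leak: checking whether a harvested AIza-prefixed key authenticates against the read-only listing endpoints the article names. The sketch below is illustrative only; the host and paths follow the public Gemini (Generative Language) REST API, and `classify_probe` is a hypothetical helper for interpreting the HTTP status, not part of any Google SDK.

```python
# Sketch: probe whether a Google API key is live against the Gemini
# (Generative Language) REST endpoints named in the article.
# Run this only against keys from projects you own.
import urllib.error
import urllib.request

GLANG = "https://generativelanguage.googleapis.com/v1beta"

def classify_probe(status: int) -> str:
    """Map an HTTP status from the probe to a rough verdict."""
    if status == 200:
        return "live"            # key authenticates; endpoint returned data
    if status in (401, 403):
        return "blocked"         # key invalid, restricted, or API not enabled
    if status == 429:
        return "live-throttled"  # key works but quota is being consumed
    return "unknown"

def probe_key(api_key: str, endpoint: str = "files") -> str:
    """Hit a read-only listing endpoint (e.g. 'files' or 'cachedContents')."""
    url = f"{GLANG}/{endpoint}?key={api_key}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_probe(resp.status)
    except urllib.error.HTTPError as err:
        return classify_probe(err.code)
```

A key that returns "live" from either endpoint is exactly the kind of credential the article describes: still valid, publicly scraped, and billable to the victim's project.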

Compounding the issue, Truffle Security found that a newly created API key in Google Cloud defaults to "Unrestricted" status, meaning it works with every enabled API in the project, including Gemini AI [1]. This default configuration turns what developers believed were harmless billing tokens into live Gemini credentials sitting on the public internet. The affected keys span code repositories and websites linked to financial institutions, technology firms, and recruitment platforms, including a site associated with Google itself [2].

Google Responds with Proactive Detection Measures

While Google initially deemed this behavior intended functionality, the company has since acknowledged the severity and implemented countermeasures. A Google spokesperson confirmed they worked with researchers to address the issue, stating: "We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API" [1]. The company emphasized that protecting user data and infrastructure remains its top priority, though it is unclear whether the vulnerability was actively exploited in the wild before the detection systems were deployed.

Organizations Must Rotate Publicly Exposed API Keys Immediately

Security experts stress that organizations using cloud services need to audit their Google Cloud project configurations urgently. Users should verify whether AI-related APIs are enabled in their projects and check whether API keys are publicly accessible in client-side JavaScript or public repositories [1]. Truffle Security recommends starting with the oldest keys, as these were most likely deployed under earlier guidance that API keys were safe to share publicly and then retroactively gained Gemini privileges when teams enabled the API.
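As a starting point for such an audit, candidate keys are easy to flag mechanically: Google Cloud keys share the documented AIza prefix, and scanners conventionally match a 35-character tail after it. A minimal sweep over client-side files might look like the sketch below; the length heuristic is the one commonly used by secret scanners, not an official format guarantee from Google.

```python
# Sketch: sweep local files for strings shaped like Google Cloud API keys.
# The AIza prefix is documented; the 39-character total length is the
# widely used scanning heuristic rather than a published specification.
import re
from pathlib import Path

# "AIza" followed by 35 characters from the base64url-style alphabet.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return de-duplicated AIza-prefixed tokens found in text."""
    return sorted(set(KEY_PATTERN.findall(text)))

def scan_tree(root: str, suffixes=(".js", ".html", ".json", ".env")) -> dict:
    """Scan likely client-side files under root; map path -> keys found."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            found = find_candidate_keys(path.read_text(errors="ignore"))
            if found:
                hits[str(path)] = found
    return hits
```

Any hit in a public repository or shipped JavaScript bundle is a rotation candidate under the guidance above, oldest first.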

Tim Erlin, security strategist at Wallarm, noted that this case illustrates how risk profiles evolve dynamically: "APIs can be over-permissioned after the fact. Security testing, vulnerability scanning, and other assessments must be continuous" [1]. He emphasized that finding vulnerabilities isn't enough for API security: organizations must profile behavior and data access, identify anomalies, and actively block malicious activity. The combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a materially different risk profile from the original billing-identifier model developers relied on.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited