3 Sources
[1]
Google launches managed MCP servers that let AI agents simply plug into its tools
AI agents are being sold as the solution for planning trips, answering business questions, and solving problems of all kinds, but getting them to work with tools and data outside their chat interfaces has been tricky. Developers have to patch together various connectors and keep them running, but that's a fragile approach that's hard to scale and creates governance headaches.

Google claims it's trying to solve that by launching its own fully managed, remote MCP servers that would make its Google and Cloud services -- like Maps and BigQuery -- easier for agents to plug into. The move follows the launch of Google's latest Gemini 3 model, and the company is looking to pair stronger reasoning with more dependable connections to real-world tools and data. "We are making Google agent-ready by design," Steren Giannini, product management director at Google Cloud, told TechCrunch.

Instead of spending a week or two setting up connectors, developers can now essentially paste in a URL to a managed endpoint, Giannini said. At launch, Google is starting with MCP servers for Maps, BigQuery, Compute Engine, and Kubernetes Engine. In practice, this might look like an analytics assistant querying BigQuery directly, or an ops agent interacting with infrastructure services. In the case of Maps, Giannini said, without the MCP, developers would rely on the model's built-in knowledge. "But by giving your agent [...] a tool like the Google Maps MCP server, then it gets grounded on actual, up-to-date location information for places or trips planning," he added.

While the MCP servers will eventually be offered across all of Google's tools, they are initially launching under public preview, meaning they're not yet fully covered by Google Cloud terms of service. They are, however, being offered to enterprise customers that already pay for Google services at no extra cost. "We expect to bring them to general availability very soon in the new year," Giannini said, adding that he expects more MCP servers to trickle in every week.

MCP, which stands for Model Context Protocol, was developed by Anthropic about a year ago as an open-source standard to connect AI systems with data and tools. The protocol has been widely adopted across the agent tooling world, and Anthropic earlier this week donated MCP to a new Linux Foundation fund dedicated to open-sourcing and standardizing AI agent infrastructure.

"The beauty of MCP is that, because it's a standard, if Google provides a server, it can connect to any client," Giannini said. "I'm looking forward to seeing how many more clients will emerge." One can think of MCP clients as the AI apps on the other end of the wire that talk to MCP servers and call the tools they offer. For Google, that includes Gemini CLI and AI Studio. Giannini said he's also tried it with Anthropic's Claude and OpenAI's ChatGPT as clients, and "they just work."

Google argues this isn't just about connecting agents to its services. The bigger enterprise play is Apigee, its API management product, which many companies already use to issue API keys, set quotas, and monitor traffic. Giannini said Apigee can essentially "translate" a standard API into an MCP server, turning endpoints like a product catalog API into tools an agent can discover and use, with existing security and governance controls layered on top. In other words, the same API guardrails companies use for human-built apps could now apply to AI agents, too.
Google's new MCP servers are protected by a permission mechanism called Google Cloud IAM, which explicitly controls what an agent can do with that server. They are also protected by Google Cloud Model Armor, which Giannini describes as a firewall dedicated to agentic workloads that defends against advanced agentic threats like prompt injection and data exfiltration. Administrators can also rely on audit logging for additional observability.

Google plans to expand MCP support beyond the initial set of servers. In the next few months, the company will roll out support for services across areas like storage, databases, logging and monitoring, and security. "We built the plumbing so that developers don't have to," Giannini said.
[2]
Google debuts managed MCP servers for BigQuery, other cloud services - SiliconANGLE
Google LLC today introduced managed MCP servers that will enable artificial intelligence agents to interact with four of its cloud services.

Until recently, giving AI agents access to an application required manually building an integration between the two workloads. That task can take a significant amount of time. It also increases the amount of bespoke code in software projects, which raises the risk of bugs.

MCP is an open-source technology that addresses the challenge. It enables developers to create an agent-friendly interface called an MCP server for an application. When third parties wish to connect their AI agents to the application, they can use the ready-made MCP server instead of building a custom integration.

But while MCP speeds up development, using it in production involves certain challenges. Developers have to set up infrastructure on which their application's MCP server can run and then maintain it. The managed MCP servers that Google debuted today are designed to ease those tasks for its customers. They remove the need to set up or maintain the underlying infrastructure.

The MCP servers support four Google Cloud services: Google Maps Platform, BigQuery, Google Compute Engine and GKE. Google Maps Platform is the developer version of the company's popular mapping platform. The accompanying MCP server is called Maps Grounding Lite and enables AI agents to access data from the service. A navigation app, for example, could use it to help drivers find the fastest route to a destination.

The second new MCP server enables AI agents to query records stored in BigQuery. The integration lends itself to, among other tasks, generating forecasts such as revenue predictions. According to Google, the MCP server enables AI agents to access BigQuery data without loading it into their context windows. That avoids the cybersecurity risks associated with moving business information to a new environment.

The two other new MCP servers will help companies manage their Google Cloud environments. The first allows AI agents to perform tasks such as provisioning Google Compute Engine instances. The second MCP server, in turn, gives AI agents access to Google Cloud's GKE managed Kubernetes service. "The GKE MCP server exposes a structured, discoverable interface that allows agents to interact reliably with both GKE and Kubernetes APIs," Google executives Michael Bachman and Anna Berenberg wrote in a blog post. "This unified surface allows agents, operating autonomously or with human-in-the-loop guardrails, to diagnose issues, remediate failures, and optimize costs."

The MCP servers are joined by MCP support in Google Cloud's Apigee platform. The offering provides tools that companies use to build, manage and secure application programming interfaces. Today's update will make it possible to turn Apigee-powered APIs into MCP servers through a relatively simple workflow.
[3]
Google adds official MCP server support: Agentic AI, BigQuery and Maps integration explained
Google has taken a major step toward making its cloud ecosystem fully ready for autonomous AI agents. The company has rolled out official support for the Model Context Protocol across key services, along with new managed MCP servers that give AI agents direct, structured access to tools like BigQuery, Google Maps, Compute Engine and Kubernetes Engine. The update positions Google Cloud as a native environment for agentic workloads without the messy connectors or brittle workarounds developers relied on until now.

The Model Context Protocol is emerging as a common language that lets AI agents communicate with external tools and data sources. Instead of translating natural language into unpredictable API calls, MCP gives agents a clean, machine-readable way to discover capabilities, issue commands and process responses. By adopting MCP as a first-class interface, Google makes its most widely used services instantly accessible to any MCP-capable model.

For developers, this means an AI agent no longer needs custom scripts to query BigQuery, plan routes with Maps or manage infrastructure. The agent can connect to a standard MCP endpoint, authenticate and begin executing precise, auditable operations.

Google's managed MCP servers are hosted, production-grade endpoints designed for AI scenarios where reliability and governance matter. These servers expose the core functions of a Google service through a uniform MCP schema, avoiding the need to manually wrap APIs or maintain fragile integrations. BigQuery lets agents run SQL tasks and fetch results. Maps supports geospatial lookups, routing and location metadata. Compute Engine and Kubernetes Engine expose infrastructure actions for provisioning, scaling or managing deployments.

Google has also tied MCP into its API management layer, giving enterprises a way to expose internal APIs as MCP tools through Apigee with full IAM and policy controls. The result is a cloud environment where AI agents can operate with the same clarity as human operators but with higher speed and less ambiguity.

The shift signals a broader push across the industry toward agent-ready infrastructure. Instead of limiting AI to text generation, MCP-equipped agents can read structured outputs, plan workflows and take actions inside cloud systems. Google's adoption moves the protocol from a promising standard to a practical foundation for enterprise automation, data operations and intelligent services.

Companies building AI-driven applications can now assemble workflows that combine large model reasoning with operational tools without relying on custom glue code. This reduces failure points and keeps agents aligned with enterprise security rules.

Google says more services will receive MCP support in the coming months, expanding coverage across databases, storage, security and observability. As MCP becomes a default interface on Google Cloud, developers will be able to build agentic systems that interact with nearly every layer of the stack using a unified protocol.
Google rolled out fully managed MCP servers for BigQuery, Google Maps, Compute Engine, and Kubernetes Engine, making its cloud ecosystem agent-ready by design. The move eliminates weeks of connector setup and brings standardized access to AI agents through the Model Context Protocol. Enterprise customers get production-grade endpoints with built-in security and governance at no extra cost.

Google Cloud has launched fully managed MCP servers that give AI agents direct access to BigQuery, Google Maps Platform, Compute Engine, and Kubernetes Engine without the fragile connectors developers previously relied on [1]. The company describes the initiative as making Google "agent-ready by design," according to Steren Giannini, product management director at Google Cloud [1]. Instead of spending one to two weeks building custom integrations, developers can now paste a URL to a managed endpoint and connect autonomous AI agents to Google Cloud services immediately [1].
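
To make the "paste a URL" claim concrete, here is a minimal sketch of what the first hop of such a connection looks like at the wire level: MCP's streamable HTTP transport is JSON-RPC 2.0 over POST, so opening a session against a managed endpoint is an ordinary HTTP call. The endpoint URL below is a placeholder, authentication is omitted, and in practice a client such as Gemini CLI performs this handshake for you.

import requests

# Placeholder URL: substitute the managed MCP endpoint for the service you want.
ENDPOINT = "https://example.googleapis.com/mcp"  # hypothetical

# MCP is JSON-RPC 2.0; a session starts with an "initialize" request.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}

resp = requests.post(
    ENDPOINT,
    json=initialize,
    # The streamable HTTP transport may answer with JSON or an SSE stream.
    headers={"Accept": "application/json, text/event-stream"},
    timeout=30,
)
print(resp.status_code)
print(resp.text)  # server info and capabilities; tools/list follows the same pattern
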
The timing aligns with Google's latest Gemini 3 model release, pairing stronger reasoning capabilities with more dependable connections to real-world tools and data [1]. These managed MCP servers are hosted, production-grade endpoints that remove infrastructure setup and maintenance burdens [2]. Enterprise customers already paying for Google services get access at no extra cost, though the servers launch under public preview before reaching general availability early next year [1].
The Model Context Protocol, developed by Anthropic roughly a year ago as an open-source standard, has become the common language allowing AI agents to communicate with external tools and data sources [1][3]. Anthropic recently donated MCP to a new Linux Foundation fund dedicated to open-sourcing and standardizing AI agent infrastructure [1]. The protocol's beauty lies in its interoperability: because MCP is a standard, Google's servers can connect to any client, including Anthropic's Claude, OpenAI's ChatGPT, Gemini CLI, and AI Studio [1].
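
That client-agnosticism is visible in code. A minimal sketch using the open-source MCP Python SDK (module paths follow the SDK at the time of writing and may differ across versions; the endpoint URL is hypothetical) connects to a remote server and lists its tools exactly the way Claude, ChatGPT, or Gemini CLI would:

import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Hypothetical remote endpoint; any MCP-capable client can target the same URL.
    url = "https://example.googleapis.com/mcp"
    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
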
Instead of translating natural language into unpredictable API calls, MCP gives agents a clean, machine-readable way to discover capabilities, issue commands, and process responses [3]. This shift reduces the bespoke code in software projects and lowers the risk of bugs that come with manually building integrations [2].
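
The "machine-readable" part is concrete: each tool a server advertises carries a JSON Schema describing its inputs, and invoking it is a structured tools/call message rather than free-form text. The descriptor and call below are illustrative only, not the schema of an actual Google server:

# What a server might return from tools/list (illustrative shape).
tool_descriptor = {
    "name": "search_places",
    "description": "Find places matching a text query near a location.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "near": {"type": "string"},
        },
        "required": ["query"],
    },
}

# The matching JSON-RPC request an agent would issue to invoke it.
call_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "search_places",
        "arguments": {"query": "coffee", "near": "Mountain View, CA"},
    },
}
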
The BigQuery integration allows AI agents to query records and generate forecasts like revenue predictions without loading data into their context windows, avoiding cybersecurity risks associated with moving business information to new environments [2]. An analytics assistant can now query BigQuery directly for structured outputs [1]. The Google Maps integration, branded as Maps Grounding Lite, grounds agents on actual, up-to-date location information for places or trip planning rather than relying on a model's built-in knowledge [1][2]. Navigation apps can use it to help drivers find optimal routes [2].
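
For the BigQuery case, the client-side pattern is a single tool call carrying SQL as an argument; the server runs the query and returns structured results. The sources do not document the actual tool names, so the identifier, endpoint, and table below are placeholders:

import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def monthly_revenue() -> None:
    url = "https://example.googleapis.com/bigquery/mcp"  # hypothetical endpoint
    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "execute_sql" is a placeholder tool name, not the documented one.
            result = await session.call_tool(
                "execute_sql",
                {"query": "SELECT order_month, SUM(revenue) AS revenue FROM sales.orders GROUP BY order_month"},
            )
            for item in result.content:  # structured result blocks, not raw table dumps
                print(item)

asyncio.run(monthly_revenue())
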
The infrastructure-focused MCP servers for Compute Engine and Kubernetes Engine let ops agents provision instances, diagnose issues, remediate failures, and optimize costs [2]. Google executives Michael Bachman and Anna Berenberg explained that the GKE MCP server exposes a structured, discoverable interface allowing agents to interact reliably with both GKE and Kubernetes APIs, whether operating autonomously or with human-in-the-loop guardrails [2].
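
One way a "human-in-the-loop guardrail" could be built on the client side, sketched here under assumptions (the tool name, arguments, and endpoint are invented for illustration and are not the GKE server's real interface): the agent proposes an action, and a person confirms before the call is sent.

import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def remediate(session: ClientSession) -> None:
    # Hypothetical tool and arguments; real GKE MCP tool names may differ.
    name, args = "scale_node_pool", {"cluster": "prod-cluster", "node_pool": "default", "node_count": 5}
    print(f"Agent proposes: {name} {args}")
    if input("Apply this change? [y/N] ").strip().lower() == "y":
        result = await session.call_tool(name, args)
        print(result.content)
    else:
        print("Skipped.")

async def main() -> None:
    url = "https://example.googleapis.com/gke/mcp"  # hypothetical endpoint
    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            await remediate(session)

asyncio.run(main())
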
The bigger enterprise play involves Apigee, Google's API management product that many companies already use to issue API keys, set quotas, and monitor traffic [1]. Giannini explained that Apigee can essentially translate a standard API into an MCP server, turning endpoints like a product catalog API into tools an agent can discover and use, with existing security and governance controls layered on top [1]. This means the same API guardrails companies use for human-built apps now apply to AI agents [1][3].
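
Apigee's own mechanics are not shown in the sources, but the underlying idea of "translating" an API into MCP tools can be sketched: an MCP server simply wraps an existing HTTP endpoint as a typed tool that agents can discover. The catalog URL and the FastMCP helper from the open-source Python SDK are used here purely as an illustration of that concept, not as Apigee's implementation.

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("product-catalog")  # illustrative server name

@mcp.tool()
def lookup_product(sku: str) -> dict:
    """Look up one product in an existing (hypothetical) catalog API."""
    resp = requests.get(f"https://catalog.example.com/api/products/{sku}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # agents can now discover and call lookup_product as an MCP tool
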
Google's new MCP servers are protected by Google Cloud IAM, which explicitly controls what an agent can do with each server [1]. They're also shielded by Google Cloud Model Armor, described as a firewall dedicated to agentic workloads that defends against advanced threats like prompt injection and data exfiltration [1]. Administrators can rely on audit logging for additional observability [1].
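
IAM gating means the agent authenticates as a Google Cloud identity whose roles decide which tools it may use. One plausible client-side shape, assuming the managed endpoints accept standard OAuth bearer tokens (an assumption, not documented in the sources), is to mint a token from Application Default Credentials and attach it to each MCP request:

import google.auth
from google.auth.transport.requests import Request
import requests

# Application Default Credentials: the agent runs as a service account whose
# IAM roles determine which MCP tools it is allowed to call.
credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(Request())

resp = requests.post(
    "https://example.googleapis.com/bigquery/mcp",  # hypothetical endpoint
    # A real client completes the initialize handshake first; this only
    # illustrates how credentials would travel with an MCP request.
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    headers={
        "Authorization": f"Bearer {credentials.token}",  # assumption: bearer auth accepted
        "Accept": "application/json, text/event-stream",
    },
    timeout=30,
)
print(resp.status_code, resp.text)
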
Google plans to expand MCP support beyond the initial set of servers, rolling out coverage across databases, storage, logging, monitoring, and security in the coming months [1][3]. Giannini said Google expects more MCP servers to trickle in every week [1]. As MCP becomes a default interface on Google Cloud, developers will be able to build agentic systems that interact with nearly every layer of the stack using a unified protocol [3].
The shift signals a broader industry push toward agent-ready infrastructure where AI agents can operate with the same clarity as human operators but with higher speed and less ambiguity [3]. Companies building AI-driven applications can now assemble workflows that combine large model reasoning with operational tools without relying on custom glue code, reducing failure points and keeping agents aligned with enterprise security rules [3]. Watch for how quickly other cloud providers adopt similar MCP strategies and whether this standardization accelerates the deployment of production AI agents across industries.

Summarized by Navi