Google Cloud launches Agentic Data Cloud to power autonomous AI agents across enterprises


Google Cloud announced Agentic Data Cloud at Cloud Next 2026, a data infrastructure platform designed to enable autonomous AI agents to operate at scale. The platform addresses the shift from human-driven queries to agent-driven actions with three core components: Knowledge Catalog for automated metadata curation, a cross-cloud lakehouse that queries data across AWS and Azure without egress fees, and Data Agent Kit that lets engineers describe outcomes instead of writing code.

Google Cloud Reimagines Data Infrastructure for AI Agents

Google Cloud unveiled its Agentic Data Cloud at Cloud Next 2026 in Las Vegas, fundamentally rethinking how enterprise data stacks serve autonomous AI agents rather than human analysts [1][2]. The platform represents a shift from reactive intelligence to systems of action, where AI agents take direct business actions around the clock instead of waiting for humans to interpret dashboards. Andi Gutmans, VP and GM of Data Cloud at Google Cloud, told VentureBeat that "the data architecture has to change now" as companies move from human scale to agent scale [1]. The platform provides what Gutmans describes as the "connective tissue for AI agents" to access enterprise data without restrictions [2].

Source: VentureBeat

Knowledge Catalog Solves the Context Gap Problem

The Knowledge Catalog addresses what Google identifies as the "context gap": when agents misinterpret business-specific definitions and make costly errors [2]. Evolved from Dataplex, Google's existing data governance product, the catalog automates semantic metadata curation by inferring business logic from query logs without manual data steward intervention [1]. This architectural shift means data engineering teams can scale to their full data estate rather than just the curated subset a small team can maintain manually. The catalog scans documents including accounts, PDFs, PowerPoint presentations and images, extracting entities and studying relationships to build a navigable schema [2]. It covers BigQuery, Spanner, AlloyDB and Cloud SQL natively, and federates with third-party catalogs including Collibra, Atlan and Datahub [1]. Zero-copy federation extends semantic context from SaaS applications including SAP, Salesforce Data360, ServiceNow and Workday without requiring data movement.

Cross-Cloud Lakehouse Eliminates Data Gravity

The cross-cloud lakehouse tackles what Google calls "data gravity": when agents lose autonomy due to cross-cloud latency or data trapped in other platforms [2]. BigQuery can now query Apache Iceberg tables sitting on AWS S3 via Google's Cross-Cloud Interconnect, a dedicated private networking layer, with no egress fees and price-performance comparable to native AWS warehouses [1]. Gutmans explained that the previous federation worked through query APIs, which limited the optimizations BigQuery could apply to external data. The new storage-based sharing approach means AI agents can treat data stored in Azure data lakes or S3 buckets as if it were local in Google Cloud [2]. Bidirectional federation, now in preview, extends to Databricks Unity Catalog on S3, Snowflake Polaris and AWS Glue Data Catalog using the open Iceberg REST Catalog standard [1].
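The article gives no syntax, but BigQuery's existing pattern for external tables over Iceberg data suggests what this looks like in practice: a DDL statement pointing at Iceberg metadata through a cross-cloud connection. A sketch that builds such a statement, with the project, connection and bucket names all hypothetical:

```python
def iceberg_external_table_ddl(
    table: str, connection: str, metadata_uri: str
) -> str:
    """Build BigQuery DDL for an external table over Apache Iceberg data.

    `connection` names a BigQuery connection resource for the remote
    cloud region; `metadata_uri` points at the Iceberg table's current
    metadata file in the remote object store.
    """
    return (
        f"CREATE EXTERNAL TABLE `{table}`\n"
        f"WITH CONNECTION `{connection}`\n"
        f"OPTIONS (format = 'ICEBERG', uris = ['{metadata_uri}'])"
    )

ddl = iceberg_external_table_ddl(
    "analytics.events_iceberg",                      # hypothetical table
    "my-project.aws-us-east-1.s3-conn",              # hypothetical connection
    "s3://my-bucket/events/metadata/v1.metadata.json",
)
print(ddl)
```

Once such a table is defined, agents can query it with ordinary SQL; the storage-based sharing described above is what lets BigQuery optimize those queries as if the data were local.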

Source: SiliconANGLE

Data Agent Kit Shifts Engineers from Writing to Reviewing

The Data Agent Kit brings agent-centric tooling directly into developer workflows, shipping as portable MCP tools and IDE extensions for VS Code, Claude Code, Gemini CLI and Codex [1]. Rather than writing Spark pipelines to move data, engineers describe outcomes, such as a cleaned dataset for model training or a transformation enforcing governance rules, and agents select whether to use BigQuery, Lightning Engine for Apache Spark or Spanner, then generate production-ready code. "Customers are kind of sick of building their own pipelines," Gutmans said, noting they're "truly more in the review kind of mode" [1]. Google announced three specialized agents: a Data Engineering agent for building complex data transformations, a Data Science agent for automating AI model lifecycles across BigQuery and Spark, and a Database Observability agent that diagnoses and repairs infrastructure issues [2]. The Model Context Protocol provides a secure, universal interface allowing any agent to safely discover and use data assets, with interactions governed by existing IAM policies, VPC Service Controls and data residency requirements [2].

TheOutpost.ai
