Sources
[1]
Google makes an interesting choice with its new agent building tool for enterprises | TechCrunch
Google CEO Sundar Pichai opened the Google Cloud Next conference on Wednesday with a video in which he announced one of the company's biggest new products: Gemini Enterprise Agent Platform. Google's tool is intended for building and managing agents at scale. This is Google's answer to Amazon's Bedrock AgentCore and to Microsoft Foundry. Given that AI, and agents in particular, are furthest along for technical tasks like coding, and that the tech is so new to the enterprise that security remains a real concern, Google has made an interesting choice with this tool. Agent Platform is geared particularly toward IT and technical teams. The business folks, meanwhile, are directed toward what Google calls its Gemini Enterprise app, introduced in the fall. They can work with agents built by IT or build their own for tasks like scheduling meetings, performing trigger-based processes, creating shortcuts for repetitive tasks or creating and editing files without needing to switch apps, Google says. Google also underscored that the underlying models these tools tap into include Google's own Gemini LLM and Nano Banana 2 image generator, as well as Anthropic's Claude. The company announced support for Claude Opus, Sonnet and Haiku -- in other words, flagship, reasoning, and lower-cost models, including the new Opus 4.7 that launched last week.
[2]
How Google just revamped Gemini Enterprise for the agentic era - here's what's new
A new Agent Platform streamlines automated work and security. As companies use more agents in their workflows, managing them securely and efficiently becomes a primary challenge. Google just created a possible solution, wrapped in the same accessible interface that many teams are used to. On Wednesday at Google Cloud Next, the company's annual enterprise conference, Google released its new Gemini Enterprise Agent Platform for developers. Evolved from Vertex AI, Agent Platform "brings together the model selection, model building, and tuning services of Vertex AI that customers love, along with new features for agent integration, security, DevOps, orchestration, and more," CEO Thomas Kurian said in the announcement. The platform revamps the current Gemini Enterprise experience and offers over 200 models, including Gemini 3.1 Pro, Nano Banana 2, Gemma open models, and competitive models from Anthropic, such as its just-released Opus 4.7. Since Agent Platform is built on Vertex, Google noted that those services will now flow through Agent Platform exclusively. In the platform, according to Google, developers can design an agent's life cycle start to finish, from building the agents themselves to scaling and governing them. MCP support and an upgraded Agent Development Kit help developers maximize reasoning capabilities by structuring agents into sub-networks. That tiered approach should set agents up to handle complex tasks, Google said, adding that other features like faster runtime and Memory Bank help agents delegate to each other more efficiently and operate with more context for longer. "Gemini Enterprise is now an end-to-end system for the agentic era, built for agents that can execute complex, multi-step work processes," Google said in the announcement.
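The tiered, sub-agent structure described above can be pictured with a toy coordinator pattern: a top-level agent decomposes work and routes each narrow task to a specialist. This is an illustrative sketch in plain Python, not the real Agent Development Kit API; every class, name, and task string here is invented.

```python
# Hypothetical sketch of tiered agent delegation: a coordinator
# routes sub-tasks to specialist sub-agents and collects results.

class SubAgent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # callable that performs one narrow task

    def run(self, task):
        return self.handler(task)

class Coordinator:
    """Top-level agent that fans work out across sub-agents."""
    def __init__(self, sub_agents):
        self.sub_agents = {a.name: a for a in sub_agents}

    def delegate(self, plan):
        # plan: list of (agent_name, task) steps, e.g. from a planner model
        return [self.sub_agents[name].run(task) for name, task in plan]

research = SubAgent("research", lambda t: f"findings for {t!r}")
writer = SubAgent("writer", lambda t: f"draft based on {t!r}")
team = Coordinator([research, writer])

results = team.delegate([("research", "Q3 sales"), ("writer", "summary")])
print(results)
```

In a real platform the plan itself would come from a reasoning model, which is the capability the tiered structure is meant to exploit.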
The company also emphasized that it has baked security into the new platform through tools such as Agent Identity, which assigns each agent a cryptographic ID. If you'd rather not take any risks, however, you can use Google's new Agent Simulation tool to "stress-test your agents against real-world scenarios before they ship," the company said. Once developers are done building and testing, they can publish agents from the platform to the Gemini Enterprise app, where employees can run those agents or build their own with no-code or low-code options like Google's Agent Studio and Agent Designer. A Google employee demonstrated how users can deploy multiple agents in the enterprise app at once to tackle an inventory or marketing challenge, as if they were a team of workers. In the demo, each individual agent handled a specific element of a multi-step project for a furniture company, using the organization's Workspace contents to pull relevant data and strategy points. Running multiple autonomous agents can pose a host of privacy and security risks for any organization, especially when non-developer employees use them. Google emphasized that its revamped Gemini Enterprise addresses this by simplifying guardrails and permissions before users can access agents. The company said it "provides the same level of oversight and auditability found in essential business applications like payroll or quarterly financial reporting." The Gemini Enterprise app sits atop Agent Platform, which Google said standardizes governance and security. "We provide a single control plane for governance in Agent Platform, so every employee can use and share agents with full IT visibility," the company added. "Both no-code and pro-code agents are managed through a consistent model for identity, security, and auditing."
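The Agent Identity idea, a per-agent cryptographic ID with an auditable trail of actions, can be sketched in miniature. This toy uses a shared-secret HMAC purely for illustration; a production system such as the one described above would use asymmetric keys and managed key infrastructure, and every function and field name below is hypothetical.

```python
import hashlib, hmac, json

def agent_id(public_material: bytes) -> str:
    """Derive a stable identifier from agent key material (toy scheme)."""
    return hashlib.sha256(public_material).hexdigest()[:16]

def sign_action(secret: bytes, agent: str, action: dict) -> dict:
    """Produce a tamper-evident audit record for one agent action."""
    record = {"agent": agent, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return record

def verify_action(secret: bytes, record: dict) -> bool:
    """Recompute the signature; any edit to the record breaks it."""
    payload = json.dumps(
        {"agent": record["agent"], "action": record["action"]},
        sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

secret = b"per-agent-secret"          # stand-in for a managed key
aid = agent_id(b"inventory-agent-key")
entry = sign_action(secret, aid, {"op": "update_stock", "sku": "CH-104"})
assert verify_action(secret, entry)
```

The point of the pattern is the auditable trail: any later modification of a logged action invalidates its signature.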
Google also announced Agentic Data Cloud, a new data architecture intended to help scale AI agents. Several new features let developers instantly query data without moving it out of AWS or Azure, leverage new data science tools across multiple surfaces, and enrich files with metadata to give agents more semantic context, among other capabilities. At the Workspace level, Google launched Workspace Intelligence, which uses Gemini reasoning to understand "complex semantic relationships within your Workspace apps (such as Docs, Slides, or Gmail) content, your active projects, your collaborators, and your organization's domain knowledge," the company wrote. While that may sound like what Gemini already does, Google framed Workspace Intelligence as an additional tool that Gemini will leverage when automating tasks such as slide generation and project prep. Google noted a few upgrades in the new feature, including proprietary infographics in Docs and advanced personalization tailored to a user's style. "Workspace Intelligence retrieves your relevant emails, chats, files, and information from the web to transform ideas into professionally formatted drafts that mimic your exact voice, brand, style, and company templates," Google said.
[3]
Google on why its all-in-one AI stack embraces competitors
Google Cloud Next Google Cloud's Andi Gutmans said that the company holds a structural advantage over its largest rivals in the race to win value from AI agents in the enterprise, arguing that no competitor currently combines cloud computing infrastructure, frontier AI models, and a data platform under one roof. "We're really the only provider that has the AI infrastructure, the model and the data platform," he said in response to a question from The Register during a briefing with reporters on the sidelines of Google Cloud Next. Gutmans, who runs Google Cloud's data business, including its analytics, transactional databases, storage and business intelligence products, said the integrated stack is critical to achieving value from AI. "If you think about AWS and Azure, they've got the infrastructure, they don't have the model," Gutmans said. "You look at the data providers, they have the data platform, but they've got to get the infrastructure and model from others. The AI model providers just do the AI model." Gutmans said as enterprises shift from AI tools that respond to human queries toward agents that act autonomously on behalf of employees, the significance of those gaps becomes more pronounced. He said that transition puts pressure on the underlying data platform in ways earlier architectures were not designed to handle, and that the economics of running agents at scale reward providers that control more of the stack. "If you ask 'How is this agentic data cloud really different because everyone is saying the same thing?' The answer is we are uniquely positioned to integrate these things very tightly which is now more important than ever as you go from human scale to agent scale because you're going to have to bend the price-performance curve or it's going to be too expensive." Gutmans said Google spent the past year and a half rethinking its data platform for the shift to agent scale.
He said roughly 90 percent of enterprise data remains unstructured and has historically gone unused. He said the Knowledge Catalog announced at the show is designed to make that data available to agents without requiring armies of data engineers to prepare it manually. The moment that made the change possible was not a product decision but a model one. He said that, when Gemini 2.5 arrived, there was a tipping point in reasoning capability that forced Google to re-engineer every agent in its data portfolio. "We've completely re-engineered every single one of our agents in the last year. So even the conversation analytics agent, the data science agent, the data engineering agent -- we've had to be less prescriptive with the models. That's where the Knowledge Catalog and the MCPs help because they're so much better at reasoning around them. That is the big tipping point," he said. "If you ask a customer how conversation analytics was last year versus now, they'll tell you they couldn't use it last year. It worked for simple stuff." He said the company has roughly 80 data-related announcements at the conference this week, and that nearly every agent product in his portfolio has been rebuilt in the past year. "The models have gone so far," he said. "It's night and day." He said approaches that required months of manual ontology-building are no longer necessary. "A year ago, people would be like, 'Let me get Palantir and get 20 people and work for six months and build an ontology.' That's not how you would approach it anymore," he said. "If you really want to activate your whole data estate you can't do it with people." The Register asked Gutmans how Google navigates a market where it simultaneously competes with, and partners with, many of the same software and infrastructure providers. Google makes its own TPU AI accelerators but partners with Nvidia on chips. It has a data analytics platform in BigQuery but also works with Databricks, Snowflake, and Informatica.
GCP lets users create, deploy, and govern AI agents to carry out tasks across their digital estates, but it can also host those same capabilities from its partners at Salesforce and ServiceNow. "Our view, and I don't think it's different than any other hyperscaler, is we want to build the best platform," he said. Gutmans said that the integrated stack is a real and durable competitive advantage, particularly as security, governance and cost efficiency become harder to manage across fragmented systems. He said the same principle applies to the cross-cloud lakehouse Google announced this week, which he said allows customers to query data sitting in Amazon Web Services or Microsoft Azure with low latency. "Differentiated, but open," is how he described Google's approach. ®
[4]
Google says it has all the answers for AI agent sprawl
As biz agentic bot-wrangling intensifies, company says AI orchestration, security and infrastructure tools on the way Google Cloud Next Google has overhauled its enterprise AI strategy in the wake of the agentic push across the biz landscape, rebranding and expanding its Vertex AI developer platform into what it now calls the Gemini Enterprise Agent Platform. It comes as the challenge facing businesses has shifted from building individual AI agents to managing hundreds or thousands of them at once - something Workday and others are trying to tackle too. "The early versions of AI models were really focused on answering questions that people had and assisting them with creative tasks. Now we're seeing as the models evolve people wanting to delegate tasks and sequences of tasks to agents," Google Cloud CEO Thomas Kurian told reporters during a press briefing. "And these agents then being able to turn around and use a computer, use all of GCP and Workspace as a tool." To meet the moment, Google rolled out infrastructure in the form of its eighth generation of TPU chips and security updates through its purchase of Wiz. Those announcements as well as the Gemini Enterprise Agent Platform are designed to give companies a single system for developing, deploying, governing, and monitoring AI agents across their organizations. Google says it can act as the connective layer between a company's data, its employees, and the growing fleet of autonomous agents that enterprises are beginning to rely on. "All the pieces are designed to do this," Kurian said in the briefing. "The security to protect these agents. Our data cloud to feed the agents context from within the system. Our AI infrastructure to optimize performance, scale and cost of how agents run. This year is the next evolution of where we see this AI technology going." 
He said organizations are choosing Google Cloud because of its ability to deliver "a comprehensive backbone for innovation" rather than "individual services that can be cobbled together." Gemini Enterprise Agent Platform is organized around four pillars: build, scale, govern, and optimize. On the build side, Google introduced Agent Studio, a low-code interface for creating agents using natural language, alongside an upgraded Agent Development Kit with a new graph-based framework for orchestrating multiple agents working together, the company said during a media prebriefing. It also provides an agent registry that gives organizations a central catalog of each internal agent and tool, the company said. Also inside the new platform is an agent marketplace that offers pre-built agents from partners including Atlassian, Oracle, ServiceNow, and Workday. The platform includes Agent Runtime, a feature that Google says delivers sub-second cold starts and gives users the ability to provision new agents in seconds. It also supports long-running agents -- autonomous processes that can operate for hours or days on complex business workflows like financial reconciliation or sales prospecting. A new Memory Bank feature gives agents persistent, long-term memory across sessions rather than starting from scratch each time, the company said. But it is the governance capabilities that may matter most to enterprise buyers who fear that AI tools may proliferate across their organizations with limited oversight. Agent Identity assigns every agent a unique cryptographic ID with defined authorization policies, creating an auditable trail of every action, Google said. Agent Gateway, meanwhile, acts as the police for agent ecosystems, enforcing security policies and protecting against prompt injection, tool poisoning, and data leakage. 
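The graph-based orchestration framework mentioned above can be pictured as a small dependency graph of agents executed in topological order: agents are nodes, data dependencies are edges. The step names and the use of Python's stdlib `graphlib` are illustrative assumptions, not the actual Agent Development Kit API.

```python
from graphlib import TopologicalSorter

# Hypothetical agent graph: each key depends on the agents listed
# as its predecessors, so execution order must respect the edges.
steps = {
    "fetch_inventory": [],
    "forecast_demand": ["fetch_inventory"],
    "draft_po": ["forecast_demand"],
    "notify_buyer": ["draft_po"],
}

# Each "agent" is just a labelled function in this sketch.
agents = {name: (lambda n=name: f"{n}: done") for name in steps}

# A topological sort decides who runs when; a real orchestrator
# could also run independent branches concurrently.
order = list(TopologicalSorter(steps).static_order())
results = [agents[name]() for name in order]
print(order)
```

The same structure generalises: any two nodes with no path between them could be dispatched in parallel, which is where the "multiple agents working together" framing pays off.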
An Agent Anomaly Detection system flags suspicious behavior by analyzing the intent behind agent actions, and gives users the chance to stop it before it goes rogue. Then there are the ways Google has said its tools can be used to fine-tune agents, such as Agent Simulation for stress-testing them against synthetic interactions before deployment. Agent Evaluation scores live performance, while Agent Observability dashboards trace execution paths and diagnose problems in real time for rapid debugging, the cloud giant told reporters. Google said the Gemini Enterprise app -- the consumer-facing side of the platform -- is a place where non-technical employees can build and manage their own agents using Agent Designer. Users can create schedule- or trigger-based agents to automate multi-step processes, while an "Inbox" in Gemini Enterprise gives those users a central hub for monitoring agent activity, with notifications sorted into categories like "Needs your input," "Errors," and "Completed." Google CEO Sundar Pichai said, based on internal adoption statistics, there is evidence of a shift toward agentic workflows. He said 75 percent of all code at Google is now AI-generated and approved by engineers, up from 50 percent last (northern hemisphere) fall. In a blog, he described a recent internal code migration completed by agents and engineers working together that "was completed six times faster than was possible a year ago with engineers alone." Its tools are surging in popularity, Google claims, with nearly 75 percent of its Cloud customers using AI products, while Gemini Enterprise saw 40 percent growth in paid monthly active users quarter over quarter in Q1, and Google's first-party models now process more than 16 billion tokens per minute via direct API use, up from 10 billion the prior quarter. There also appears to be a lot of token-maxxing among customers.
Google said 330 Google Cloud customers each processed more than one trillion tokens, while 35 reached the 10-trillion-token milestone with its models. Within the press material for the show, several large customers provided testimonials about their own Gemini deployments. GE Appliances said it has more than 800 of Google's AI agents running across manufacturing, logistics, and supply chain operations. KPMG reported 90 percent Gemini Enterprise adoption among employees with more than 100 agents deployed in the first month. Tata Steel said it deployed over 300 specialized agents in nine months. Merck announced a partnership valued at up to $1 billion to build an agentic platform across its R&D, manufacturing, and commercial functions. The announcements land in an increasingly competitive market for enterprise AI platforms. Microsoft, Amazon Web Services, and Salesforce have all made a push into agent orchestration and management in recent months. Google's approach leans heavily on vertical integration, with the hope that designing chips, models, infrastructure, and application layers together produces better results than assembling components from different vendors. Google also announced a $750 million fund to support its partner ecosystem in building and deploying agentic AI, along with agreements with McKinsey, Deloitte, and other consulting firms that will receive early access to upcoming models from Google DeepMind. ®
[5]
Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager
Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog that doubled in a year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an "agent manager" and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google's full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate. Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% and more than double the roughly $155 billion of a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as "a fundamental rewiring of technology and an accelerant of human ingenuity." The money suggests he may not be exaggerating. The keynote, titled "The Agentic Cloud," was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. 
The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them. The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google's AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025, a 58% increase in a year. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google's generative models. Gemini 3 Pro has had, in Pichai's words, "the fastest adoption of any model in our history." These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve. Pichai's most consequential framing may have come in a podcast appearance earlier this month: "A lot of what are just information-seeking queries will be agentic in Search. You'll be completing tasks. You'll have many threads running." 
He described Search evolving from a retrieval engine into an "agent manager," an orchestration layer that dispatches AI agents to complete tasks on a user's behalf rather than returning a list of links. The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy's, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as "an indecisive shopper," said he is "looking forward to the day when agents can help me get from discovery to purchase." The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google's advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google's own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter. The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are "handing you the pieces, not the platform," leaving enterprise teams to integrate components themselves. 
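Since UCP is described as riding on REST and JSON-RPC transports, the shape of an agent-to-merchant call can be sketched as a JSON-RPC 2.0 envelope. The method name, parameter names, and agent URI below are invented for illustration; the real schema is defined by the open-source UCP specification, not shown here.

```python
import json

def make_rpc_request(req_id, method, params):
    """Build a standard JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Hypothetical checkout request a shopping agent might send to a
# UCP-speaking merchant endpoint, with the merchant remaining the
# merchant of record as the article describes.
req = make_rpc_request(
    1,
    "checkout.create",  # invented method name
    {
        "items": [{"sku": "SOFA-210", "qty": 1}],
        "buyer_agent": "agent://example.com/shopper",  # invented URI scheme
        "merchant_of_record": "store.example",
    },
)
wire = json.dumps(req)
print(wire)
```

What makes the store "a programmable service" is exactly this: the agent speaks a fixed envelope format rather than scraping a storefront UI.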
The claim rests on Google's vertical integration: Ironwood TPUs and a forthcoming eighth generation, split into Broadcom-designed training chips and MediaTek-designed inference chips, provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof. The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google's eight million paid Gemini Enterprise seats in roughly four months represent a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one. The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the "top question is definitely around compute capacity and all the constraints, be it power, land, supply chain," and expects Google to remain supply-constrained through 2026. The backlog provides the justification.
At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made. Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.
[6]
Google Cloud Next 2026: AI agents, A2A protocol, Workspace Studio, and the full-stack bet against OpenAI and Anthropic
Summary: Google rebranded and consolidated its AI platform at Cloud Next 2026, renaming Vertex AI to the Gemini Enterprise Agent Platform and absorbing Agentspace into a unified Gemini Enterprise product. The announcements include Workspace Studio (no-code agent builder), 200+ models in the Model Garden including Anthropic Claude, partner agents from Box, Workday, Salesforce, and ServiceNow, ADK v1.0 stable releases across four languages, Project Mariner (web-browsing agent), managed MCP servers with Apigee as an API-to-agent bridge, and A2A protocol v1.0 in production at 150 organisations. Kurian framed the strategy as owning the full stack from chip to inbox while competitors "hand you the pieces, not the platform." Google used the opening keynote of Cloud Next 2026 on Tuesday to unveil what amounts to a full rebranding and consolidation of its AI platform around agents. Vertex AI is now the Gemini Enterprise Agent Platform. Google Agentspace, the employee-facing AI assistant, has been absorbed into a unified product called Gemini Enterprise. The announcements span a no-code agent builder for Google Workspace, a redesigned developer platform with more than 200 models including third-party options such as Anthropic's Claude, a web-browsing agent called Project Mariner, managed MCP servers across Google Cloud services, and the production-grade Agent2Agent protocol for cross-platform agent communication. Thomas Kurian, Google Cloud's chief executive, titled the keynote "The Agentic Cloud" and drew a deliberate contrast with competitors: other vendors, he said, are "handing you the pieces, not the platform," leaving teams to integrate components themselves. The timing is deliberate. OpenAI's Operator is scoring 87% on complex browser task benchmarks and the company has recruited Cognizant and CGI to push its Codex coding agent into enterprise software shops, with enterprise revenue now accounting for 40% of OpenAI's total. 
Anthropic has launched a marketplace for Claude-powered enterprise tools and its Model Context Protocol has reached 10,000 servers and 97 million monthly SDK downloads. Google is fighting from third position in cloud market share, behind AWS and Microsoft Azure, but exited the fourth quarter of 2025 with the fastest growth rate of the three at 50% year on year, and is betting that vertical integration, owning the model, the runtime, the silicon, and the distribution channel through Workspace, gives it an advantage neither competitor can replicate. Google Workspace Studio is the most consumer-facing announcement. It is a no-code platform that lets business users build and deploy AI agents across Gmail, Docs, Sheets, Drive, Meet, and Chat by describing automations in plain language. A user can type "every Friday, ping me to update my tracker" and Gemini creates the automation. Workspace Studio connects to third-party applications including Asana, Jira, Mailchimp, and Salesforce, and can call external APIs via webhooks or run custom logic through Apps Script. It is rolling out to Google Workspace business, enterprise, and education customers. The developer-facing platform, now called the Gemini Enterprise Agent Platform, received deeper upgrades. Agent Designer, a visual flow canvas for building agent workflows, is in preview. Agent Engine Sessions and Memory Bank, which give agents persistent context across interactions, are generally available. A new Agent Garden provides prebuilt agent solutions for customer service, data analysis, and creative tasks. A free tier via Express mode lowers the entry barrier. The Model Garden now hosts more than 200 models spanning Google's own Gemini and Gemma families, third-party models including Anthropic Claude, and open models such as Llama. 
Google also announced six new agents for data engineering and coding in BigQuery, including a data engineering agent that automates pipeline creation from natural language prompts and a code interpreter that translates queries into executable Python with visualisations. Partner agents from Box, Workday, Salesforce, ServiceNow, Dun & Bradstreet, and S&P Global are integrated into the platform, giving enterprise customers prebuilt capabilities for document intelligence, HR self-service, IT operations, and financial data. Project Mariner, Google DeepMind's web-browsing agent powered by Gemini 2.0, scores 83.5% on the WebVoyager benchmark and handles ten concurrent tasks on cloud-based virtual machines. It automates shopping, information retrieval, and form-filling, and is available to Google AI Ultra subscribers in the United States. The roadmap includes a visual builder called Mariner Studio in the second quarter, cross-device synchronisation in the third quarter, and an agent marketplace in the fourth quarter. The most strategically significant announcement may be the least visible to end users. Google's Agent2Agent (A2A) protocol, originally launched with more than 50 technology partners, has reached 150 organisations in production, not pilot, routing real tasks between agents built on different platforms. The protocol is now governed by the Linux Foundation's Agentic AI Foundation and has reached version 1.2, with agent cards carrying cryptographic signatures for domain verification. Microsoft, AWS, Salesforce, SAP, and ServiceNow are running A2A in production environments. A2A is designed to complement rather than compete with Anthropic's Model Context Protocol (MCP). MCP handles how an agent connects to tools and data sources. A2A handles how agents communicate with each other across organisational and platform boundaries.
Google adopted MCP across its own services in December 2025, launching fully managed remote MCP servers for Google Maps, BigQuery, Compute Engine, and Kubernetes Engine, with Cloud Run, Cloud Storage, AlloyDB, Cloud SQL, Spanner, Looker, and Pub/Sub on the roadmap. Apigee, Google's API management platform, now functions as an MCP bridge, translating any standard API into a discoverable agent tool with existing security and governance controls. Google is simultaneously positioning A2A as the standard for the layer above: the orchestration of multiple agents from multiple vendors working together on a single task. The practical implication is that a Salesforce agent built on Agentforce can hand off a task to a Google agent running on Vertex AI, which can query a ServiceNow agent for IT asset data, all through A2A without any of the three systems needing to understand each other's internal architecture. Native A2A support is now built into Google's Agent Development Kit, LangGraph, CrewAI, LlamaIndex Agents, Semantic Kernel, and AutoGen. Google's open-source Agent Development Kit reached stable v1.0 releases across Python, Go, and Java, with TypeScript support also available. It is a code-first framework optimised for Gemini but model-agnostic and deployable to any container or Kubernetes environment. The security layer includes Model Armor for defence against indirect prompt injection, zero-trust architecture applied to decentralised agent systems, and access management through Google Cloud IAM with audit logging. OpenAI's own enterprise agent push through Codex and systems integrator partnerships has reached three million weekly users. Anthropic's enterprise marketplace for Claude-powered tools is building an ecosystem through partners including Snowflake. Microsoft's Copilot is embedded in virtually every Fortune 500 company. AWS has Bedrock with its own agents framework maturing rapidly. The enterprise AI agent market is not a two-horse race. 
It is a five-way contest in which each competitor has a structural advantage the others lack. OpenAI has the strongest consumer brand and the most advanced reasoning models. Anthropic has the most trusted safety positioning and the fastest-growing enterprise revenue. Microsoft has the deepest enterprise distribution through Office and Azure. AWS has the largest cloud infrastructure base and the strongest developer gravity. Google's argument is that it is the only company that owns all four layers of the stack: the custom silicon (Ironwood TPUs), the frontier models (Gemini), the cloud platform (now unified as the Gemini Enterprise Agent Platform), and the enterprise distribution channel (Workspace with more than three billion users across Google's productivity tools). Kurian framed the strategy explicitly: "If you want to adopt a technology successfully, you need to pick a few important projects and do them well, rather than spraying on a lot of little projects." No other competitor controls the full vertical from chip to application. Google's own AI Agent Trends report, published ahead of the conference, found that 89% of business teams are already using AI agents and the average organisation runs 12. The most common enterprise use cases are customer service at 49%, marketing at 46%, security operations at 46%, and IT support at 45%. Early customer deployments suggest the productivity claims are not purely theoretical: Danfoss, the Danish industrial manufacturer, automated 80% of transactional decisions in email-based order processing using Google's agents, reducing response times from 42 hours to near real-time. Suzano, a Brazilian pulp and paper company, built an agent with Gemini Pro that translates natural language into SQL queries, cutting query time by 95% for 50,000 employees. The agents run on Google's Gemini model family, with the Gemini 2.5 generation being retired in October in favour of the 3.x line. 
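A natural-language-to-SQL agent like Suzano's needs a guard before executing model-generated SQL. The check below is a deliberately minimal illustration of that idea (real deployments rely on proper SQL parsing and IAM controls, not a regex; the function name is an assumption):

```python
import re

def is_safe_select(sql: str) -> bool:
    """Allow only a single read-only SELECT statement."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:  # reject multi-statement payloads
        return False
    return re.match(r"(?is)^\s*select\b", stmt) is not None

print(is_safe_select("SELECT region, SUM(tons) FROM pulp GROUP BY region"))  # True
print(is_safe_select("DROP TABLE pulp"))                                     # False
```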
Gemini 3 Pro and Gemini 3 Flash, released in late 2025 and iterated through early 2026, provide the reasoning backbone. Gemini 3 Flash delivers a 15% improvement in overall accuracy over Gemini 2.5 Flash and is optimised for high-frequency agentic workflows and real-time processing. Gemini 3.1 Pro, the most advanced reasoning variant, is available in preview. A new experimental model, GLM 5, targets complex systems engineering and long-horizon agentic tasks through the Model Garden. Gemini 3.2 is expected to be formally announced during the conference, with an expanded context window beyond one million tokens and optimised parameter counts for reduced inference latency. Demis Hassabis, DeepMind's chief executive, stated in January that his team is "focusing on Gemini 4 this year." Google also recently launched Gemma 4 open models under Apache 2.0 licensing, built from the same research as Gemini 3 and providing an open-weight alternative for enterprise customers who need to run models on their own infrastructure. The infrastructure beneath the models is equally central to the pitch. Ironwood, Google's seventh-generation TPU announced the same day, delivers 4.6 petaFLOPS per chip and scales to 9,216-chip superpods producing 42.5 exaFLOPS. Anthropic has committed to up to one million Ironwood units. The custom silicon means Google can offer inference at costs that customers buying Nvidia GPUs at retail cannot match, which, in a market where inference is the dominant and growing expense, translates directly into pricing power for the agent services that run on top. Google Cloud holds roughly 11% of the cloud infrastructure market. AWS holds 31%. Azure holds 25%. The gap is significant and Cloud Next will not close it. But the agentic era, if it materialises at the scale Google is projecting, reshuffles the competitive dynamics in ways that favour a company with a vertically integrated stack over companies that assemble their AI capabilities from multiple vendors.
Google is betting that the enterprise customer who adopts AI agents at scale will choose the platform where the model, the runtime, the silicon, the governance, and the productivity suite are all built by the same company and optimised to work together. It is a large bet. Cloud Next 2026 is where Google is asking enterprises to take it.
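The Ironwood superpod figure quoted above is easy to sanity-check from the per-chip number:

```python
# Per-chip petaFLOPS times pod size should land near the quoted 42.5 exaFLOPS.
chips_per_pod = 9_216
pflops_per_chip = 4.6
eflops = chips_per_pod * pflops_per_chip / 1_000  # 1 EFLOPS = 1,000 PFLOPS
print(round(eflops, 1))  # 42.4, consistent with the quoted 42.5
```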
[7]
Google and AWS split the AI agent stack between control and execution
The era of enterprises stitching together prompt chains and shadow agents is nearing its end as more options for orchestrating complex multi-agent systems emerge. As organizations move AI agents into production, the question remains: "How will we manage them?" Google and Amazon Web Services offer fundamentally different answers, illustrating a split in the AI stack. Google's approach runs agent management at the system layer, while AWS's harness method operates in the execution layer. The debate over how to manage and control agents gained new energy this past month as competing companies released or updated their agent builder platforms -- Anthropic with the new Claude Managed Agents and OpenAI with enhancements to the Agents SDK -- giving developer teams options for managing agents. AWS, with new capabilities added to Bedrock AgentCore, is optimizing for velocity -- relying on harnesses to bring agents to production faster -- while still offering identity and tool management. Meanwhile, Google's Gemini Enterprise adopts a governance-focused approach using a Kubernetes-style control plane. Each method offers a glimpse into how agents move from short-burst task helpers to longer-running entities within a workflow. To understand where each company stands, here's what's actually new. Google released a new version of Gemini Enterprise, bringing its enterprise AI agent offerings -- Gemini Enterprise Platform and Gemini Enterprise Application -- under one umbrella. The company has rebranded Vertex AI as Gemini Enterprise Platform, though it insists that, aside from the name change and new features, it's still fundamentally the same interface. "We want to provide a platform and a front door for companies to have access to all the AI systems and tools that Google provides," Maryam Gholami, senior director, product management for Gemini Enterprise, told VentureBeat in an interview.
"The way you can think about it is that the Gemini Enterprise Application is built on top of the Gemini Enterprise Agent Platform, and the security and governance tools are all provided for free as part of Gemini Enterprise Application subscription." On the other hand, AWS added a new managed agent harness to Bedrock AgentCore. The company said in a press release shared with VentureBeat that the harness "replaces upfront build with a config-based starting point powered by Strands Agents, AWS's open source agent framework." Users define what the agent does, the model it uses and the tools it calls, and AgentCore does the work to stitch all of that together to run the agent. The shift toward stateful, long-running autonomous agents has forced a rethink of how AI systems behave. As agents move from short-lived tasks to long-running workflows, a new class of failure is emerging: state drift. As agents continue operating, they accumulate state -- memory, tool responses and evolving context. Over time, that state becomes outdated. Data sources change, or tools can return conflicting responses. As that state drifts, the agent becomes more vulnerable to inconsistencies and less truthful. Agent reliability becomes a systems problem, and managing that drift may need more than faster execution; it may require visibility and control. It's this failure point that platforms like Gemini Enterprise and AgentCore try to prevent. Though this shift is already happening, Gholami admitted that customers will dictate how they want to run and control any long-running agent. "We are going to learn a lot from customers where they would be using long-running agents, where they just assign a task to these autonomous agents to just go ahead and do," Gholami said. "Of course, there are tricks and balances to get right and the agent may come back and ask for more input." What's becoming increasingly clear is that the AI stack is separating into distinct layers, solving different problems.
AWS and, to a certain extent, Anthropic and OpenAI, optimize for faster deployment. Claude Managed Agents abstracts much of the backend work for standing up an agent, while the Agents SDK now includes support for sandboxes and a ready-made harness. These approaches aim to lower the barrier to getting agents up and running. Google offers a centralized control panel to manage identity, enforce policies and monitor long-running behaviors. Enterprises likely need both. As some practitioners see it, their businesses have to have a serious conversation on how much risk they are willing to take. "The main takeaway for enterprise technology leaders considering these technologies at the moment may be formulated this way: while the agent harness vs. runtime question is often perceived as build vs. buy, this is primarily a matter of risk management. If you can afford to run your agents through a third-party runtime because they do not affect your revenue streams, that is okay. On the contrary, in the context of more critical processes, the latter option will be the only one to consider from a business perspective," Rafael Sarim Oezdemir, head of growth at EZContacts, told VentureBeat in an email. Iterating quickly lets teams experiment and discover what agents can do, while centralized control adds a layer of trust. What enterprises need is to ensure they are not locked into systems designed purely for a single way of executing agents.
[8]
Gemini Enterprise Agent Platform lets you build, govern, and optimize your agents.
Gemini Enterprise Agent Platform is our new developer platform that has everything your technical teams need to build, scale, govern and optimize agents. Think of it as a one-stop-shop for all of your autonomous agents, built on top of our leading infrastructure and integrated with our data and security capabilities. This new platform, announced at Google Cloud Next '26, brings the model building and tuning services of Vertex AI together with new features for agent integration, security, DevOps and more. Agent Platform is designed to flex to your team's unique needs and provides access to Gemini 3.1 Pro, Gemini 3.1 Flash Image (Nano Banana 2) and Lyria 3. It also supports Anthropic's Claude Opus, Sonnet and Haiku. Plus, Agent Platform integrates with the Gemini Enterprise app, which acts as the front door for AI for every employee. Learn more about Gemini Enterprise Agent Platform on the Cloud blog.
[9]
Agentic AI blueprint key focus for Genpact and Google - SiliconANGLE
You can't prompt your way out of complexity: Why enterprise AI is turning to process intelligence

As enterprises navigate the complexities of scaling AI initiatives, the agentic AI blueprint for success lies in combining deep process intelligence with powerful cloud platforms, enabling organizations such as Genpact Ltd. to transform finance and operations workflows through reimagined processes and effective partner ecosystems. That convergence defines the moment the industry finds itself in as a wave of agentic AI announcements proliferates. The technology is maturing quickly; the partner relationships that determine who actually captures value are maturing too, according to Nidhi Srivastava (pictured, right), senior vice president and head of digital and cloud at Genpact. "The partner ecosystem is now the new village wherein we work with the vision of the client," Srivastava said. "We've been working with business processes for our clients for many, many years. We have a very deep understanding of what the last mile looks like for our clients. When you combine that with the power of a platform like Google Cloud, it becomes a motif for success." Srivastava and Pallab Deb (left), managing director of global partner go-to-market practice and product engagement at Google Cloud LLC, spoke with theCUBE's John Furrier and co-host Alison Kosik at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the agentic AI blueprint for success and why process intelligence is surging as the decisive enterprise advantage. (* Disclosure below.) One of the clearest signals of agentic AI's maturity is where early enterprise momentum is concentrating. Finance transformation -- long considered a back-office function -- has emerged as a primary beachhead for agentic deployments, driven by its deterministic nature and clear return on investment.
Genpact moved early on this signal, launching its agentic AP Suite to automate accounts payable workflows end to end as part of its broader Service-as-Agentic-Solutions portfolio, Srivastava noted. "I think we are clearly at the point where we should solve the real problems," Srivastava said. "Playing at the edge and getting used to it and feeling comfortable -- we are past that point. Finance transformation has kind of bubbled up to the top." The return of process expertise as a first-class discipline echoes a pattern traceable back to the ERP era, according to Deb. AI is bringing functional depth back to the foreground after two decades in which pure technology focus -- mobile, data lakes, cloud -- dominated enterprise investment, he explained. The engineering gap that separates a polished demo from a production deployment on a mission-critical SAP environment is where that expertise now earns its keep. "You think you can prompt your way out of complexity? Don't even try," Deb said. "The demos are going to show you that you can prompt and get wonderful things [to] happen. Try doing that on top of a fairly custom SAP installation that manages your supply chain, where things are mission-critical. You're probably going to need hard engineering work. I think that's the gap between what's promised versus what it takes to get through the journey." Google Cloud's $750 million investment in its partner ecosystem underlines the strategic weight both companies are placing on the agentic AI blueprint for success. But throwing AI at an unreformed process yields only marginal gains -- the prerequisite is reimagining workflows for AI from the ground up, including value stream mapping and building agents that mirror the personas of the people they are designed to support, according to Srivastava. "It's super important to reimagine the process for AI, because throwing AI on an old process only gives you marginal improvement," she said. 
"That's also where the business case of AI sometimes suffers, because people are just instrumenting AI into an existing process which was not designed for AI. Spending enough time reimagining the process is critical to the scale when you're looking for success at scale." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next:
[10]
Next '26 - when you don't have enterprise legacy, you build it. Google Cloud's playbook for winning agentic AI trust.
On the first day of Next '26, I asked whether Google Cloud's push to own the enterprise AI governance layer was a realistic competitive proposition - or whether enterprise buyer legacy, and a competitive landscape in which ServiceNow, Salesforce, Workday and Microsoft (amongst others) are all making the same claim, would make this difficult. Having spent two days here in Las Vegas with the company - sitting in on the keynotes, product demos, interviewing customers, speaking with CEO Thomas Kurian and other Google Cloud executives - I think I've got a better understanding of Google Cloud's playbook for winning agentic AI trust in the enterprise. The competitive questions I raised on Tuesday haven't disappeared, but I think Google Cloud's official position of 'being Switzerland' in enterprise AI value capture is a little more nuanced than it's fully letting on. Google Cloud's strategy, I think, is to build the enterprise trust it sometimes lacks - by embedding with customers, co-investing in outcomes, and proving value on the ground. Google Cloud took to the main stage this week to talk up the benefits of its full stack - the 'Android of the agentic era', as Kurian put it. The core argument - that delivering serious agentic AI by stitching together fragmented models, disconnected silicon and separate governance tools is much harder than doing so via an integrated stack - has a real logic to it. The Gemini Enterprise Agent Platform, with Agent Identity, Agent Gateway and OTel-based observability that can aggregate traces from third-party agents, is a coherent governance architecture. Combine this with the Knowledge Catalog, the Cross-Cloud Lakehouse, and Bring Your Own MCP support - these represent a serious attempt to build a platform that is genuinely open at the edges while being meaningfully differentiated at the center. The real test of that argument is whether it holds up in practice.
I spoke with Matt Renner, Google Cloud's Chief Revenue Officer, who framed Google's competitive positioning as being a neutral partner. He said: Our approach is to be the platform - the Switzerland - that orchestrates across all of that, and creates interoperability so you don't have to throw away the agents you've already built. The reality is, if you're going to have one place to orchestrate your agent strategy, it should probably be independent of your application strategy. That's where we're seeing a lot of traction on Gemini Enterprise. His assessment of how the major SaaS vendors are faring with the same ambition was: They've tried this with data. They're trying it now with agents. Mixed success at best. Whether you find Google's claim to Switzerland more credible than a ServiceNow or Salesforce making the equivalent claim is a reasonable debate. But the structural argument - that the orchestration layer should probably sit independent of any individual application vendor - has logic behind it, and 1,500 enterprise Gemini Enterprise customers suggests it is resonating with buyers. Two customers I interviewed this week put some meat on the bones of how this is playing out in reality for enterprise buyers. They illustrate what the full stack proposition looks like when it is actually working - and at a level of ambition well beyond the efficiency plays that dominate most enterprise AI conversations. At Merck, the use cases span drug discovery, clinical development, manufacturing and commercial operations (I'll be writing up this full case study in the coming days). Dave Williams, Chief Information and Digital Officer at the company, described what agentic AI looks like in the context of in-silico drug development: When you start to think about how agents and robotics can play a role - where the scientists, rather than manually running all those filters, are more guiding the process - you start to see really significant productivity improvements.
At Citi Wealth, the Sky conversational avatar is an external-facing customer engagement platform positioned explicitly as a revenue play, not a cost reduction exercise. Joseph Bonanno, Head of Wealth Intelligence at the organization, said that Citi is looking to agentic to drive revenue, not just drive out cost: Our clients actually have about five trillion dollars away from us. There lies the opportunity. If I can upsell, cross-sell and retain clients, that is far more important than finding efficiencies here and there. Everybody's doing efficiencies. This is about playing offence. Both cases involve genuinely cross-organizational AI deployments that span functions and workflows in ways earlier technology waves could not. Rohit Bhat, GM and Managing Director of Financial Services at Google Cloud, outlined why Citi was able to pursue such a valuable client-facing use case: One of the big advantages we've had with Citi has been the full-stack advantage on our platform. That concept gets a bit lost in the noise, but it matters: if part of your company is devoted to understanding how to build models and capabilities, informing the team thinking about how data systems need to interact within those models, informing the team building the software layer on which you build client experiences -- and you have those design and engineering teams under one roof -- you can have a much more defined and informed strategy on governance, controls, policy and risk systems. Neither Merck nor Citi arrived at these use cases by accident. Both had spent years doing the foundational data work before any of this was possible. And they're now looking at Google Cloud's full stack to provide them a platform for future agentic work. I'll come back to this - it's important. 
In a private press roundtable on the second day, I put the governance contest directly to Kurian: given that multiple vendors are making essentially the same pitch, how are enterprise buyers deciding which governance platform to trust - and do you think we end up with one layer or multiple competing ones? His answer covered the technical architecture carefully - agent identity, the zero trust principle applied to agents, the OTel logging standard enabling cross-platform visibility. On that last point, he said: We're trying to provide customers with a central place they can monitor, manage, and govern all their agents, whether built by us or built on another platform and exposed to us. What he did not address was the enterprise political dimension of that, nor would he be drawn on whether one platform would be chosen to govern across all areas of the enterprise. ServiceNow would describe its agents as the ones governing others. Salesforce built Agentforce specifically to own the customer-facing governance layer. Other vendors are making similar arguments and they're not going to quietly accept Google's Agent Gateway as the authoritative monitoring surface. The skirting around the question was itself informative...Switzerland. Renner was, in this respect, a bit more candid. He acknowledged that the realistic near-term outcome is not one platform replacing another but customers using multiple - with Google competing hard for the cross-functional orchestration layer while ISVs retain their domain-specific positions. He said: My honest view is that customers aren't going to do without their ISVs -- they're just going to use both us and the ISVs, rather than choosing one or the other. However, despite all of this, how do you get from where most enterprises actually are - messy legacy estates, siloed data, change-averse workforces - to the deployments Merck and Citi are describing? 
Simply put, even before Google Cloud entered with agentic AI, huge amounts of work had been done by the organizations already. Merck's Williams offered a grounded CIO-level answer - when asked what advice he would give peers in regulated industries, he came back to three things: The foundation - the data. Without that, this isn't going to work. Second, the human side - change management. And third, focus on the right things. On change management specifically, he was unsparing - calling it "probably the longest pole in the tent." And the emotional component of that challenge came through elsewhere in our conversation: Everybody in my seat is excited about agentic AI on one hand, and also a bit terrified on the other. If you don't think about this proactively with the right tools, you could end up with thousands of people building agents that you don't know are properly governed - from a cyber standpoint, a quality standpoint, a risk standpoint. Again, this is a data readiness, change management and organizational capability problem. And both Merck and Citi have been building towards agents for a while. Merck has been running a cloud acceleration programme for five years, has consolidated its commercial estate into a common data model, and has a single manufacturing data model across all its shop floors. Citi spent years consolidating infrastructure that Bonanno described as looking like "seven different companies" before building the One Wealth platform that makes the agentic avatar Sky possible. Google Cloud met both organizations at a point of readiness that most enterprises have not yet reached - and this is important. This context on where customers are is important, as it frames what Google Cloud is attempting to do going forward. What is genuinely interesting about Google Cloud's approach - and what I did not fully appreciate earlier this week - is how deliberate the strategy is for supporting customers towards agentic AI (and consequently, governance). 
It is not waiting for enterprises to become ready. It is actively supporting them to get from A to B. Williams described the Merck partnership in terms that go well beyond a technology purchase. Google is, he said, co-investing "to solve the data, process, and people upskilling challenge - not just selling the software." The speed at which the deal came together is also telling: It was only two and a half months ago that I emailed [Google Cloud] and said we want to take this to the next level. We got together, set ambitious goals, said we want to get this done before the end of the first quarter - not actually thinking that was ever going to happen. But it did. And not only that, the teams already have a roadmap and deployment plan for Gemini Enterprise across all our employees. We've been impressed with how quickly everyone is moving. The $750 million partner fund announced this week supports this logic too - embedding forward-deployed engineers alongside Accenture, Deloitte, PwC and the major GSIs to work directly inside customer environments, specifically tasked with resolving data readiness issues and integration complexities. The McKinsey Google Transformation Group, also announced at the event, takes this further: joint teams, co-funded value assessments, outcome-based commercial models, with McKinsey QuantumBlack technologists working alongside Google's FDEs on client use cases. Google Cloud's Renner outlined the underlying philosophy: Our strategy is not to bundle. Our strategy is to have successful projects. Show up with the right technical resources, work together, and make it work. He also noted that the POC-to-production success rate for enterprise AI projects three years ago was around ten per cent. It is significantly higher now, he said, driven by better upfront qualification, clearer governance structures and more experienced GSI partners. That trajectory is the argument. 
This is a company that, without the decades of installed base that ServiceNow or Salesforce can draw on, is essentially earning strategic relevance by co-investing in the most ambitious use cases customers have - demonstrating value at a depth the incumbent SaaS vendors are not currently positioned to match, and betting that the relationships built in that process compound into the strategic partnerships of the next decade. What I am taking away from Las Vegas is that Google Cloud is not simply selling a platform and hoping enterprises find their way to it. It is walking into customer environments and doing the hard organizational and data work alongside them. That is resource-intensive, and the $750 million partner fund and the FDE model are clearly the answer to scaling it beyond the Mercks and the Citis of the world. The question that this week did not answer - and that the next few years will - is whether Google Cloud can extend that model to the enterprises that do not have Merck's data foundations or Citi's transformation appetite, and do it before the incumbents find their footing in the governance layer Google is working hard to claim as its own. On the evidence of this week, it is a serious contender. Whether it is the winner is a different question. Google Cloud may be positioning itself as Switzerland, but if you take its strategy as a whole - its governance claim, the full stack offering, and its heavy customer co-investment - the company's behaviour is not neutral in its entirety. It's strategic, it's aimed at winning enterprise trust in the agentic AI world, and it's smart.
[11]
Real-time marketing now reality with data and agentic AI - SiliconANGLE
Agentic AI gives CPG brands a real-time edge on marketing spend and product testing Consumer packaged goods companies face mounting pressure to grow profitably while operating on razor-thin margins -- and agentic AI is emerging as the tool that can close the gap between fragmented data and real-time marketing outcomes. The sector has long relied on weeks-long campaign measurement cycles, disconnected consumer data and costly physical test markets to guide brand decisions. Google Cloud LLC's Agentic Data Cloud, unveiled at Google Cloud Next 2026, signals a shift toward giving agents the contextual intelligence needed to act on enterprise data without hindrance -- and its effects are already being felt in CPG, according to Sonia Fife (pictured, right), global leader of consumer packaged goods, strategic industries, at Google Cloud. "On the revenue side we're seeing agents really be able to drive that transformation, looking around corners to understand what trends are coming," Fife said. "Then marketing is helping to take those trends -- transform them into resonant concepts and campaigns." Fife and Jeff Follestad (left), senior manager of GCP partner sales at EPAM Systems Inc., spoke with theCUBE's John Furrier and co-host Alison Kosik at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how agentic AI is transforming CPG real-time marketing workflows, enabling media optimization and accelerating time to market through synthetic consumer data. (* Disclosure below.) One of the most concrete gains from agentic-powered real-time marketing in CPG is the collapse of traditional campaign measurement timelines. Historically, CPG organizations would run a campaign, then wait four to eight weeks to analyze results, adjust media mix and reallocate spend -- a process that allowed markets to shift well before brands could respond, Follestad noted. 
"In this new world, in literally near real time, that dial or that adjustment of my marketing spend assortment can be [updated]," Follestad said. "If we see customer sentiment shift [or] there's some event that triggers more effectiveness on one channel than another, you can optimize that to great efficiency." Agentic AI is also compressing the cost and time required to validate new product ideas. Rather than organizing physical consumer panels that can take months and significant budget to execute, brands can now deploy synthetic consumer audiences in minutes, Follestad explained. That velocity allows companies to fail fast, redirect resources and prioritize concepts with stronger market signals before committing to production. "If you've developed something in a vacuum and you test it quickly, you can retreat from that product and focus on something else, whereas you may have taken six months to get to that point in the past," Follestad said. "It's in an industry very sensitive to margins and running very lean." The frontier for CPG extends further into the transition from the physical shelf to the invisible shelf -- where agentic commerce protocols enable AI agents to discover, evaluate and purchase products on behalf of consumers, according to Fife. Brands competing in this environment must treat their product information management and digital asset management data as agent-ready infrastructure from day one. "The objective of agentic commerce is to make the experience frictionless for the customer, for the consumer," Fife said. "Whether that's shopping agents that will become a part of the repertoire for many D2C brands, or even brand agents that are driving a whole level of experience -- this is really a time of transformation, but ... it begins, first of all, as always with the data." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next:
[12]
Next '26 - control the agents, control the enterprise? Google Cloud enters the battle for enterprise AI governance
The question of who governs enterprise AI agents is clearly becoming one of the most contested in the technology industry. The competition is fierce, as your favourite SaaS vendors and enterprise technology partners all make their pitch for being the 'AI governance and management layer'. As organizations move from experimenting with individual AI tools to deploying hundreds - or thousands - of autonomous agents across their operations, the question emerging is: which platform wins and gets to own the control layer? At Google Cloud Next in Las Vegas this week, Google made its pitch for that position. The content was dense and there was a slew of announcements - everything from new TPU generations, a redesigned data architecture, the formal arrival of Wiz into the Google Cloud security fold - but the strategic principle running through all of it was pretty unmistakable. Google wants to be the operating system for the agentic enterprise. And it is prepared to spend serious money to get there. The argument, as Google Cloud (and many of its competitors) is well aware, is that those who own the governance and management layer for agentic AI will likely capture the most value. Google Cloud CEO Thomas Kurian opened his keynote with a summary of the moment enterprises currently find themselves in. He said: "The experimentation phase is behind us. Now the real challenge begins: how do you move AI into production across your entire enterprise?" It's a framing that will resonate with the CIOs we speak to as part of the diginomica network. Our data suggests 93 per cent of organizations are now using AI in some form - but only 57 per cent report achieving a 50 per cent success rate from their implementations. The bridge between experimentation and production is where vendors are hoping to make their case. If customers are struggling to get agents to production - and deliver value - the market will compete to be the platform of choice for delivering it.
Google Cloud's answer is a unified stack - and Kurian argued that fragmentation is the enemy. He said: "You cannot deliver AI by piecing together fragmented silicon and disconnected models. To drive real value, you need an architecture where chips are designed for the models, models are grounded in your data, and agents and applications are built with models and secured by the platform." That argument has a logic to it. But it's also, conveniently, an argument that positions Google as the only vendor with the full stack to make it work. We'll come back to whether that holds in the real world. The centerpiece of this year's event is the Gemini Enterprise Agent Platform - described by Kurian on stage as "the Android of the agentic era." It's not an accidental turn of phrase. Android didn't win by being the best operating system - it grew to rival Apple's iOS by becoming the connective tissue between hardware, applications and users at a moment when the market needed a common layer. Google is making the same argument for enterprise AI. However, if we extend this argument out, is Google Cloud really arguing to be the Android of the agentic AI era, if its full stack approach is the key selling point? Software coupled with hardware? That sounds mighty similar to Apple's iOS argument to me... That being said, Google Cloud doesn't mandate a consolidated stack. It still has a composable architecture - it just suggests it all works better together. The Agent Platform brings together model selection, agent building, orchestration, governance and observability into a single environment. Agent Identity assigns every agent a unique cryptographic ID. Agent Registry indexes every agent and tool across the organization. Agent Gateway enforces policy centrally. Long-running agents can now operate autonomously for days at a time, managed through a unified Inbox.
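Google has not published implementation details for these components, but the control-plane pattern being described - a registry that indexes every agent and its approved tools, and a gateway that checks each action against central policy and logs it - can be sketched in a few lines of Python. All class and method names below are illustrative inventions, not Google APIs:

```python
import uuid

class AgentRegistry:
    """Indexes every agent and the tools it is approved to use."""
    def __init__(self):
        self._agents = {}

    def register(self, name, allowed_tools):
        agent_id = str(uuid.uuid4())  # stand-in for a real cryptographic identity
        self._agents[agent_id] = {"name": name, "allowed_tools": set(allowed_tools)}
        return agent_id

    def lookup(self, agent_id):
        return self._agents.get(agent_id)

class AgentGateway:
    """Central chokepoint: every tool call is policy-checked and logged."""
    def __init__(self, registry):
        self.registry = registry
        self.audit_log = []  # the auditable trail of every attempted action

    def invoke(self, agent_id, tool, payload):
        entry = self.registry.lookup(agent_id)
        allowed = entry is not None and tool in entry["allowed_tools"]
        self.audit_log.append((agent_id, tool, allowed))
        if not allowed:
            raise PermissionError(f"{tool} denied for agent {agent_id}")
        return f"{tool} executed with {payload}"

registry = AgentRegistry()
gateway = AgentGateway(registry)
sales_agent = registry.register("sales-prospector", ["crm.read", "email.draft"])
print(gateway.invoke(sales_agent, "crm.read", {"account": "acme"}))
```

The point of routing every call through one gateway object is that policy changes and audit live in a single place, which is the architectural claim Google is making for its own Gateway.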
Kurian described the ambition: "Gemini Enterprise is now the end-to-end system for the Agentic Era - the connective tissue between your data, your people, and all of your apps and agents that transforms all of your processes into a single, intelligent flow." It's worth flagging Vertex AI here. Google confirmed today that all Vertex AI services and roadmap evolutions will be delivered exclusively through the Agent Platform going forward, rather than as a standalone service. That's not a rebrand. Google is confirming a pivot away from selling developer-facing infrastructure and towards owning the enterprise application layer. The developer tooling is being absorbed into the platform play. Sundar Pichai, speaking earlier in the keynote, gave a sense of the scale of investment behind this ambition. He said: "In 2022, we were investing $31 billion in CapEx. This year, we plan to invest between $175 and $185 billion in total CapEx - a nearly six-times increase in just four years." Just over half of that machine learning compute, Pichai confirmed, is expected to go towards the cloud business. That's not a company hedging its bets. Alongside the Agent Platform, Google announced what it's calling the Agentic Data Cloud - a reimagined data architecture anchored by a new Knowledge Catalog. Amin Vahdat, Google Cloud's SVP and Chief Technologist for AI and Infrastructure, introduced the new platform and said: "Reasoning without context is just a guess. And when you expect your AI to make decisions and your agents to take actions, you cannot afford to guess." The Knowledge Catalog is Google's answer to 'the context problem' - a universal context engine that aggregates business meaning from across the enterprise data estate. The Knowledge Catalog pulls context not just from Google Cloud's own services, but from third-party platforms - specifically naming Salesforce Data360, SAP, ServiceNow and Workday as sources being ingested into Google's reasoning engine.
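Google hasn't disclosed the Knowledge Catalog's internals, but the "universal context engine" idea - aggregate what each connected system of record knows about a business entity, keeping provenance attached to every fact - can be illustrated with a toy federation in Python. The source names and fields here are invented for illustration and are not the Knowledge Catalog API:

```python
# Toy "universal context engine": federate business context from several
# systems of record into one lookup an agent can query. Illustrative only.
SOURCES = {
    "salesforce": {"acme": {"open_opportunities": 3}},
    "servicenow": {"acme": {"open_tickets": 7}},
    "workday":    {"acme": {"account_owner": "j.doe"}},
}

def gather_context(entity):
    """Merge what every connected system knows about one business entity."""
    context = {}
    for source, records in SOURCES.items():
        for field, value in records.get(entity, {}).items():
            # Provenance travels with each fact, so downstream reasoning
            # (or an audit) can see which system supplied it.
            context[field] = {"value": value, "source": source}
    return context

ctx = gather_context("acme")
print(ctx["open_tickets"])  # {'value': 7, 'source': 'servicenow'}
```

Keeping the source attached to each field is the minimal version of "context with lineage" - the property that lets a reasoning engine justify, rather than guess, where a fact came from.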
A new Cross-Cloud Lakehouse extends the same logic to infrastructure, offering zero-copy connectivity to data sitting in AWS and Azure. Read the Knowledge Catalog announcement carefully and what you're actually seeing is Google reaching into the data estates of the very platforms it now competes with for the orchestration layer, and positioning them as subordinate context providers. That's quite interesting. This is where the picture gets genuinely complicated for CIOs - and where the competitive heat is most visible. During his keynote segment, Pichai said: "The conversation has gone from 'can we build an agent?' to 'how do we manage thousands of agents?'" As we've outlined throughout this piece, Google Cloud believes that it has a reasonable argument to be the one to take that forward for enterprise buyers. However, there was a telling exchange during a press Q&A at Next, where Michael Gerstenhaber, Google Cloud's VP of Product Management, offered a realistic example of what agent-to-agent management looks like. He said: "There are two kinds of agents at play ... there's Michael's agent - an agent that wants to perform a task for a user at work. And then there's the person who owns the data - maybe ServiceNow wants to provide access to ticket data, and they do that with an agent. That's ServiceNow's agent, not necessarily the user's agent. The user's administrator needs to know what data was queried, and the ServiceNow administrator also has to contemplate security and governance for their agent - did I rigorously check whether Michael should have access to this data? There is a separation of concerns here between the person who builds the agent and provides access to the data, and the agent builder who doesn't know what data is out there but sends their agent to go find it and solve the problem. Those are two very different kinds of administration."
"The administrator should make it safe by default - both on the foreign agent side and the local agent side - and both should be monitored: traces, metrics, logs, specifically about what that agent did on both sides of the transaction." It's a thoughtful articulation of a genuinely complex problem. But notice the framing: ServiceNow's agent is the "foreign agent" operating inside Google's governance plane. Google is the platform that monitors both sides of the transaction. ServiceNow, of course, would frame it the other way around. They own the ITSM relationship, the ticket data, the operational workflows. They're not going to quietly accept being governed by Google's Agent Gateway. Salesforce, equally, has Agentforce - designed explicitly to own the governance layer for customer-facing agentic workflows. Neither is going to cede audit authority without a fight. This creates a scenario for CIOs where you could end up with multiple competing (or complementary?) governance layers, each with a legitimate claim, each generating its own traces, metrics and logs, none naturally deferring to the others. That's not a technology problem - it's a political one. And it's one the vendors are going to be very reluctant to resolve on the customer's behalf. The Microsoft problem is also worth referencing. Unlike Salesforce, ServiceNow and SAP, Microsoft doesn't appear in the Knowledge Catalog's list of platforms being absorbed as context providers. Instead, Google frames its relationship with Microsoft around interoperability - exporting documents into Office formats, connecting agents across Microsoft 365. It's a compatibility pitch, not an absorption one. Whether that reflects strategic caution, an acknowledgment that Microsoft would push back too visibly, or simply the reality of Microsoft's entrenched position in the enterprise productivity layer is up for debate at this point.
What makes today's announcements particularly interesting is the distance they represent from where Google Cloud was standing twelve months ago. At Next '25 last April, Kurian's pitch was built on openness. Google's differentiator, he argued, was its willingness to manage multiple AI agents across different frameworks and vendors - including competitors' models. The Agent2Agent protocol, launched with support from over 50 partners including Salesforce, ServiceNow and Workday, was framed as an open standard for agent interoperability. Google wanted to be the neutral broker. That was reinforced in December, when Karthik Narain, Google Cloud's newly appointed Chief Product and Business Officer, told diginomica in his first interview since joining from Accenture that the company's differentiation came precisely from its refusal to lock customers in. The "why not Google?" moment he described was grounded in openness - the argument that AI-first transformation required a platform that prioritized interoperability over convenience. It's worth stating that this is still true today. Google Cloud has said multiple times today that this architecture is composable and it is open. However, what today's announcements do suggest is that the tone has hardened. The openness language is still present - multi-model support, MCP integration, Microsoft 365 connectivity - but the structural moves tell a different story. Vertex AI absorbed into Agent Platform. Knowledge Catalog ingesting competitor data. A $750 million partner fund to facilitate enterprise relationships at scale through SI partners. Agent Identity, Agent Registry and Agent Gateway asserting governance across the entire agentic workforce. Google hasn't abandoned openness as a message. But it is building value-capturing walls at the same time. Google Cloud's full stack argument is intellectually strong.
If you're building an agentic enterprise from scratch, or if you're already deeply Google Cloud-native, the proposition of having infrastructure, models, data, productivity and governance co-designed in a single platform is genuinely compelling. Gerstenhaber's articulation of the agent governance problem suggests real architectural rigour, not just marketing positioning. But most CIOs aren't operating in a greenfield environment. They have decades of ServiceNow investment, embedded Salesforce workflows, Microsoft 365 across the entire organization, and AWS or Azure infrastructure that predates Google Cloud's enterprise push by years. In that environment, the full stack argument doesn't reduce complexity - it adds another layer to it. You're not replacing the existing stack. You're asking CIOs to place Google on top of it as the governance layer, while every other incumbent makes the same claim from a position of far more embedded strength. A year ago Google was pitching to be useful. This year it's pitching to be essential. The shift from open orchestrator to platform owner is quite telling - and the competitive response from ServiceNow, Salesforce and Microsoft will be equally real. It's interesting to me though that many of the 'big deal' customers speaking here today are taking a co-innovation and investment approach, where Google Cloud is offering to place engineers in customer environments to make agentic AI valuable, sort out enterprise data silos, and fix archaic processes. That's a genuinely useful argument for buyers - and it might work if Google Cloud can scale it in the medium term. Customers are in need of support and Google Cloud has the resources to provide it. Whether Google Cloud can convert this into the kind of deep IT relationships its competitors have spent decades building is the question. It's going to be entertaining watching this competitive landscape play out from the sidelines...
[13]
Real-time data pipelines for agentic AI execution - SiliconANGLE
How real-time data pipelines are giving AI agents something worth acting on As enterprises race to wire AI into their operations, the infrastructure bottleneck has shifted from model capability to data access -- and the enterprises winning the race are those treating real-time data pipelines as a first-class architectural concern. Agents are only as intelligent as their underlying data substrate, and batch-based architectures built for yesterday's dashboards cannot meet that bar. Enterprises now need real-time data pipelines that move operational data from legacy sources into modern analytical systems at near-instantaneous latency, according to Benjamin Kennady (pictured, left), cloud solutions architect at Striim International Inc. For Striim, a data streaming and integration platform, that starts with the pipeline layer itself. "Striim really enables you to do that real-time data replication at scale," Kennady said. "We're designed to do that ingestion from your Oracle and your SQL server and your operational databases, and then replicate that data in real time with sub-second or second latency into your analytic systems, so that those agents can then be used to actually make those real-time decisions." Kennady and Vinod Ramachandran (right), senior product manager at Google LLC, spoke with theCUBE's John Furrier and Alison Kosik at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed modern real-time data architecture on Google Cloud, the role of open formats in unlocking agentic workflows and how Striim and Google are jointly enabling enterprise-scale data replication. (* Disclosure below.) Enterprises sitting on sprawling legacy infrastructure do not need to tear it all down to reach agent-ready architecture. 
The key pivot is ensuring agents can immediately access data that already flows into object stores, using open formats such as Apache Iceberg to make that data instantly queryable across BigQuery, AlloyDB and other analytical systems, Ramachandran noted. "Your automated pipeline can now just ride straight into open formats like Iceberg and it's immediately available in all these analytical systems," he said. "Analysts or system engineers building these pipelines can just make the tweak, use open formats and see it immediately accessible -- it's actually a feature, not a bug." The practical stakes of getting real-time data pipelines right showed up clearly in Striim's work with United Parcel Service Inc. During a surge in package theft, UPS struggled to scale its fraud detection because its existing architecture could not ingest structured and unstructured data -- images, emails and transactional records -- fast enough to power real-time decisions, Kennady explained. By using Striim alongside Google Cloud to replicate that multimodal data into BigQuery at scale, UPS was able to build agentic models that reduced fraud risk country-wide. "Package detection and fraud risk is happening in real time and therefore you need that data in real time for your agentic workflows," Kennady said. "UPS used Striim along with Google to resolve that problem and to reduce that package theft at scale across the whole country." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next:
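Striim's connectors are proprietary, but the pattern Kennady and Ramachandran describe - capture change events from an operational database and apply them, within seconds, to an analytic copy that agents can query - reduces to a small apply loop. The sketch below simulates it in Python with in-memory stand-ins; no Striim or Iceberg APIs are used, and the event shape is invented for illustration:

```python
# Simulated change-data-capture replication: each change event from an
# operational system is applied to an analytic copy as it arrives, so an
# agent querying the copy sees near-current operational state.
operational_changes = [
    {"op": "insert", "key": "order-1", "row": {"status": "placed", "total": 42.0}},
    {"op": "update", "key": "order-1", "row": {"status": "shipped", "total": 42.0}},
    {"op": "insert", "key": "order-2", "row": {"status": "placed", "total": 17.5}},
    {"op": "delete", "key": "order-2", "row": None},
]

analytic_table = {}  # stand-in for an open-format table behind BigQuery/AlloyDB

def apply_change(table, change):
    """Apply one change event; in production this runs continuously on a stream."""
    if change["op"] == "delete":
        table.pop(change["key"], None)
    else:
        # Inserts and updates are both upserts against the analytic copy.
        table[change["key"]] = change["row"]

for change in operational_changes:
    apply_change(analytic_table, change)

print(analytic_table)  # {'order-1': {'status': 'shipped', 'total': 42.0}}
```

The contrast with batch ETL is that nothing here waits for a nightly window: each event is applied as it occurs, which is what makes the analytic copy usable for real-time agentic decisions.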
[14]
Google Brings All Enterprise AI Agent Tools Under One Roof | PYMNTS.com
At Google Cloud Next 2026, Google announced the Gemini Enterprise Agent Platform, a unified system designed to handle the full enterprise agent life cycle. The platform replaces Vertex AI as Google's primary enterprise AI development environment and bundles agent building, deployment, data integration, security and optimization into a single offering. All future Vertex AI services and roadmap updates will be delivered through it. The launch is Google's direct answer to Amazon's Bedrock AgentCore and Microsoft's Foundry. The timing reflects a broader shift in enterprise AI competition. The race is no longer about which model performs best. It's about which platform makes agents easiest to build, deploy and trust at scale. Google's platform separates builder tools by audience. Technical teams work through the Agent Development Kit (ADK), a code-first environment that supports graph-based multi-agent networks where specialized agents delegate tasks among themselves. Business users access Agent Studio, a low-code visual interface for designing agent logic without writing code, according to SiliconAngle. Both tools received significant upgrades, with the ADK processing more than six trillion tokens monthly on Gemini models, Google said. The scaling layer addresses a failure point common to enterprise AI pilots. Proof-of-concept agents break down when moved into production because they can't maintain context across multi-step workflows or extended time periods. The revamped Agent Runtime supports long-running agents that maintain state for days at a time, backed by a Memory Bank for persistent, long-term context, according to Google Cloud. An agent managing a sales prospecting sequence, for example, can now run autonomously across multiple days without losing track of prior interactions. Payhawk, the expense management platform, told Google its Financial Controller Agent now uses Memory Bank to recall user-specific constraints and history, cutting expense submission time by more than 50%.
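The ADK's actual graph API isn't shown in these reports, but the tiered pattern described - a coordinator that delegates tasks to specialized sub-agents, with a memory bank preserving per-agent context between steps - can be sketched generically. Every class and method name below is an illustrative invention, not Google's ADK API:

```python
class MemoryBank:
    """Persistent per-agent context that survives across steps (and, in the
    real platform, across days of a long-running workflow)."""
    def __init__(self):
        self._store = {}

    def remember(self, agent, key, value):
        self._store.setdefault(agent, {})[key] = value

    def recall(self, agent, key, default=None):
        return self._store.get(agent, {}).get(key, default)

class Agent:
    """A node in a multi-agent graph; routes tasks it can't handle itself."""
    def __init__(self, name, skills, memory, sub_agents=()):
        self.name, self.skills, self.memory = name, set(skills), memory
        self.sub_agents = list(sub_agents)

    def run(self, task):
        if task in self.skills:
            self.memory.remember(self.name, "last_task", task)
            return f"{self.name} handled {task}"
        for sub in self.sub_agents:  # delegate down the graph
            if task in sub.skills:
                return sub.run(task)
        raise LookupError(f"no agent can handle {task}")

memory = MemoryBank()
researcher = Agent("researcher", {"find_prospects"}, memory)
writer = Agent("writer", {"draft_outreach"}, memory)
coordinator = Agent("coordinator", set(), memory, [researcher, writer])

print(coordinator.run("find_prospects"))   # researcher handled find_prospects
print(coordinator.run("draft_outreach"))   # writer handled draft_outreach
print(memory.recall("writer", "last_task"))  # draft_outreach
```

The delegation step is what the "graph-based multi-agent network" framing refers to: the coordinator holds no skills itself, only routing knowledge, while the shared memory is what lets a sequence resume without losing prior context.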
PayPal said it uses the Agent Development Kit and visual tools to manage multi-agent workflows and inspect agent interactions, with Google's Agent Payment Protocol providing the foundation for trusted agent-based commerce. Agents are only as useful as the data they can reach. Most enterprise AI deployments stall not because the model is wrong but because the agent can't connect to the systems that hold the relevant information. The ADK supports native ecosystem integrations that connect agents to internal data without building custom pipelines, and lets users activate data in platforms such as BigQuery and Pub/Sub with batch and event-driven agents that run asynchronous tasks like content evaluation and data analysis in the background, as reported by SiliconAngle. The platform also connects to more than 200 models through Model Garden, including Google's own Gemini 3.1 Pro and third-party models such as Anthropic's Claude Opus, Sonnet and Haiku. L'Oréal said it is building a proprietary agentic platform on Google Cloud using the ADK, connecting agents to its data platform and core operational applications through Model Context Protocol. The company described the approach as a shift from workflow automation to autonomous, outcome-oriented agent orchestration. The governance layer is where the platform makes its clearest break from point solutions. Enterprises deploying agents at scale face a specific risk: agents acting without a traceable identity, operating outside approved boundaries or exposing sensitive data. The platform assigns every agent a unique cryptographic ID through Agent Identity, creating an auditable trail for every action mapped back to predefined authorization policies, according to Google. An Agent Registry indexes every internal agent, tool and approved skill. An Agent Gateway enforces consistent security policies across the entire agent fleet.
Agent Anomaly Detection flags unusual reasoning in real time using statistical models alongside an LLM-as-a-judge framework. TechCrunch noted that given how new agent technology is to the enterprise and how real security concerns remain, Google has oriented the platform primarily toward IT and technical teams, with business users directed toward the separate Gemini Enterprise app for task-level use cases.
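Google has not detailed the cryptography behind Agent Identity, but one common way to bind an identity to an auditable trail is to sign every recorded action with a per-agent key, so that any later tampering with the trail is detectable. The sketch below uses Python's standard-library HMAC as a stand-in; a production system would use asymmetric keys and a key-management service, and all names here are illustrative:

```python
import hashlib
import hmac
import json
import secrets

class AgentIdentity:
    """Issues per-agent signing keys and keeps a tamper-evident action trail.

    Illustrative only: Google has not published how Agent Identity is
    implemented; this shows the general signed-audit-trail idea.
    """
    def __init__(self):
        self._keys = {}
        self.trail = []

    def issue(self, agent_name):
        """Mint an identity: a unique ID bound to a secret signing key."""
        key = secrets.token_bytes(32)
        agent_id = f"{agent_name}:{hashlib.sha256(key).hexdigest()[:12]}"
        self._keys[agent_id] = key
        return agent_id

    def record(self, agent_id, action):
        """Sign each action so the trail can be verified later."""
        msg = json.dumps({"agent": agent_id, "action": action}, sort_keys=True).encode()
        sig = hmac.new(self._keys[agent_id], msg, hashlib.sha256).hexdigest()
        self.trail.append((msg, sig))

    def verify(self):
        """Recompute every signature; any altered entry fails the check."""
        for msg, sig in self.trail:
            agent_id = json.loads(msg)["agent"]
            expected = hmac.new(self._keys[agent_id], msg, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return False
        return True

identity = AgentIdentity()
auditor = identity.issue("expense-auditor")
identity.record(auditor, "approve_expense:rpt-104")
print(identity.verify())  # True while the trail is untouched
```

The property this buys is the one the article names: every action maps back to a specific agent identity, and the audit record itself cannot be quietly rewritten after the fact.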
[15]
Data foundation is key to building the agentic enterprise - SiliconANGLE
The AI agent race will be won by companies that build the right data foundation The agentic AI boom has an inconvenient truth: Without a solid data foundation, the revolution stops before it starts. Conversations surrounding an emergent agent control plane signal that agentic AI is maturing -- but the enterprises best positioned to capitalize are those that did the unglamorous data work years before the hype arrived. That divide between prepared and unprepared organizations is sharpening in real time, according to Ben Kessler (pictured, right), chief executive officer of 66degrees LLC, a Google Cloud-focused AI and data solutions firm that earned the 2026 Google Cloud LLC Partner of the Year Award for AI. "As these advances of data technology and these advances of AI technology and even agentic [technology emerges], it's just reinforced for me and for us as a business that there's such a need -- as there's a need for people to eat every day -- for people to learn how to adopt technology and learn how to take this technology and drive and create differences for their business," Kessler said. "The agentic enterprise is all about the 'how' changes. It allows us to actually deliver a little bit faster or a little bit more of a certain outcome." Kessler and Brendan Bonthuis (left), chief information officer of Gordon Food Service Inc., the largest family-operated food distributor in North America, spoke with John Furrier and co-host Alison Kosik at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how the partnership between 66degrees and Gordon Food Service is translating years of infrastructure investment into a practical data foundation to build the agentic enterprise. (* Disclosure below.) For organizations navigating the agentic enterprise, the absence of established playbooks makes trusted partnerships more valuable than ever. 
No enterprise today has a decade of agentic AI experience to draw from, and that novelty changes how companies need to approach the journey, Bonthuis noted. "This feels different because no one has 10 years of experience doing this," he said. "Having a partner that can walk with you ... to figure out what is the plan and how do we approach it and learn together -- that's been really important as we go on that journey." The data foundation that makes that journey possible at Gordon Food Service was built well before agentic AI became a mainstream conversation. Working with 66degrees, the company prioritized centralizing its data on the Google Cloud Platform and ensuring it was accessible across multiple formats -- work that is now paying dividends, according to Bonthuis. "We built out a data platform, so we would prioritize the ability to quickly ingest data across a bunch of different formats," Bonthuis said. "Having your data in one spot and having it accessible within GCP -- we felt like it was a good idea years ago and we worked really hard on that with these guys and we're really glad it's there now. It certainly doesn't guarantee success, but it's a really important step." The broader opportunity now is one of workforce amplification rather than displacement, according to Kessler. Agents are allowing 66degrees' own consultants to multiply their output. The same logic applies across its client base as the firm helps traditional enterprises adopt agentic systems with the accumulated context of prior deployments. "What we've been able to do is empower our people with agents and with AI to eventually 10X their capacity," Kessler said. "It's not actually doing the same amount of work with fewer people. It's actually figuring out how to take your workforce, apply agents and work with agents in order to make them more productive." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next:
[16]
Google Accelerates Agentic AI Shift With New Enterprise Platform | PYMNTS.com
The company announced these developments Wednesday (April 22) in conjunction with its Cloud Next event in Las Vegas. Google's new Gemini Enterprise Agent Platform provides a system for building, scaling, governing and optimizing agents. It builds upon the company's existing AI development platform, Vertex AI, by combining model selection, model building and agent building capabilities with new features for agent integration, DevOps, orchestration and security, according to a Wednesday press release. The company also introduced three new agents in Google Security Operations to help organizations defend against the malicious use of AI. These include a Threat Hunting agent that searches for novel attack patterns and stealthy adversary behaviors, a Detection Engineering agent that identifies coverage gaps and creates new detections for threat scenarios, and a Third-Party Context agent that enriches workflows with contextual data from third-party content, per a Wednesday press release. Google Cloud's new $750 million fund will provide resources and incentives to global consulting firms, systems integrators, software providers and channel partners to help their joint customers adopt agentic AI. Resources available to partners will include AI value assessments, Gemini proofs-of-concept, Gemini Enterprises practice building, agentic AI prototyping and development, Wiz security assessments and usage incentives, according to a Wednesday press release. The new agreement with Thinking Machines Lab will see Google Cloud provide the AI startup with additional AI infrastructure capabilities and capacity, including A4X Max VMs with Nvidia GB300 GPUs as well as services such as Kubernetes Engine, Spanner, Cluster Director, Cloud Storage and Anywhere Cache.
Myle Ott, founding researcher at Thinking Machines Lab, said in a Wednesday press release that this infrastructure got the company running "at record speed." Sundar Pichai, CEO of Google and Alphabet, said in a Wednesday blog post that the pace of technological change has never been faster than it has been over the past year. "Our first-party models now process more than 16 billion tokens per minute via direct API use by our customers, up from 10 billion last quarter," Pichai said.
[17]
Google's AI agent platform takes pole position but work remains - SiliconANGLE
Enterprises are rapidly moving from an artificial intelligence that answers questions and generates content to one that performs tasks and takes actions. According to Google Cloud Chief Executive Thomas Kurian, this shift requires a fundamentally different approach to infrastructure and software. Google's view is that only a tightly integrated portfolio - spanning silicon to applications and everything in between - can effectively support this transition. A linchpin of this transition is the emergent data and AI platform, what we call the system of intelligence and what Google is initially exposing as its Knowledge Catalog. This capability ultimately abstracts and harmonizes analytics and operational applications. In addition, we see an evolving system of agency - what Google calls a system of action, comprising the Knowledge Catalog and the Agent Platform. We think measurable business value will ride on top of this infrastructure and that is where the real battle lines will be drawn. Specifically, we see frontier model vendors, of which Google is one, rapidly building out capabilities that will become fundamental to the future of software - which we predict will be the biggest transformation in the history of the software industry. In this Breaking Analysis, we contextualize the announcements and news from Google Cloud Next 2026 using the framework we've been iterating on for the past three years, architected by George Gilbert. The kickoff this week was Google's big TPU 8 announcement - 8t and 8i - with the Acquired guys hosting. It was positioned as the next big step in Google's silicon roadmap and part of a broader message that Google wants to be seen as the only hyperscaler with a frontier model, a differentiated data stack and a credible path to delivering agents at scale.
There was also a bit of semantic gymnastics around whether a tensor processing unit is an application-specific integrated circuit - Google said "it's not an ASIC" but rather a more general-purpose chip. Call it what you want - it is specialized silicon built to run modern AI efficiently, and it is central to Google's argument that economics and performance are going to matter more than ever. Surprising to us at the TPU pre-announcement was that, while there were plenty of "2X, 3X, 9.8X" claims, there was virtually no mention of metrics around performance per watt, arguably the most important measure for operators that are energy-constrained. TPUs are both impressive and critical to Google's strategy. Our view, however, is that Nvidia Corp. remains a crucial partner of Google's (and of other hyperscalers), irrespective of their in-house silicon efforts. In other words, we don't see TPU as directly competitive with Nvidia; rather, we see it as a capability that gives Google a differentiated advantage via its ability to integrate hardware and software tightly. It also allows Google to best manage the gap between accelerator demand and supply. Regardless, access to Nvidia's CUDA ecosystem is fundamental in providing developers with optionality and access to the world's largest and most important AI ecosystem. The bigger setup in this Breaking Analysis is that Google has been doing yeoman's work for years in the data platform layer. It's not just BigQuery. It's the metadata layer on top, plus integration with the operational database Spanner. In our view, Google is the only hyperscaler that has been meaningfully competitive with Snowflake and Databricks as a data platform - and that work has been a long build toward what's now showing up as an agent platform story. This is where the conversation starts to get really interesting. For decades, the industry built silos - analytic data silos and operational application silos.
Agents that transform a business - agents that can perceive, reason, decide, act and learn - don't unlock much value if they're siloed. You end up with automation, but you don't change how the business works. This is the context for the premise Kurian put forward this week. The industry has moved past retrieval-augmented generation-based chatbots - request and receive an answer - and into a world where agents, and teams of agents, act on behalf of humans and take action. That shift pulls a new set of infrastructure requirements into the critical path. It has to be integrated. Google positions itself as the only "full stack" hyperscaler that can bring the pieces together - silicon, infrastructure, data, models, applications and services - into a coherent system for agentic workloads. At the same time, many frontier model vendors don't have a cloud platform, which creates a structural advantage for Google as it tries to turn model capability into enterprise deployment. Kurian (and other executives) presented the stack slide below (and variants) throughout the conference to accentuate this point. Takeaway: TPU 8 is the headline, but the more important story is Google's attempt to connect its silicon, data platform and frontier model posture into an integrated agent platform narrative. In this section we revisit our framework for how the software industry is changing. Later in this research note, we map Google's model and try to reconcile how it fits into our vision of the future. The core idea in the slide below is that the entire software industry model is changing, moving from software-as-a-service to "service-as-software." When the industry moved from on-premises to SaaS, everything changed - the technology model, the business model and the operating model. Vendors stopped shipping software and started operating it. Value delivery became continuous - and the organization had to be built around that reality.
The shift underway now is broader. SaaS re-architecture primarily changed software companies and the information technology function that consumed them. Service-as-software changes the entire enterprise. Any company can scale with less labor by embedding intelligence into workflows and delivering outcomes through software. Over time, that pushes more businesses toward platform economics - and the markets that reward platforms tend toward winner-take-most, with software-like marginal economics conferring competitive advantage to firms that lean into AI. Agents are the catalyst. Agents don't unlock much value if they live inside silos. They add real value when they change business outcomes end-to-end. That's the point of the middle part of the slide above - the system of intelligence or SoI. Google introduced its Knowledge Catalog, which begins to unlock some of the capabilities of the SoI that we've previously described. The broader point, however, is that this layer connects agents to the operational and analytic reality of the enterprise so they can perceive, reason, decide, act and learn across the business, not inside a single department. The main constraint we're trying to resolve is shown at the bottom of the slide. For 60 years, enterprises built silos - analytic data silos, operational application silos and then the organizational silos that formed around them. Each department ends up with its own applications and its own data stores. That structure is not designed for agents that need cross-functional context and permissions to drive outcomes such as "compress the hire-to-onboard cycle," "reduce quote-to-cash friction" or "cut days out of incident response" -- and do so with human language prompts that reimagine the entire workflow, rather than "pave the cowpath" by automating existing processes.
The slide below is the maturity path from departmental reporting to something that starts to resemble a digital twin of the enterprise - the real-time digital representation of an organization. When we discuss this concept, we often point out that historically in the data business, we think in terms of "strings" that databases understand. Here we think differently - in terms of concepts that humans understand, such as people, places, things and activities (for example, processes). We believe this represents a profound shift in software. The system of intelligence is an emerging layer and perhaps the most valuable piece of real estate in the emerging AI software stack. It can't sit on top of a pile of disconnected metrics and dashboards -- business intelligence infrastructure. Rather, it needs a substrate that models the business in a way agents can use - with enough context, timeliness and consistency - to drive decisions and actions that are trusted and repeatable at scale. At the left edge of the chart, Level 1 is where most companies started - running reports against siloed operational applications. It's mostly manual, mostly departmental, and there's little self-service. Level 2 is where the modern data platform story took off - a BigQuery, Snowflake or Databricks-style approach where teams standardize key metrics and dimensions to feed "cubes" from different applications. That improves self-service, but in practice the organization still behaves like departments with their own data and views of the truth. Level 3 is the first real step toward modeling the business instead of just reporting on it. Real-time events start flowing from operational systems into the data platform, and the data platform enriches those events in return. The entities and the events start reinforcing each other, and that's where "context" becomes something you can actually compute, not just describe. Level 4 and Level 5 move into behavioral modeling and prediction.
This is where products like Salesforce Data Cloud or SAP's data cloud are headed - models of processes derived from their application footprints, with richer behavioral patterns and predictive signals. The important nuance is that these don't have to become walled gardens. Think of them as value-add layers that sit on top of today's data platforms and increase fidelity and actionability. The north star is the digital twin - a real-time digital representation of the enterprise that captures people, places, things and activities/processes. That is the prerequisite for layering a system of intelligence on top and expecting agents to do more than automate small tasks. We now move to the handoff that has to happen if agents are going to take action with confidence. The generative layer gives you creativity - for example, tokens, language, synthesis, exploration. But enterprise action requires determinism - the rules, the guardrails and the auditable trail that says what happened, why it happened and what the system should do (and is allowed to do) next. In our view, this is the core bridge: nondeterministic intelligence on top, deterministic execution underneath, tied together tightly enough that you can trust the outcome. The simplest way to think about it is goals and guardrails. Agents have goals - what they're trying to accomplish - and guardrails - what they must do and the rules they must follow. These next stages are the deterministic layers that turn "smart" into "safe and operable" (below). On this slide, the four stages take the maturity model from "analytics helping humans" to "systems coordinating work." One reason this is so important in our view is that it answers the "How do you get there?" question that kept coming up. Specifically, Geoffrey Moore's feedback when we first laid out service-as-software in a serious way: the idea is compelling, but enterprises need specific steps to get from point A to point B.
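The goals-and-guardrails handoff described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration - the tool names, refund limit and policy logic are invented for the example, not drawn from any Google product - but it shows the pattern: a nondeterministic agent proposes an action, and a deterministic layer checks it against explicit rules and records the auditable trail.

```python
from dataclasses import dataclass

# Hypothetical sketch: a deterministic guardrail layer that vets actions
# proposed by a nondeterministic agent before anything executes.

@dataclass
class ProposedAction:
    tool: str          # which system the agent wants to touch
    amount: float      # e.g., a refund amount the agent proposes

# Deterministic rules: what the agent may do, and hard limits it must respect.
ALLOWED_TOOLS = {"crm.update", "billing.refund"}
MAX_REFUND = 500.00

def check_guardrails(action: ProposedAction) -> tuple[bool, str]:
    """Return (allowed, reason) - the auditable 'why' for the decision."""
    if action.tool not in ALLOWED_TOOLS:
        return False, f"tool '{action.tool}' is not on the allow-list"
    if action.tool == "billing.refund" and action.amount > MAX_REFUND:
        return False, f"refund {action.amount} exceeds limit {MAX_REFUND}"
    return True, "within policy"

audit_log = []

def execute(action: ProposedAction) -> bool:
    """Every proposal passes through the same check and leaves a trail."""
    allowed, reason = check_guardrails(action)
    audit_log.append({"action": action, "allowed": allowed, "reason": reason})
    return allowed  # real side effects would run only when allowed is True
```

The design point is that the model never executes anything directly; every proposal flows through the same deterministic policy, and the audit log answers "what happened, why it happened and what was allowed" after the fact.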
The last two slides - the five stages of data platform maturity and these four deterministic stages - are the work that fills in that bridge. It's also why, candidly, this has been iterative. We thought the model was "done" in January - then it got deeper, because the deterministic layers are where the hard problems live. The bottom line: Agents don't dramatically accelerate value in silos. They compound value when they can navigate enterprise state, take actions under rules, and leave an audit trail. This is the bridge we see between generative output and governed execution. The slide below is the next step past the deterministic layers. The bottom stack - mapping plus rules - is the part that makes agents safe. It defines what's allowed, what must be true, and how actions get executed without blowing up the business. But that only covers a slice of how companies actually operate. The bigger chunk is tacit knowledge - the stuff people call "tribal knowledge" - what experts do when the rules conflict, when the data is incomplete, and when the situation is ambiguous. That's why the slide above separates the deterministic digital twin (the orange/gold "scaffolding") from the cognitive digital twin (the blue "crystallization"). The deterministic layer is the governed backbone. The cognitive layer is how the organization captures the "why" behind decisions and learns over time. The viral "context graph" chatter that hit the VC and vendor community late last year was essentially about this problem. Context is what you reach for when deterministic logic breaks down. The logical workflow is to apply the deterministic rules first, detect where they stop being sufficient, and then draw on context and tacit knowledge to resolve the exception. The important nuance is that you can't cover the enterprise with deterministic rules alone. In the view laid out above, the rules are necessary, but they're not the whole game. Most of how a company works lives in judgment calls, conflict resolution, prioritization and experience. That's the 90% problem. The "gold standard" example cited here is a company called Mercor Inc.
The concept is that even if the implementation is difficult, an expert teaches their thinking process, teaches how to grade the reasoning process, and even teaches what wrong reasoning looks like. That "teach and grade" loop is the only reliable way to capture the why behind expert judgment. Other approaches try to do it cheaper because it's less onerous on the expert, but they lose fidelity. A simple way to explain the mechanics is punishment and reward: If the agent says one plus one equals three, it gets penalized; if it says one plus one equals two, it gets rewarded. Over time you get compounding behavior improvement. That's the flywheel. The key connection back to the earlier slide is that deterministic rules make tacit knowledge capture easier. When the rules are explicit, you narrow the surface area of ambiguity. Humans don't have to explain everything, only the exceptions - and the system can learn faster because it knows exactly where the rules stopped being sufficient. This is relevant for Google because the agent platform conversation is starting to show more maturity. It's still early, but the steps are becoming clearer - and the deep dives this week at Google Cloud Next 2026 reinforced that the path forward isn't just more capable models. That's important, but it's the combination of deterministic scaffolding plus a systematic way to capture and refine expert judgment that represents the state of the art today. Bottom line: The deterministic twin makes agents safe. The cognitive twin makes them useful at scale. The compounding comes from tightly integrating the two so exceptions become training data and expertise turns into an asset. We believe the most useful way to decode Google's data and AI announcements is to strip away the product names - "Knowledge Catalog" and "Agent Platform" - and map the underlying capabilities to the layers that we described earlier in this research. Google's terminology is a little inverted relative to ours.
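The punishment-and-reward mechanics can be reduced to a toy loop. Everything below is illustrative - a real system would use reinforcement-style training, not a lookup dictionary - but the flywheel is the same: an expert-authored rubric penalizes wrong reasoning, rewards correct reasoning, and the signal accumulates across episodes.

```python
# Toy sketch of the "teach and grade" loop: an expert rubric scores agent
# answers, and reward accumulates over repeated episodes.

def grade(answer: str, rubric: dict[str, int]) -> int:
    """Expert-authored rubric: +1 for correct reasoning, -1 for wrong."""
    return rubric.get(answer, 0)  # answers the expert never graded get no signal

def training_run(answers: list[str], rubric: dict[str, int]) -> int:
    """Accumulate reward across episodes - the compounding flywheel."""
    return sum(grade(a, rubric) for a in answers)

# The expert teaches both what right and wrong reasoning look like.
rubric = {"1+1=2": +1, "1+1=3": -1}
```

The lookup table is a stand-in for a learned reward model; the structural point is that the expert grades the reasoning, not just the answer, and ungraded territory produces no signal - which is exactly where the fidelity of cheaper capture approaches breaks down.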
Google tends to talk about "system of intelligence" as the modern data stack and "system of action" as the new agent layer. Our view is different - the system of intelligence is the harmonization layer that makes action safe and repeatable, and it is what ultimately feeds the system of agency. Understanding the different language and mapping Google's parlance to ours at the functional level helps to highlight both progress and gaps.
Level 1 - Mapping layer (entities and lineage)
Google is doing real work extracting database management system technical metadata and lineage. It is also pulling unstructured, document-oriented tacit knowledge into a knowledge graph - and we acknowledge that's advanced. The shortfall is unification. Google can extract entities, but it does not yet unify those entities across systems into a single, authoritative reference. "Customer" shows up in many places. Resolving "customer" across all those systems remains the hard part. At least this is our current understanding of where Google stands. A practical way to say this, as we said earlier, is that databases store strings - the knowledge graph wants to speak in things people understand. The move from strings to things - and then from things to activities and processes - is where the big value realization happens, and it's also where the work gets hard.
Level 2 - Rules layer (from dimensional semantics to application semantics)
Google's catalog captures data quality rules, BI metrics and business glossary content - dimensional semantics. That's useful, but it stops short of full application semantics - the business process rules that are entangled and entombed inside legacy application silos. This is the layer where Palantir Technologies Inc. often comes up as a reference point for our work - not because it covers everything, but because it shows what "process rules as data" looks like when you go deep.
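To make the unification shortfall concrete, here is a deliberately simplified entity-resolution sketch. The record shapes and the email-as-natural-key heuristic are our own assumptions, not the Knowledge Catalog's design, and real resolution requires fuzzy matching and survivorship rules. The point is the move from strings to things: two system-specific "customer" strings collapse into one authoritative entity.

```python
# Illustrative only: resolving "customer" records from two siloed systems
# into one entity via a normalized natural key (here, an email address).

def normalize_email(raw: str) -> str:
    """Canonicalize the matching key so 'Ada@Example.com ' == 'ada@example.com'."""
    return raw.strip().lower()

def resolve_entities(records: list[dict]) -> dict[str, dict]:
    """Merge per-system records that share a normalized key into one entity."""
    entities: dict[str, dict] = {}
    for rec in records:
        key = normalize_email(rec["email"])
        entity = entities.setdefault(key, {"sources": []})
        entity["sources"].append(rec["system"])
        # Last-write-wins for the name; production systems need survivorship rules.
        entity["name"] = rec["name"]
    return entities

# Two silos describe the same person with different strings.
crm = {"system": "crm", "email": "Ada@Example.com ", "name": "Ada Lovelace"}
billing = {"system": "billing", "email": "ada@example.com", "name": "A. Lovelace"}
```

Even this toy shows why the problem is hard: as soon as the natural key itself differs across systems (no shared email, typos, mergers), deterministic joins fail and you are back to probabilistic matching and human adjudication.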
We believe Google wants to cover a lot more ground than Palantir can cover at its pace and within its constrained domain -- but that doesn't make the step any easier. Nor does it mean that Palantir and its CEO, Alex Karp, lack ambitions to broaden the company's scope.
Level 3 - Institutional memory (how vs. why)
Google has the substrate to capture agent reasoning and store it. The unified trace viewer (Trace Explorer inside Google Cloud Trace) is a real step because it shows how an agent got to an outcome. That is not the same as capturing human expert reasoning - the why - which is what drives judgment and confidence. It's a nuanced gap, but it's the difference between replaying a path and learning a decision system that can be trusted.
Level 4 - Decision guidance (context synthesis, confidence still thin)
Google can synthesize context and enable complex multi-retrieval. That allows an agent to retrieve more relevant material and make a judgment. The missing piece is confident, scored guidance from the system of intelligence itself - the ability to say "here's what to do and here's our confidence score," grounded in a library of human-grade "why" and in process-aware semantics. Without the "why," the system can feel closer to static institutional memory than decision guidance. On the system-of-agency side, the key requirement is the learning loop - every layer needs feedback. Agents do work, get scored, get reinforced, and then you accelerate value. This is where Google's agent evaluation and optimization work is important. We heard a consistent theme from customers: there is real excitement, but it's still early for most of them when you push past Kurian's keynote narrative - "in the past year we didn't just see adoption, we saw transformation." Skilled employees are being redeployed toward building, deploying and managing agents - that's where a lot of the near-term "productivity gain" is going.
In other words, the productivity story is increasingly coming from the work of creating agentic capability, not just consuming it. So in that sense Kurian's proclamation holds water. But this is still an elusive reality for the vast majority of enterprises. This also explains why Palantir keeps coming up in these conversations. Palantir's forward-deployed engineers, or FDEs, effectively did this work for customers - building the deterministic foundation (mapping plus rules/ontology) and then layering action on top of it. That deterministic foundation is what makes action safe. The open question is how quickly the broader market can build that foundation without needing an army of specialists - and how quickly Google can industrialize the "why," not just the "how," so agents can act with confidence. Bottom line - Google is putting credible pieces on the chess board - for example, lineage + metadata extraction, a graph-oriented approach to unstructured knowledge, multi-turn agent evaluation and failure clustering via its optimizer. But gaps exist - unifying entities across systems, moving from dimensional semantics into real process semantics, and capturing the human "why" so decision guidance becomes confident, scored and repeatable. This is work that remains. The agent platform discussion has centered around coding. The market is converging on the increasingly obvious fact that to build a universal knowledge-work agent, you start with the coding agent, because the way agents interact with the world is through tools - and tool use increasingly means writing code to call those tools. That's why Anthropic leaned into coding first and why the coding stack is now the battleground across the frontier model vendors. We see the competitive pressure showing up in many places.
Anthropic's Claude Code is gaining massive traction, OpenAI is pushing Codex, Grok has to have a credible coding agent capability to be competitive as a frontier model, and Google is taking a different route by building an enterprise agent platform that tries to turn the "harness" into something broader. The news around SpaceX Corp. owner Elon Musk having an option to buy Cursor underscores the point: If you don't have a first-class coding agent story, you fall behind quickly. In the simplest terms, the progression on the slide above is this: Anthropic's early advantage is that the "coding first" path is the fastest way into general-purpose agent behavior. The harness gives developers a high-control environment - user interface/command-line interface, orchestration, tool use, context management - and it becomes the training ground for what later turns into broader knowledge-work agents. Their Conway work is the logical next step - an attempt to assemble a fuller platform on top of MCP, with proprietary extensions and enterprise features that go beyond pure coding. Google's push is different. It's trying to build the enterprise control plane for agents - and it has some things the frontier labs can't easily deliver on their own, at least not yet, even if the frontier labs carve off pieces of the platform experience and live inside enterprise stacks. Three practical differences stand out to us. Net-net, we believe the frontier labs are going to keep pushing hard on coding agents because it is the fastest path to credible tool use. Google's advantage comes from turning that into an enterprise-ready platform layer - governance, agent identity, intent-based policy and shared memory - so customers don't end up with a new generation of silos and lock-in disguised as "agent progress." Work remains for Google and others, but the direction is becoming clearer.
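The "harness" idea is easier to see in code than in prose. Below is a toy version of the loop - the scripted model steps and the two-tool registry are invented for illustration and stand in for real LLM calls and real enterprise tools - but the control flow is the essence of a coding-agent harness: the model proposes tool calls, the harness executes them deterministically, and observations flow back until the model signals completion.

```python
# Toy agent harness: drive a propose -> execute -> observe loop over a tool
# registry. The scripted steps stand in for a real model's streamed decisions.

def run_harness(model_steps, tools: dict) -> list:
    """Execute proposed tool calls in order; stop when the model says done."""
    observations = []
    for step in model_steps:              # a real harness would call the model
        if step["type"] == "done":        # here, feeding prior observations back
            break
        tool = tools[step["tool"]]        # look up the tool in the registry
        observations.append(tool(*step["args"]))  # deterministic execution
    return observations

# Two stand-in tools and a scripted "model" trajectory.
tools = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
script = [
    {"type": "call", "tool": "add", "args": (2, 3)},
    {"type": "call", "tool": "upper", "args": ("ship it",)},
    {"type": "done"},
]
```

Everything the analysis calls the battleground - orchestration, context management, tool registries, guardrails - lives in this loop; the model only ever emits proposals, which is why owning the harness matters as much as owning the model.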
Google is betting the farm on an integrated stack - TPU, frontier model, data platform, agent platform and applications as one cohesive system. That's Google's differentiation. In our view, it's directionally right, and it also pulls Google into new territory where the winners will be decided by adoption, operating leverage and the ability to turn "context" into "action" without breaking security boundaries. We see Google as having a strong technical story here, but its vision statement to businesses could be framed in more "wallet" than technical terms. This brings an interesting question for the likes of Snowflake and Databricks. Those two are climbing "up" from the read path - give me context from the data platform - and then stretching toward richer context. The question is: What happens once they've read and reasoned, and how do they operationalize the decision the agent makes? That's where action becomes the differentiator, and it's why coding capability matters so much as the starting point for agent maturity. If the agent can't reliably execute, then all that context is just analysis theater. Google is moving in the right direction - but it's also crossing into the arena occupied by Palantir and Celonis SE, and even the integration layer that vendors such as Salesforce Inc.'s MuleSoft helped define. We see this as the hard part: specifically, harmonization across operational systems, data systems and workflows, with enterprise-grade governance. This is where we believe our "system of intelligence" has to emerge as a real product layer - irrespective of what Google calls it. Bottom line: Google looks like it's taking pole position in the agent platform conversation because it can bring the full stack and because its data platform has been doing the yeoman's work for years.
The work that remains is the hard part - turning "read context" into "take action," building the harmonization layer at enterprise scale, and solving the OEM/perimeter problem so Gemini can be consumed where the data lives, not only where Google runs. We open and close on the same topic: TPU. The last point is the one that changes the tone of the whole week - Google's TPU capacity gives it a structural advantage in AI compute, and it will show up in product velocity. The Epoch AI chart on "H100 equivalent" capacity below suggests Google has more AI compute than any other cloud, driven largely by TPUs - a brute-force approach compared with Nvidia, but one that lets Google control its own destiny on supply and deployment. This ties to a concept raised on the Latent Space podcast: "GPU-rich" vs. "GPU-poor." Google looks "GPU-rich," which helps explain why generative AI and agents are showing up broadly across its product line. Microsoft, by contrast, looks "GPU-poor" in this slide - which helps explain prioritization decisions (Office first), uneven rollout of gen AI enablement across its Azure services, and the downstream pressure it has created elsewhere in the stack. There's a separate point that needs to be said: None of this is "TPUs are eroding Nvidia's moat." The demand for accelerated compute is so large that quality accelerators will sell, in large part because Nvidia can't satisfy all demand. Google will use TPUs primarily to power its own services - not as a merchant silicon vendor. Net-net, we believe Google's TPU advantage is less about claiming an "Nvidia alternative" and more about enabling Google's own stack to move faster - especially at the software layer, where capacity becomes the difference between selective enablement and pervasive agent rollout. In the next 30 to 60 days, run a structured "agent platform bake-off" with Google, AWS and Microsoft Azure in the mix.
Force it into a synthetic production workflow that crosses multiple systems (not a demo). Pick one high-value, end-to-end process, tie it to your actual data and policies, put Apache Iceberg in the mix and score it on things like security/auditability, semantic consistency across systems, time-to-value and the operational requirements to keep it running.
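A bake-off like this benefits from a scorecard agreed on before testing starts. The weights and ratings below are placeholders to argue over, not a benchmark of any vendor; the structure simply forces the criteria named above - security/auditability, semantic consistency, time-to-value and operating cost - into one comparable number per platform.

```python
# Illustrative bake-off scorecard: weighted 0-10 ratings per criterion.
# Weights and sample ratings are placeholders, not vendor measurements.

WEIGHTS = {
    "security_auditability": 0.35,
    "semantic_consistency": 0.25,
    "time_to_value": 0.20,
    "operational_cost": 0.20,
}

def score_platform(ratings: dict[str, float]) -> float:
    """Weighted score in [0, 10]; ratings are your team's 0-10 judgments."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Two anonymized candidates scored by the evaluation team.
candidates = {
    "vendor_a": {"security_auditability": 8, "semantic_consistency": 6,
                 "time_to_value": 7, "operational_cost": 5},
    "vendor_b": {"security_auditability": 6, "semantic_consistency": 7,
                 "time_to_value": 8, "operational_cost": 7},
}
```

The value is less the arithmetic than the discipline: fixing weights in advance keeps the evaluation from being re-litigated after a slick demo, and the per-criterion gaps show exactly where each platform's work remains.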
[18]
Context engineering is missing layer in agentic AI - SiliconANGLE
OpenText and Google target the data layer gap holding back enterprise agentic AI
Organizations racing to deploy agentic AI are discovering that raw model performance is only part of the equation -- context engineering is the key to managing decades of unstructured, ungoverned data trapped inside legacy information management systems. As Google Cloud Next 2026 signals a complete pivot toward the agentic enterprise, the deeper question for IT leaders is not which model to choose but how to supply agents with the governed, contextual data they need to act reliably at scale. That challenge sits squarely in the wheelhouse of Open Text Corp., a global leader in secure information management that has been building the context layer enterprise agents depend on, according to Waqas Ahmed (pictured, left), vice president of AI engineering at OpenText. "Enterprise information is not just files on a drive," Ahmed said. "It is organized, governed, tagged with context, tagged with metadata and integrated with the business applications and customer processes. To wire that into the AI providers and LLMs, you have to be able to build that context so you are not flooding the LLMs with extra information, but you're giving them the right information at the right time." Ahmed and Yemi Falokun (right), global AI/ML partner engineering lead at Google LLC, spoke with theCUBE's John Furrier at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how OpenText and Google Cloud LLC are co-engineering a full agentic stack built on context engineering, data sovereignty and open interoperability standards. (* Disclosure below.) The OpenText and Google Cloud partnership, formally expanded in November 2025 to cover AI innovation, data privacy and sovereign cloud, reflects a shared conviction that enterprise AI success requires more than model access -- it requires a governed data layer.
OpenText's generative AI journey with Google began in May 2023, when it started using what is now the Gemini Enterprise Agent Platform to build industry-specific solutions, including its Content Aviator, which lets users interact with decades of enterprise documents in a trusted, secured environment, Falokun noted. "What they're now doing is taking the foundation they've built from 2023 into the AI agentic era," Falokun said. "We're now deeply integrating the Gemini Enterprise Agent Platform so that we can allow our joint customers to deploy secure autonomous solutions at scale, leveraging those decades of information that they store on behalf of their users." The stakes of getting context engineering wrong are significant. Without it, agents operating inside sensitive workflows -- such as HR onboarding or insurance claims -- have no way to enforce security policies, reproduce deterministic outcomes or generate the audit trails that compliance teams require, Ahmed explained. This is precisely the gap OpenText Aviator Studio is designed to close with a no-code platform that allows customers to build, govern, and connect enterprise AI agents -- and extend them across multi-application workflows beyond any single system. "We believe that we should bring AI to the data and not data to the AI," Ahmed said. "When you bring AI to the data, it means that you can still be in control of the permissions, the governance, the security, the access, the relationship between the data and the local intelligence can orchestrate across those entities, those stores, to find the right answer." The partnership also extends to data sovereignty, a concern increasingly central to global enterprises. 
OpenText supports private AI deployment patterns where the entire processing lifecycle remains within a customer's sovereign environment, and Google Cloud reinforces this with regional services, customer-managed encryption, and a commitment not to use customer data for model training, Falokun said. On interoperability, OpenText has made its Content Aviator available within the Gemini Enterprise environment through both agent-to-agent integration and Model Context Protocol-based data connectors -- meaning Google Workspace customers can access OpenText-hosted content through agents without writing any custom code. "If you keep the agents locked up within their individual applications, all you have done is automated the existing applications for some efficiency, but that's not the true power of the agentic enterprise," Ahmed said. "The true power is in the choreography and orchestration across silos, across applications where data provides the intelligence and AI executes the actions to provide productivity, insights and competitive advantage." Stay tuned for the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next.
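For readers unfamiliar with Model Context Protocol, the wire format is ordinary JSON-RPC 2.0, with tool invocation expressed as a `tools/call` request. The connector and tool names below are hypothetical placeholders - this is not OpenText's actual schema - but the message shape follows the MCP specification's tool-call method.

```python
import json

# Sketch of an MCP tool-invocation message. The tool name and arguments are
# invented for illustration; "tools/call" is the MCP JSON-RPC method for
# invoking a tool exposed by a server such as a content connector.

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request asking an MCP server to run one tool."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

# A hypothetical content-search call an agent might route to a connector.
payload = mcp_tool_call(1, "search_content", {"query": "claims policy", "limit": 5})
```

Because the protocol is this uniform, an agent in one environment can call content held in another system without custom glue code - which is the interoperability claim being made for agent-to-agent and MCP-based integration.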
[19]
Maximizing Gemini: Google Cloud makes its bid to build the operating system for enterprise AI - SiliconANGLE
Google LLC has emerged as the only cloud "hyperscaler" with a leading frontier artificial intelligence large language model - Gemini - and today it issued a raft of announcements designed to capitalize on that current advantage. The search giant's cloud unit launched the Gemini Enterprise Agent Platform as its new hub for building AI agents. Google also unveiled a new Gemini Enterprise application designed to transition AI from an isolated tool into a secure, collaborative autonomous engineer for the enterprise. The latest releases were described by Google Cloud Chief Executive Thomas Kurian (pictured) as the next chapter in the ongoing AI saga. "You have moved beyond the pilot, the experimental phase is behind us," Kurian said during his keynote address at Google Cloud Next in Las Vegas. "How do you move AI into your entire enterprise? The answer is a unified stack." As SiliconANGLE analysts have noted, Google is one of the few key tech players that has the resources to optimize the stack end-to-end. Its focus, based on this week's announcements at Google Cloud Next, has been on maximizing the compute layer, the global network, security, data engines and the application platform to generate enterprise AI value. Gemini plays a central role in this strategy, as evidenced by its integration in a multitude of the announcements made today. The new Gemini Enterprise application is designed to solve frustrations around siloed AI agents that have proven to be tough to oversee. It adds a new "Inbox" for agentic management, providing a more centralized command for guiding and managing agents in use. Gemini also powers the newly announced Data Agent Kit, a data engineering experience for leveraging favored practitioner tools, and a new shared workspace feature, called Projects, for pivoting Gemini from a solo AI assistant to a collaborative tool.
Gemini was featured prominently in Google Cloud's security announcements, wrapped around new governance tooling and agentic identity solutions. "We are moving in a bold and responsible way," said Sundar Pichai, CEO of Google and its parent company Alphabet Inc., who spoke to the conference in a prerecorded video. "Think of it as mission control for the agentic enterprise. One thing is perfectly clear: We are firmly in the agentic Gemini era." Being "mission control" for the agentic world will still require powerful hardware that can run the models for delivering the brainpower behind reasoning machines. Google addressed this as well with the announcement today of two new Tensor Processing Units or TPUs. The company introduced the TPU 8t and TPU 8i, custom silicon designed to serve as the workhorses for model training and inference. TPU 8t employs a specialized accelerator to address memory access issues for LLMs and memory bandwidth optimization problems that have hindered progress in AI deployment. "[TPU] 8t is a powerhouse optimized for training," Amin Vahdat, chief technologist for AI Infrastructure at Google, said in a presentation today. "We can now turn months of training into weeks." The custom-designed TPU 8i is architected to host a larger key-value cache at inference time for LLMs, which can significantly accelerate text generation. The technology behind the 8i design improves latency, another roadblock for AI, by shrinking the network diameter and the number of hops a data packet must take to cross the system. "We've finally broken the memory wall that slows long context decoding," Vahdat said. Though Google's announcements this week underscored its confidence in Gemini to anchor an agentic AI strategy, statements by company executives pointed toward a development worth watching in the evolution of AI for the enterprise. 
Competition for market share in enterprise AI will hinge on the ability of the tech industry's major players to serve as the control layer where AI does its work. Pichai alluded to this in his description of "mission control," and Google's announcements this week of new features such as Agent-to-Agent Orchestration, Agent Gateway and Agent Observability spotlight the need to bring a measure of order into the AI equation. "We built the agent platform to manage the entire lifecycle of an agent," Kurian noted. There are indications that Google's strategy is beginning to translate into financial results and market momentum. Alphabet reported 48% revenue growth year-over-year for its cloud operations in the fourth quarter of 2025, the fastest growth rate among the "Big Three" hyperscalers. Cloud backlog also surged 55% quarter-over-quarter. Data points such as these offer evidence that the machine learning and AI wave is carrying Google Cloud to greater success than it has previously seen. Google's bid to be the operating system for enterprise AI received strong reinforcement this week, and its future success will likely depend on whether this message resonates with the growing number of users who are embracing AI to get work done. "Companies are not just redesigning workflows, they are turning their employees into AI builders," Kurian said. "We offer you an integrated stack with the freedom to choose the world's best chips and models. This platform is ready, so what will each of you build?"
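The "memory wall" Vahdat describes can be made concrete with back-of-envelope arithmetic. A minimal sketch of how a transformer's key-value cache grows with context length, using hypothetical model dimensions rather than figures Google disclosed:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Bytes needed to cache keys and values for one sequence.

    The leading factor of 2 accounts for storing both the K and V tensors
    at every layer; dtype_bytes is 2 for fp16/bf16 activations.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes

# Hypothetical large model: 80 layers, 8 KV heads, head dimension 128, fp16.
cache = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                       seq_len=128 * 1024, dtype_bytes=2)
print(f"{cache / 2**30:.0f} GiB per 128k-token sequence")  # → 40 GiB
```

At that size the cache, not the model weights, dominates accelerator memory during long-context decoding — which is why inference-oriented silicon advertises larger on-chip cache capacity.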
[20]
The agent control plane hits overdrive at Next 2026 - SiliconANGLE
The decisive layer in AI is still unclaimed: theCUBE's Google Cloud Next day one keynote analysis
The fight for the agent control plane is underway -- and it might determine who controls enterprise AI for the next decade. Google LLC came into Google Cloud Next 2026 with a clearer positioning for Gemini: less as a standalone model and more as a connective layer. The emphasis is shifting toward how it ties together data systems, applications and the agent runtimes enterprises are starting to move into production, according to John Furrier (pictured, right), co-founder and chief executive officer of SiliconANGLE Media Inc. "The control plane is that horizontal layer that moves data around and it connects to all the systems," Furrier said. "It's like the main nerve center. It's like the backbone, the spine of all the systems -- and whoever owns the control plane kind of wins." Furrier and co-host Alison Kosik (left) conducted a day one keynote analysis at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed Google's competitive positioning and what enterprise AI adoption really requires. No hyperscaler has fully established control over enterprise AI or the agent control plane -- a gap that this year's conference is focused on addressing. Multi-agent usage on the Databricks platform grew 327% in just four months, a signal that production deployment has already crossed an inflection point, Furrier emphasized. That growth underscores the stakes for Google: As agent orchestration becomes the enterprise default, the platform that routes those workflows wins. "The AI-native applications are real and you're starting to see coding become almost done by agents 100%," he said. "I saw Databricks set a stat that said they've crossed over -- less humans coding than machines. That's a major milestone." The model leaderboard, meanwhile, may be the wrong scoreboard entirely.
Enterprise value is being created at the systems layer -- in the infrastructure, data pipelines and agent runtimes that models run on -- not in model capability alone, Furrier noted. That's the layer Google is targeting with Gemini, positioning itself as the platform agents depend on to function. "As I pointed out ... all the enterprises and all the real action is not what the models [are,] that's what the models are interfacing with," he said. "Those are the systems." But a strong product won't be enough on its own to navigate the contested market. Agentic AI is restructuring the enterprise from the inside out, with CFOs becoming operators and people officers managing agent workforces, Furrier explained. Consequently, the unit of value inside these organizations is shifting. "You have a new kind of currency going on with tokens and that's changing the organizational structures," he said. "That's changing how people are organizing their teams. That's changing how people work. It's a complete reset in the corporate world." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next.
[21]
With Gemini Enterprise Agent Platform, Google brings agentic development and control under one roof - SiliconANGLE
Google Cloud is taking a massive leap toward building the autonomous enterprise with the launch of the Gemini Enterprise Agent Platform, an evolution of the existing Vertex AI platform that becomes its new hub for building artificial intelligence agents. Announced at Google Cloud Next 2026 in Las Vegas, the new offering brings together all of the model selection, development and agent building tools found in Vertex AI, together with new features designed to facilitate agent integration, orchestration, DevOps and security. It's positioned as a single destination for technical teams to develop AI agents that can then be delivered seamlessly to employees via the new Gemini Enterprise application also launched today, enabling every worker to begin automating their work. Google Cloud Vice President of Product Management Michael Gerstenhaber said in a blog post that the original Vertex AI platform was designed to enable the massive engineering required for building tools in the early days of generative AI. "But today, we're managing a different level of complexity with agents interacting across multiple systems -- and often without security and governance guardrails," he wrote. "To move toward a truly autonomous enterprise, one where agents can act with the same independence and reliability as a member of your team, you need a foundation that can sustain that level of trust." Moving forward, Gerstenhaber said, all of the services previously housed in Vertex AI, along with all of its future roadmap developments, can now be found within the Gemini Enterprise Agent Platform, along with everything needed to deliver multi-agent teams into the enterprise. The revamped platform is much more than just a facelift, with Gemini Enterprise Agent Platform designed to provide the infrastructure that handles the entire lifecycle of AI agents.
According to Gerstenhaber, Google has broken this down into four main pillars: building, scaling, governing and optimizing autonomous workforces. For those building AI agents, the main focus is on the new Agent Studio and Agent Development Kit, or ADK - both of which have received significant upgrades. The first is designed for regular business users who need to design their own agents, and includes a low-code visual interface that makes it simple to drag-and-drop agent logic into place. For hardcore developers, the ADK is where it's at. Builders will be able to unlock more powerful reasoning by accessing the most powerful AI models and organizing their agents into a network of sub-agents capable of solving complex problems, using its new graph-based framework. Gerstenhaber said the new ADK supports native ecosystem integrations that make it simple to connect AI agents to internal data without building custom pipelines. Users will also be able to activate their data in platforms such as BigQuery and Pub/Sub with batch and event-driven agents in order to run massive, asynchronous tasks such as content evaluation and data analysis in the background. To scale AI agents from a fancy proof-of-concept to live environments, users need a platform that's able to handle the performance, state and security requirements of real-world work, and Gemini Enterprise Agent Platform delivers here too. It features a revamped Agent Runtime for the simple provisioning of new agents, plus support for multiday workflows to keep them running autonomously for days on end. There are tools for agent-to-agent orchestration too, enabling agents to easily delegate tasks to one another, so multiple specialized agents can work together on the most complex tasks they're given. To support the context required for agents running at large scale, Google has created a new Agent Memory Bank that dynamically generates and curates long-term memories from conversations.
This can be accessed by tapping into new "Memory Profiles" that allow agents to recall high-accuracy details with low latency, ensuring that context is never lost. For governance, the Gemini Enterprise Agent Platform offers a secure-by-design architecture that applies enterprise policy controls to each agent that's deployed, whether customers build them themselves or source them from Google's partner ecosystem. These controls make it simple to assign each agent an Agent Identity, just as each human has their own. Gerstenhaber explained that each agent receives its own unique cryptographic ID that leaves a clear and auditable trail for every action it takes, and that these can be mapped back to predefined authorization policies. Users will also be able to maintain a central library of approved tools agents can access through the new Agent Registry, Gerstenhaber said. It indexes each internal agent, tool and agent skill, simplifying the discovery process while ensuring they can only access approved assets. Meanwhile, the Agent Gateway is designed to act like an air traffic control tower, allowing administrators to oversee their entire fleet of AI agents and enforce consistent security policies across them all. There's also a comprehensive range of tools for protecting agents against prompt attacks and monitoring their behavior in real time, found under the Agent Security dashboard. Finally, for optimizing agents, the Gemini Enterprise Agent Platform provides tools for testing them before they're shipped and then monitoring their performance in production environments. With Agent Simulation, users can test how their agents work on synthetic workloads using virtualized tools in a controlled environment. Once they're up and running, they'll be able to use the Agent Evaluation tools to continuously score each agent as it performs its work.
The Agent Observability tool enables them to dig even deeper and visually trace the complex reasoning of each agent and debug issues as they occur. Should an agent fail to perform as expected, users can then pull up the Agent Optimizer to automatically refine its system instructions and enhance its accuracy. Although Google will undoubtedly push customers to use its fleet of Gemini models, it's still maintaining its commitment to an open model ecosystem. Users will be able to enjoy "first-class access" to a selection of more than 200 models, including Gemini 3.1 Pro and Gemini 3.1 Flash, its open Gemma 4 models and Lyria 3 for creating music and audio. It also lists numerous third-party models, including Anthropic PBC's Claude Opus, Sonnet and Haiku, among them the just-released Opus 4.7.
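The sub-agent pattern behind the ADK's graph-based framework can be sketched in plain Python. This is an illustration of the idea only — a root agent routing tasks to specialized sub-agents — and every class and method name here is hypothetical, not the ADK's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A node in an agent graph: it handles a task itself or delegates
    to whichever sub-agent claims the task."""
    name: str
    handles: Callable[[str], bool]       # can this agent take the task?
    run: Callable[[str], str]            # do the work
    sub_agents: list["Agent"] = field(default_factory=list)

    def dispatch(self, task: str) -> str:
        # Prefer a specialized sub-agent; fall back to handling locally.
        for sub in self.sub_agents:
            if sub.handles(task):
                return sub.dispatch(task)
        return f"{self.name}: {self.run(task)}"

# Hypothetical two-level network: a root agent with two specialists.
sql_agent = Agent("sql-agent", lambda t: "query" in t, lambda t: "ran SQL")
doc_agent = Agent("doc-agent", lambda t: "summarize" in t, lambda t: "wrote summary")
root = Agent("root", lambda t: True, lambda t: "handled directly",
             sub_agents=[sql_agent, doc_agent])

print(root.dispatch("query monthly revenue"))  # → sql-agent: ran SQL
print(root.dispatch("draft the agenda"))       # → root: handled directly
```

Because each node only decides whether to delegate, deeper specialist networks compose from the same structure — which is the appeal of a graph-based framework for complex, multi-step tasks.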
[22]
Google puts Gemini Enterprise at the heart of the new agentic taskforce for enterprise automation - SiliconANGLE
Google Cloud is on a mission to accelerate the adoption of artificial intelligence agents across enterprise computing environments, paving the way for a new era where AI can automate many of the most complicated, multistep tasks currently performed by humans. To that end, it has announced a major revamp of Gemini Enterprise, saying it intends to transition AI from an isolated productivity tool into a "secure, collaborative autonomous engine" for business. "Companies are ready to build their agentic task force, but this demands doing so within a secure and governed environment," the company said. "This includes creating and deploying agents with their own identity, registry, and gateway so they can always be traced, monitored, and managed." Announced today at Google Cloud Next 2026, the new Gemini Enterprise application arrives at a critical juncture for enterprise AI. Though many organizations have experimented with large language models, the novelty of summarizing emails and generating code is wearing off. Companies are increasingly frustrated by the "human-in-the-loop" bottleneck - the need for a person to sit and prompt an AI through every single step of a multi-stage project. With Gemini Enterprise, Google wants to solve the problems around siloed AI agents that are difficult to monitor and lack the persistence and context to automate long-term tasks such as monthly financial reconciliations and multiday sales prospecting. By providing a secure, governed environment that gives agents their own identities, tool registries and memories, it believes it can finally convince enterprises to embrace autonomous systems in their most complex workloads. Gemini Enterprise will provide business workers with access to a new breed of "long-running" AI agents that can be built using the new Gemini Enterprise Agent Platform that was also announced today.
They'll be able to participate in agentic development too thanks to the newly enhanced Agent Designer tool, which provides low-code and no-code interfaces for them to build their own agents, using either natural language instructions or a visual flow designer. It combines generative intelligence with deterministic logic, or essentially strict business rules, to ensure that agents don't hallucinate or do anything that contradicts the company's compliance policies. Users will also be able to codify their unique expertise into "Skills" that essentially formalize specific workflows, such as applying brand guidelines to a project or formatting a report in a particular way, and save them so others can use the same actions. This means users will be able to avoid re-explaining the context each time they ask an agent to perform a more complex task. The agent will simply draw on the necessary skills to get the job done, only when they're required, to ensure it doesn't overload its reasoning process. In addition, Gemini Enterprise is adding a new "Inbox" for agentic management. It's essentially a centralized command center for users to monitor, guide and securely manage all of the agents they're using. Notifications will be categorized into actionable groups, such as "needs your input," "errors" or "completed jobs," so users can see at a glance how their agents are progressing. To address the "solo" nature of AI, Gemini Enterprise is adding new features called Projects and Canvas, which will transform it from a personal assistant into a team member that everyone can access. Projects provides a shared workspace for humans and agents to co-create, drawing on context from Google Workspace and Microsoft OneDrive. Meanwhile, Canvas is an integrated editor that enables teams to collaborate on Docs and Slides along with AI agents, eliminating the need to keep switching tabs.
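The "Skills" idea — packaged workflow context an agent loads only when a task calls for it — can be sketched as a small registry. The names and structure below are hypothetical illustrations, not Gemini Enterprise's actual interface:

```python
# Minimal sketch of on-demand "skills": reusable workflow instructions an
# agent pulls into its prompt only when the task declares them, instead of
# carrying every company convention in every request.
SKILLS = {
    "brand-guidelines": "Use the approved palette and logo placement rules.",
    "report-format": "Structure reports as summary, findings, next steps.",
}

def build_prompt(task: str, required_skills: list[str]) -> str:
    # Load only the skills this task needs, keeping the context window small
    # and the agent's reasoning focused.
    context = [SKILLS[name] for name in required_skills if name in SKILLS]
    return "\n".join(context + [f"Task: {task}"])

print(build_prompt("Draft the Q3 report", ["report-format"]))
```

The design point mirrored here is that saved skills spare users from re-explaining context per request, while unused skills stay out of the prompt entirely.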
Google revealed that Gemini Enterprise now supports Bring Your Own Model Context Protocol, which can be thought of as a technical bridge that allows administrators to connect the platform to their internal tools and servers. With this, AI agents built with Gemini can now discover and use tools hosted on companies' private servers whenever they need them to complete a task. There's also a new Agent Marketplace in the Agent Gallery, which allows enterprises to access specialized third-party agents developed by Google partners such as ServiceNow Inc., Oracle Corp. and Accenture Plc. Finally, Google said it's doubling down on governance in an effort to reassure anyone concerned about what could possibly go wrong. For instance, each agent can be granted its own Agent Identity, essentially a traceable digital ID that allows its work to be tracked and audited. Admins can use the agent's digital ID to enforce "least privilege access" and minimize risk, while the new Agent Gateway helps to protect against the dangers of data leaks and prompt injection attacks, Google said. The message from Google is clear - the task force of the future won't just be made up of humans, but a hybrid of experts working alongside persistent, autonomous agents. As these updates roll out over the next few months, Google will help businesses cement a 360-degree view of their AI agents. The goal is to help them understand not only "what" their agents are doing, but also "why" they're doing it, while providing the necessary safeguards and controls to rein them in at any moment.
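Conceptually, a bring-your-own MCP setup exposes internal tools through a server the agent can interrogate at runtime. The sketch below mimics MCP's two core operations — listing tools and calling one — with a plain in-process class rather than the real protocol; it is an illustration under those assumptions, not an MCP implementation (real servers speak JSON-RPC over a transport):

```python
# A toy stand-in for a company's private tool server.
class ToolServer:
    def __init__(self):
        self._tools: dict[str, callable] = {}

    def register(self, name: str, fn):
        self._tools[name] = fn

    def list_tools(self) -> list[str]:
        # Analogous to MCP tool discovery: agents learn what exists.
        return sorted(self._tools)

    def call_tool(self, name: str, **kwargs):
        # Analogous to MCP tool invocation.
        return self._tools[name](**kwargs)

server = ToolServer()
server.register("lookup_invoice",
                lambda invoice_id: {"id": invoice_id, "status": "paid"})

# An agent discovers the private tools, then invokes one on demand.
print(server.list_tools())                                  # ['lookup_invoice']
print(server.call_tool("lookup_invoice", invoice_id="INV-7"))
```

The "bridge" framing in the announcement maps onto exactly these two verbs: discovery (what tools exist) and invocation (call one with arguments), with the server owning the implementation behind the company firewall.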
Google launched its Gemini Enterprise Agent Platform at Cloud Next 2026, positioning itself as the only provider combining AI infrastructure, frontier models, and data platforms under one roof. With Google Cloud hitting $70 billion in annual revenue and a $240 billion backlog, the company is betting on integrated tools for building and managing AI agents at enterprise scale while competitors like Amazon and Microsoft pursue fragmented approaches.
Google CEO Sundar Pichai opened the Google Cloud Next conference with numbers that underscore the company's aggressive push into enterprise AI. Google Cloud now generates more than $70 billion in annual revenue, growing at 48% year-over-year, with a backlog of $240 billion that doubled in just one year [5]. The centerpiece announcement was the Gemini Enterprise Agent Platform, Google's answer to Amazon's Bedrock AgentCore and Microsoft Foundry, designed specifically for building and managing AI agents at scale [1].
Andi Gutmans, who runs Google Cloud's data business, told The Register that Google holds a structural advantage over its largest rivals. "We're really the only provider that has the AI infrastructure, the model and the data platform," he said, contrasting Google's integrated approach with competitors who must cobble together services from multiple vendors [3]. The all-in-one AI stack includes Google's custom TPU chips, Gemini models, and cloud platform, creating what the company argues is a unique advantage as enterprises shift from human-scale to agent-scale operations.

In an interesting strategic choice, Google has positioned its agent building tool primarily for IT and technical teams rather than business users [1]. Given that AI agents are furthest along for technical tasks like coding, and that security remains a real concern for enterprises adopting this new technology, the platform evolved from Vertex AI to bring together model selection, building, and tuning services with new features for agent integration, security, DevOps, orchestration, and more [2].

The platform offers access to over 200 models, including Gemini 3.1 Pro, Nano Banana 2, Gemma open models, and Anthropic's Claude Opus, Sonnet and Haiku, among them the just-released Opus 4.7 [1][2]. Google emphasized that Vertex AI services will now flow through Agent Platform exclusively, making it the central hub for enterprise AI development.

The Gemini Enterprise Agent Platform is organized around four pillars: build, scale, govern, and optimize [4]. Developers can design an agent's life cycle from start to finish using tools like Agent Studio, a low-code interface for creating agents using natural language, and an upgraded Agent Development Kit with a graph-based framework for orchestrating multiple agents working together [2][4].
MCP support and the tiered approach help developers maximize reasoning capabilities by structuring agents into sub-networks, enabling them to handle complex tasks [2]. Features like faster runtime and Memory Bank help agents delegate to each other more efficiently and operate with more context for longer, with persistent memory across sessions rather than starting from scratch each time [2][4].

As the challenge facing businesses shifts from building individual AI agents to managing hundreds or thousands of them at once, governance capabilities may matter most to enterprise buyers [4]. Google has baked security into the platform through tools such as Agent Identity, which assigns each agent a cryptographic ID with defined authorization policies, creating an auditable trail of every action [2][4].

Agent Gateway acts as the enforcement layer for agent ecosystems, protecting against prompt injection, tool poisoning, and data leakage, while Agent Anomaly Detection flags suspicious behavior by analyzing the intent behind agent actions [4]. For testing before deployment, Agent Simulation lets developers "stress-test your agents against real-world scenarios before they ship" [2]. Google emphasized that the platform "provides the same level of oversight and auditability found in essential business applications like payroll or quarterly financial reporting" [2].

While the Agent Platform targets technical teams, business users can access the Gemini Enterprise app, introduced in the fall, where they can work with agents built by IT or build their own for tasks like scheduling meetings, performing trigger-based processes, creating shortcuts for repetitive tasks, or creating and editing files without switching apps [1]. The app sits atop Agent Platform, which standardizes governance and security across both no-code and pro-code agents [2].

Google Cloud CEO Thomas Kurian told reporters that the early versions of AI models focused on answering questions, but now "we're seeing as the models evolve people wanting to delegate tasks and sequences of tasks to agents" [4]. The company also announced Workspace Intelligence, which uses Gemini reasoning to understand complex semantic relationships across Workspace app content, active projects, collaborators, and organizational domain knowledge [2].
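The Agent Identity scheme described above — a per-agent cryptographic ID that leaves an auditable trail mapped to authorization policies — can be sketched with standard-library primitives. This uses HMAC signing for brevity; a production system would use asymmetric signatures issued by a gateway, and every name here is a hypothetical illustration, not Google's implementation:

```python
import hashlib
import hmac
import json

class AgentIdentity:
    """Toy per-agent identity: the agent holds a secret key and signs every
    action, so an auditor holding the same key can verify the trail."""
    def __init__(self, agent_id: str, secret: bytes):
        self.agent_id = agent_id
        self._secret = secret
        self.audit_log: list[dict] = []

    def act(self, action: str) -> dict:
        record = {"agent": self.agent_id, "action": action}
        payload = json.dumps(record, sort_keys=True).encode()
        record["sig"] = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        self.audit_log.append(record)
        return record

def verify(record: dict, secret: bytes) -> bool:
    # Recompute the signature over everything except the signature itself.
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

agent = AgentIdentity("reconciliation-bot", secret=b"issued-by-gateway")
entry = agent.act("closed ledger 2026-03")
print(verify(entry, b"issued-by-gateway"))  # True: trail is authentic
print(verify(entry, b"wrong-key"))          # False: tampering detected
```

The governance payoff is the second check: any forged or altered record fails verification, which is what makes the action trail auditable rather than merely logged.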
Gutmans revealed that Google spent the past year and a half rethinking its data platform for the shift to agent scale, with the arrival of Gemini 2.5 representing a tipping point in reasoning capability [3]. "We've completely re-engineered every single one of our agents in the last year," he said, including conversation analytics, data science, and data engineering agents [3]. The company has roughly 80 data-related announcements at the conference, with nearly every agent product rebuilt in the past year.

Approaches that required months of manual ontology-building are no longer necessary. "A year ago, people would be like, 'Let me get Palantir and get 20 people and work for six months and build an ontology.' That's not how you would approach it anymore," Gutmans explained [3]. The new Knowledge Catalog is designed to make the roughly 90 percent of enterprise data that remains unstructured available to agents without requiring armies of data engineers to prepare it manually.

Pichai cited internal adoption statistics as evidence of a shift toward agentic workflows, noting that 75 percent of all code at Google is now AI-generated and approved by engineers, up from 50 percent last fall [4]. The Gemini app has reached 750 million monthly active users as of Q4 2025, while AI Overviews reach two billion monthly users across more than 200 countries [5]. The Gemini API processed 85 billion requests in January 2026 alone, a 142% increase from 35 billion in March 2025, with eight million paid Gemini Enterprise seats deployed across 2,800 companies [5].

Gutmans argued that the integrated stack becomes critical as enterprises move to agent scale, where the economics of running agents rewards providers that control more of the stack. "You're going to have to bend the price-performance curve or it's going to be too expensive," he said, emphasizing that scaling the management and deployment of AI agents requires tight integration between infrastructure, models, and data platforms [3]. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years, with existing customers outpacing their own commitments by 30% [5].
Summarized by Navi