11 Sources
[1]
Google makes an interesting choice with its new agent building tool for enterprises | TechCrunch
Google CEO Sundar Pichai opened the Google Cloud Next conference on Wednesday with a video in which he announced one of the company's biggest new products: Gemini Enterprise Agent Platform. Google's tool is intended for building and managing agents at scale. This is Google's answer to Amazon's Bedrock AgentCore and to Microsoft Foundry. Given that AI, and agents in particular, are furthest along for technical tasks like coding, and that the tech is so new to the enterprise that security remains a real concern, Google has made an interesting choice with this tool: Agent Platform is geared particularly toward IT and technical teams. The business folks, meanwhile, are directed toward what Google calls its Gemini Enterprise app, introduced in the fall. They can work with agents built by IT or build their own for tasks like scheduling meetings, performing trigger-based processes, creating shortcuts for repetitive tasks, or creating and editing files without needing to switch apps, Google says. Google also underscored that the underlying models these tools tap into include Google's own Gemini LLM and Nano Banana 2 image generator, as well as Anthropic's Claude. The company announced support for Claude Opus, Sonnet and Haiku -- in other words, flagship, reasoning, and lower-cost models, including the new Opus 4.7 that launched last week.
[2]
How Google just revamped Gemini Enterprise for the agentic era - here's what's new
A new Agent Platform streamlines automated work and security. As companies use more agents in their workflows, managing them securely and efficiently becomes a primary challenge. Google just created a possible solution, wrapped in the same accessible interface that many teams are used to. On Wednesday at Google Cloud Next, the company's annual enterprise conference, Google released its new Gemini Enterprise Agent Platform for developers. Evolved from Vertex AI, Agent Platform "brings together the model selection, model building, and tuning services of Vertex AI that customers love, along with new features for agent integration, security, DevOps, orchestration, and more," Google Cloud CEO Thomas Kurian said in the announcement. The platform revamps the current Gemini Enterprise experience and offers over 200 models, including Gemini 3.1 Pro, Nano Banana 2, Gemma open models, and competitive models from Anthropic, such as its just-released Opus 4.7. Since Agent Platform is built on Vertex, Google noted that those services will now flow through Agent Platform exclusively. In the platform, according to Google, developers can design an agent's life cycle start to finish, from building the agents themselves to scaling and governing them. MCP support and an upgraded Agent Development Kit help developers maximize reasoning capabilities by structuring agents into sub-networks. That tiered approach should set agents up to handle complex tasks, Google said, adding that other features like faster runtime and Memory Bank help agents delegate to each other more efficiently and operate with more context for longer. "Gemini Enterprise is now an end-to-end system for the agentic era, built for agents that can execute complex, multi-step work processes," Google said in the announcement.
The company also emphasized that it has baked security into the new platform through tools such as Agent Identity, which assigns each agent a cryptographic ID. If you'd rather not take any risks, however, you can use Google's new Agent Simulation tool to "stress-test your agents against real-world scenarios before they ship," the company said. Once developers are done building and testing, they can publish agents from the platform to the Gemini Enterprise app, where employees can run those agents or build their own with no-code or low-code options like Google's Agent Studio and Agent Designer. A Google employee demonstrated how users can deploy multiple agents in the enterprise app at once to tackle an inventory or marketing challenge, as if they were a team of workers. In the demo, each individual agent handled a specific element of a multi-step project for a furniture company, using the organization's Workspace contents to pull relevant data and strategy points. Running multiple autonomous agents can pose a host of privacy and security risks for any organization, especially when non-developer employees use them. Google emphasized that its revamped Gemini Enterprise addresses this by simplifying guardrails and permissions before users can access agents. The company said it "provides the same level of oversight and auditability found in essential business applications like payroll or quarterly financial reporting." The Gemini Enterprise app sits atop Agent Platform, which Google said standardizes governance and security. "We provide a single control plane for governance in Agent Platform, so every employee can use and share agents with full IT visibility," the company added. "Both no-code and pro-code agents are managed through a consistent model for identity, security, and auditing."
Google also announced Agentic Data Cloud, a new data architecture intended to help scale AI agents. Several new features let developers instantly query data without moving it out of AWS or Azure, leverage new data science tools across multiple surfaces, and enrich files with metadata to give agents more semantic context, among other capabilities. At the Workspace level, Google launched Workspace Intelligence, which uses Gemini reasoning to understand "complex semantic relationships within your Workspace apps (such as Docs, Slides, or Gmail) content, your active projects, your collaborators, and your organization's domain knowledge," the company wrote. While that may sound like what Gemini already does, Google framed Workspace Intelligence as an additional tool that Gemini will leverage when automating tasks such as slide generation and project prep. Google noted a few upgrades in the new feature, including proprietary infographics in Docs and advanced personalization tailored to a user's style. "Workspace Intelligence retrieves your relevant emails, chats, files, and information from the web to transform ideas into professionally formatted drafts that mimic your exact voice, brand, style, and company templates," Google said.
[3]
Google says it has all the answers for AI agent sprawl
As biz agentic bot-wrangling intensifies, company says AI orchestration, security and infrastructure tools on the way
Google Cloud Next: Google has overhauled its enterprise AI strategy in the wake of the agentic push across the biz landscape, rebranding and expanding its Vertex AI developer platform into what it now calls the Gemini Enterprise Agent Platform. It comes as the challenge facing businesses has shifted from building individual AI agents to managing hundreds or thousands of them at once - something Workday and others are trying to tackle too. "The early versions of AI models were really focused on answering questions that people had and assisting them with creative tasks. Now we're seeing as the models evolve people wanting to delegate tasks and sequences of tasks to agents," Google Cloud CEO Thomas Kurian told reporters during a press briefing. "And these agents then being able to turn around and use a computer, use all of GCP and Workspace as a tool." To meet the moment, Google rolled out infrastructure in the form of its eighth generation of TPU chips and security updates through its purchase of Wiz. Those announcements as well as the Gemini Enterprise Agent Platform are designed to give companies a single system for developing, deploying, governing, and monitoring AI agents across their organizations. Google says it can act as the connective layer between a company's data, its employees, and the growing fleet of autonomous agents that enterprises are beginning to rely on. "All the pieces are designed to do this," Kurian said in the briefing. "The security to protect these agents. Our data cloud to feed the agents context from within the system. Our AI infrastructure to optimize performance, scale and cost of how agents run. This year is the next evolution of where we see this AI technology going."
He said organizations are choosing Google Cloud because of its ability to deliver "a comprehensive backbone for innovation" rather than "individual services that can be cobbled together." Gemini Enterprise Agent Platform is organized around four pillars: build, scale, govern, and optimize. On the build side, Google introduced Agent Studio, a low-code interface for creating agents using natural language, alongside an upgraded Agent Development Kit with a new graph-based framework for orchestrating multiple agents working together, the company said during a media prebriefing. It also provides an agent registry that gives organizations a central catalog of each internal agent and tool, the company said. Also inside the new platform is an agent marketplace that offers pre-built agents from partners including Atlassian, Oracle, ServiceNow, and Workday. The platform includes Agent Runtime, a feature that Google says delivers sub-second cold starts and gives users the ability to provision new agents in seconds. It also supports long-running agents -- autonomous processes that can operate for hours or days on complex business workflows like financial reconciliation or sales prospecting. A new Memory Bank feature gives agents persistent, long-term memory across sessions rather than starting from scratch each time, the company said. But it is the governance capabilities that may matter most to enterprise buyers who fear that AI tools may proliferate across their organizations with limited oversight. Agent Identity assigns every agent a unique cryptographic ID with defined authorization policies, creating an auditable trail of every action, Google said. Agent Gateway, meanwhile, acts as the police for agent ecosystems, enforcing security policies and protecting against prompt injection, tool poisoning, and data leakage. 
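Google has not published implementation details for Agent Identity, but the idea described above — a unique cryptographic ID per agent plus an auditable trail of every action — can be sketched with standard-library primitives. Everything here (the class name, the HMAC-based signing) is an illustrative assumption, not Google's API; a real system would use asymmetric keys so auditors need not hold signing secrets.

```python
import hashlib
import hmac
import json
import uuid

# Illustrative sketch only: models a per-agent cryptographic identity
# and a signed, verifiable audit trail. Names and the HMAC scheme are
# assumptions, not Google's implementation.

class AgentIdentity:
    def __init__(self, name: str):
        self.name = name
        self.agent_id = str(uuid.uuid4())   # unique ID per agent
        self._key = uuid.uuid4().bytes      # stand-in for a real private key

    def sign_action(self, action: dict) -> dict:
        """Record an action with a signature binding it to this agent."""
        record = {"agent_id": self.agent_id, "action": action}
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(self._key, payload,
                                       hashlib.sha256).hexdigest()
        return record

    def verify(self, record: dict) -> bool:
        """An auditor holding the key can check any trail entry."""
        unsigned = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

agent = AgentIdentity("invoice-reconciler")
audit_log = [agent.sign_action({"tool": "bigquery", "op": "read"})]
assert all(agent.verify(entry) for entry in audit_log)
```

The point of the sketch is the auditability property: any tampering with a logged action, or an attempt to attribute it to a different agent ID, invalidates the signature.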
An Agent Anomaly Detection system flags suspicious behavior by analyzing the intent behind agent actions, and gives users the chance to stop it before it goes rogue. Google also described tools for fine-tuning agents, such as Agent Simulation for stress-testing them against synthetic interactions before deployment. Agent Evaluation scores live performance, while Agent Observability dashboards trace execution paths and diagnose problems in real time for rapid debugging, the cloud giant told reporters. Google said the Gemini Enterprise app -- the consumer-facing side of the platform -- is a place where non-technical employees can build and manage their own agents using Agent Designer. Users can create schedule- or trigger-based agents to automate multi-step processes, while an Inbox in Gemini Enterprise gives those users a central hub for monitoring agent activity with notifications sorted into categories like "Needs your input," "Errors," and "Completed." Google CEO Sundar Pichai said, based on internal adoption statistics, there is evidence of a shift toward agentic workflows. He said 75 percent of all code at Google is now AI-generated and approved by engineers, up from 50 percent last (northern hemisphere) fall. In a blog, he described a recent internal code migration completed by agents and engineers working together that "was completed six times faster than was possible a year ago with engineers alone." Its tools are surging in popularity, Google claims, with nearly 75 percent of its Cloud customers using AI products, while Gemini Enterprise saw 40 percent growth in paid monthly active users quarter over quarter in Q1, and Google's first-party models now process more than 16 billion tokens per minute via direct API use, up from 10 billion the prior quarter. There also appears to be a lot of token-maxxing among customers.
Google said 330 Google Cloud customers each processed more than one trillion tokens, while 35 reached the 10-trillion-token milestone with its models. Within the press material for the show, several large customers provided testimonials about their own Gemini deployments. GE Appliances said it has more than 800 of Google's AI agents running across manufacturing, logistics, and supply chain operations. KPMG reported 90 percent Gemini Enterprise adoption among employees with more than 100 agents deployed in the first month. Tata Steel said it deployed over 300 specialized agents in nine months. Merck announced a partnership valued at up to $1 billion to build an agentic platform across its R&D, manufacturing, and commercial functions. The announcements land in an increasingly competitive market for enterprise AI platforms. Microsoft, Amazon Web Services, and Salesforce have all made a push into agent orchestration and management in recent months. Google's approach leans heavily on vertical integration, with the hope that designing chips, models, infrastructure, and application layers together produces better results than assembling components from different vendors. Google also announced a $750 million fund to support its partner ecosystem in building and deploying agentic AI, along with agreements with McKinsey, Deloitte, and other consulting firms that will receive early access to upcoming models from Google DeepMind. ®
[4]
Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager
Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog that doubled in a year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an "agent manager" and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google's full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate. Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% and more than double the roughly $155 billion of a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as "a fundamental rewiring of technology and an accelerant of human ingenuity." The money suggests he may not be exaggerating. The keynote, titled "The Agentic Cloud," was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. 
The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them. The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google's AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025, a 58% increase in a year. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google's generative models. Gemini 3 Pro has had, in Pichai's words, "the fastest adoption of any model in our history." These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve. Pichai's most consequential framing may have come in a podcast appearance earlier this month: "A lot of what are just information-seeking queries will be agentic in Search. You'll be completing tasks. You'll have many threads running." 
He described Search evolving from a retrieval engine into an "agent manager," an orchestration layer that dispatches AI agents to complete tasks on a user's behalf rather than returning a list of links. The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy's, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as "an indecisive shopper," said he is "looking forward to the day when agents can help me get from discovery to purchase." The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google's advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google's own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter. The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are "handing you the pieces, not the platform," leaving enterprise teams to integrate components themselves. 
The claim rests on Google's vertical integration: Ironwood TPUs, plus a forthcoming eighth generation split into Broadcom-designed training chips and MediaTek-designed inference chips, provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof. The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google's eight million paid Gemini Enterprise seats in roughly four months represent a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one. The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the "top question is definitely around compute capacity and all the constraints, be it power, land, supply chain," and expects Google to remain supply-constrained through 2026. The backlog provides the justification.
At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made. Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.
[5]
Google Cloud Next 2026: AI agents, A2A protocol, Workspace Studio, and the full-stack bet against OpenAI and Anthropic
Summary: Google rebranded and consolidated its AI platform at Cloud Next 2026, renaming Vertex AI to the Gemini Enterprise Agent Platform and absorbing Agentspace into a unified Gemini Enterprise product. The announcements include Workspace Studio (no-code agent builder), 200+ models in the Model Garden including Anthropic Claude, partner agents from Box, Workday, Salesforce, and ServiceNow, ADK v1.0 stable releases across four languages, Project Mariner (web-browsing agent), managed MCP servers with Apigee as an API-to-agent bridge, and A2A protocol v1.0 in production at 150 organisations. Kurian framed the strategy as owning the full stack from chip to inbox while competitors "hand you the pieces, not the platform." Google used the opening keynote of Cloud Next 2026 on Tuesday to unveil what amounts to a full rebranding and consolidation of its AI platform around agents. Vertex AI is now the Gemini Enterprise Agent Platform. Google Agentspace, the employee-facing AI assistant, has been absorbed into a unified product called Gemini Enterprise. The announcements span a no-code agent builder for Google Workspace, a redesigned developer platform with more than 200 models including third-party options such as Anthropic's Claude, a web-browsing agent called Project Mariner, managed MCP servers across Google Cloud services, and the production-grade Agent2Agent protocol for cross-platform agent communication. Thomas Kurian, Google Cloud's chief executive, titled the keynote "The Agentic Cloud" and drew a deliberate contrast with competitors: other vendors, he said, are "handing you the pieces, not the platform," leaving teams to integrate components themselves. The timing is deliberate. OpenAI's Operator is scoring 87% on complex browser task benchmarks and the company has recruited Cognizant and CGI to push its Codex coding agent into enterprise software shops, with enterprise revenue now accounting for 40% of OpenAI's total. 
Anthropic has launched a marketplace for Claude-powered enterprise tools and its Model Context Protocol has reached 10,000 servers and 97 million monthly SDK downloads. Google is fighting from third position in cloud market share, behind AWS and Microsoft Azure, but exited the fourth quarter of 2025 with the fastest growth rate of the three at 50% year on year, and is betting that vertical integration, owning the model, the runtime, the silicon, and the distribution channel through Workspace, gives it an advantage neither competitor can replicate. Google Workspace Studio is the most consumer-facing announcement. It is a no-code platform that lets business users build and deploy AI agents across Gmail, Docs, Sheets, Drive, Meet, and Chat by describing automations in plain language. A user can type "every Friday, ping me to update my tracker" and Gemini creates the automation. Workspace Studio connects to third-party applications including Asana, Jira, Mailchimp, and Salesforce, and can call external APIs via webhooks or run custom logic through Apps Script. It is rolling out to Google Workspace business, enterprise, and education customers. The developer-facing platform, now called the Gemini Enterprise Agent Platform, received deeper upgrades. Agent Designer, a visual flow canvas for building agent workflows, is in preview. Agent Engine Sessions and Memory Bank, which give agents persistent context across interactions, are generally available. A new Agent Garden provides prebuilt agent solutions for customer service, data analysis, and creative tasks. A free tier via Express mode lowers the entry barrier. The Model Garden now hosts more than 200 models spanning Google's own Gemini and Gemma families, third-party models including Anthropic Claude, and open models such as Llama. 
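The Workspace Studio example above — "every Friday, ping me to update my tracker" — reduces to a trigger plus an action, which is the shape any no-code builder compiles a plain-language request into. The classes and the weekly-trigger representation below are invented for illustration; this is the general pattern, not Google's actual internals.

```python
import datetime

# Toy model of a schedule-triggered automation of the kind Workspace
# Studio builds from plain language. The Trigger/Action split is the
# illustrative point; nothing here is Google's representation.

class WeeklyTrigger:
    def __init__(self, weekday: int):          # Monday = 0 ... Sunday = 6
        self.weekday = weekday

    def fires_on(self, day: datetime.date) -> bool:
        return day.weekday() == self.weekday

class Automation:
    def __init__(self, trigger: WeeklyTrigger, action):
        self.trigger = trigger
        self.action = action

    def tick(self, day: datetime.date):
        """Run the action if the trigger fires on this day, else no-op."""
        if self.trigger.fires_on(day):
            return self.action()
        return None

# "Every Friday, ping me to update my tracker"
reminder = Automation(WeeklyTrigger(weekday=4),
                      lambda: "Reminder: update your tracker")

# 2026-01-02 is a Friday, so the automation fires.
assert reminder.tick(datetime.date(2026, 1, 2)) == "Reminder: update your tracker"
```

Trigger-based agents (on file change, on form submission) follow the same pattern with a different `fires_on` predicate.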
Google also announced six new agents for data engineering and coding in BigQuery, including a data engineering agent that automates pipeline creation from natural language prompts and a code interpreter that translates queries into executable Python with visualisations. Partner agents from Box, Workday, Salesforce, ServiceNow, Dun and Bradstreet, and S&P Global are integrated into the platform, giving enterprise customers prebuilt capabilities for document intelligence, HR self-service, IT operations, and financial data. Project Mariner, Google DeepMind's web-browsing agent powered by Gemini 2.0, scores 83.5% on the WebVoyager benchmark and handles ten concurrent tasks on cloud-based virtual machines. It automates shopping, information retrieval, and form-filling, and is available to Google AI Ultra subscribers in the United States. The roadmap includes a visual builder called Mariner Studio in the second quarter, cross-device synchronisation in the third quarter, and an agent marketplace in the fourth quarter. The most strategically significant announcement may be the least visible to end users. Google's Agent2Agent (A2A) protocol, originally launched with more than 50 technology partners, has reached 150 organisations in production, not pilot, routing real tasks between agents built on different platforms. The protocol is now governed by the Linux Foundation's Agentic AI Foundation and has reached version 1.2, with signed agent cards using cryptographic signatures for domain verification. Microsoft, AWS, Salesforce, SAP, and ServiceNow are running A2A in production environments. A2A is designed to complement rather than compete with Anthropic's Model Context Protocol (MCP). MCP handles how an agent connects to tools and data sources. A2A handles how agents communicate with each other across organisational and platform boundaries. 
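The "signed agent cards" mentioned above are, in the A2A design, JSON documents an agent publishes (conventionally at a well-known URL) so other agents can discover and delegate to it. The sketch below shows the rough shape and a minimal client-side validation step; the field names are simplified assumptions, not the exact A2A schema, and the signature layer is omitted.

```python
import json

# Rough shape of an A2A-style agent card: the discovery document an
# agent publishes so peers can find it. Field names are simplified
# from the spec; the endpoint URL is hypothetical.

agent_card = {
    "name": "it-asset-agent",
    "description": "Answers IT asset queries for the org",
    "url": "https://agents.example.com/it-asset",
    "version": "1.0",
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "asset-lookup", "description": "Look up an asset by tag"}
    ],
}

def validate_card(card: dict) -> bool:
    """Minimal structural check a client might run before delegating."""
    required = {"name", "url", "version", "skills"}
    return required <= card.keys() and all("id" in s for s in card["skills"])

# A client would fetch the card over HTTPS, verify its signature
# against the publishing domain, then validate and cache it.
assert validate_card(json.loads(json.dumps(agent_card)))
```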
Google adopted MCP across its own services in December 2025, launching fully managed remote MCP servers for Google Maps, BigQuery, Compute Engine, and Kubernetes Engine, with Cloud Run, Cloud Storage, AlloyDB, Cloud SQL, Spanner, Looker, and Pub/Sub on the roadmap. Apigee, Google's API management platform, now functions as an MCP bridge, translating any standard API into a discoverable agent tool with existing security and governance controls. Google is simultaneously positioning A2A as the standard for the layer above: the orchestration of multiple agents from multiple vendors working together on a single task. The practical implication is that a Salesforce agent built on Agentforce can hand off a task to a Google agent running on Vertex AI, which can query a ServiceNow agent for IT asset data, all through A2A without any of the three systems needing to understand each other's internal architecture. Native A2A support is now built into Google's Agent Development Kit, LangGraph, CrewAI, LlamaIndex Agents, Semantic Kernel, and AutoGen. Google's open-source Agent Development Kit reached stable v1.0 releases across Python, Go, and Java, with TypeScript support also available. It is a code-first framework optimised for Gemini but model-agnostic and deployable to any container or Kubernetes environment. The security layer includes Model Armor for defence against indirect prompt injection, zero-trust architecture applied to decentralised agent systems, and access management through Google Cloud IAM with audit logging. OpenAI's own enterprise agent push through Codex and systems integrator partnerships has reached three million weekly users. Anthropic's enterprise marketplace for Claude-powered tools is building an ecosystem through partners including Snowflake. Microsoft's Copilot is embedded in virtually every Fortune 500 company. AWS has Bedrock with its own agents framework maturing rapidly. The enterprise AI agent market is not a two-horse race. 
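The cross-vendor handoff described above — a Salesforce agent delegating to a Google agent, which in turn queries a ServiceNow agent — can be modeled as message passing over JSON-RPC, one of the transports A2A actually uses. Only the JSON-RPC 2.0 envelope shape below follows the standard; the in-process registry, method names, and stubbed data are all invented for illustration.

```python
import json

# Toy A2A-style relay: each "agent" sees only JSON-RPC envelopes,
# never the other agents' internals. Registry and methods are
# illustrative assumptions.

registry = {}

def agent(name):
    def register(fn):
        registry[name] = fn
        return fn
    return register

def send(target: str, method: str, params: dict) -> dict:
    """Deliver a JSON-RPC 2.0 request and return the response envelope."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    result = registry[target](json.loads(json.dumps(request)))  # wire round-trip
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

@agent("servicenow")
def servicenow_agent(req):
    # Leaf agent: answers asset queries from its own (stubbed) data.
    return {"asset": req["params"]["tag"], "owner": "facilities"}

@agent("google")
def google_agent(req):
    # Mid-chain agent: needs asset data, so it delegates onward.
    asset = send("servicenow", "asset.lookup", {"tag": req["params"]["tag"]})
    return {"summary": f"Asset {asset['result']['asset']} belongs to "
                       f"{asset['result']['owner']}"}

# A Salesforce-side agent kicks off the chain without knowing how
# the downstream agents are built.
reply = send("google", "task.summarize_asset", {"tag": "LAP-0042"})
```

The design point is the one the article makes: no agent in the chain needs to understand another's internal architecture, only the shared envelope.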
It is a five-way contest in which each competitor has a structural advantage the others lack. OpenAI has the strongest consumer brand and the most advanced reasoning models. Anthropic has the most trusted safety positioning and the fastest-growing enterprise revenue. Microsoft has the deepest enterprise distribution through Office and Azure. AWS has the largest cloud infrastructure base and the strongest developer gravity. Google's argument is that it is the only company that owns all four layers of the stack: the custom silicon (Ironwood TPUs), the frontier models (Gemini), the cloud platform (now unified as the Gemini Enterprise Agent Platform), and the enterprise distribution channel (Workspace with more than three billion users across Google's productivity tools). Kurian framed the strategy explicitly: "If you want to adopt a technology successfully, you need to pick a few important projects and do them well, rather than spraying on a lot of little projects." No other competitor controls the full vertical from chip to application. Google's own AI Agent Trends report, published ahead of the conference, found that 89% of business teams are already using AI agents and the average organisation runs 12. The most common enterprise use cases are customer service at 49%, marketing at 46%, security operations at 46%, and IT support at 45%. Early customer deployments suggest the productivity claims are not purely theoretical: Danfoss, the Danish industrial manufacturer, automated 80% of transactional decisions in email-based order processing using Google's agents, reducing response times from 42 hours to near real-time. Suzano, a Brazilian pulp and paper company, built an agent with Gemini Pro that translates natural language into SQL queries, cutting query time by 95% for 50,000 employees. The agents run on Google's Gemini model family, with the Gemini 2.5 generation being retired in October in favour of the 3.x line. 
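A natural-language-to-SQL agent of the kind Suzano describes follows a simple pipeline: the model translates the question into SQL, and the agent executes it against the warehouse. The sketch below stubs the model step with a canned translation (a real agent would prompt Gemini with the schema) and runs against an in-memory SQLite table; everything except the pipeline shape is an assumption.

```python
import sqlite3

# Pipeline sketch for an NL-to-SQL agent: translate, then execute.
# The "model" is a stub; a real agent would call an LLM with the
# schema and question in the prompt.

def translate_to_sql(question: str, schema: str) -> str:
    # Stand-in for the LLM call.
    canned = {
        "how many orders shipped late?":
            "SELECT COUNT(*) FROM orders WHERE shipped_days_late > 0"
    }
    return canned[question.lower()]

def run_agent(question: str, conn: sqlite3.Connection) -> list:
    schema = "orders(id INTEGER, shipped_days_late INTEGER)"
    sql = translate_to_sql(question, schema)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, shipped_days_late INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 0), (2, 3), (3, 1)])
late = run_agent("How many orders shipped late?", conn)  # [(2,)]
```

In production the execute step, not the translate step, is where governance bites: the agent's credentials, not the user's prompt, determine which tables the generated SQL may touch.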
Gemini 3 Pro and Gemini 3 Flash, released in late 2025 and iterated through early 2026, provide the reasoning backbone. Gemini 3 Flash delivers a 15% improvement in overall accuracy over Gemini 2.5 Flash and is optimised for high-frequency agentic workflows and real-time processing. Gemini 3.1 Pro, the most advanced reasoning variant, is available in preview. A new experimental model, GLM 5, targets complex systems engineering and long-horizon agentic tasks through the Model Garden. Gemini 3.2 is expected to be formally announced during the conference, with an expanded context window beyond one million tokens and optimised parameter counts for reduced inference latency. Demis Hassabis, DeepMind's chief executive, stated in January that his team is "focusing on Gemini 4 this year." Google also recently launched Gemma 4 open models under Apache 2.0 licensing, built from the same research as Gemini 3 and providing an open-weight alternative for enterprise customers who need to run models on their own infrastructure. The infrastructure beneath the models is equally central to the pitch. Ironwood, Google's seventh-generation TPU announced the same day, delivers 4.6 petaFLOPS per chip and scales to 9,216-chip superpods producing 42.5 exaFLOPS. Anthropic has committed to up to one million Ironwood units. The custom silicon means Google can offer inference at costs that customers buying Nvidia GPUs at retail cannot match, which, in a market where inference is the dominant and growing expense, translates directly into pricing power for the agent services that run on top. Google Cloud holds roughly 11% of the cloud infrastructure market. AWS holds 31%. Azure holds 25%. The gap is significant and Cloud Next will not close it. But the agentic era, if it materialises at the scale Google is projecting, reshuffles the competitive dynamics in ways that favour a company with a vertically integrated stack over companies that assemble their AI capabilities from multiple vendors.
Google is betting that the enterprise customer who adopts AI agents at scale will choose the platform where the model, the runtime, the silicon, the governance, and the productivity suite are all built by the same company and optimised to work together. It is a large bet. Cloud Next 2026 is where Google is asking enterprises to take it.
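The Ironwood figures quoted above are internally consistent, and the check is one line of arithmetic: 9,216 chips at 4.6 petaFLOPS each gives roughly 42.4 exaFLOPS, matching the quoted 42.5 to rounding.

```python
# Sanity-check the Ironwood superpod claim: chips x per-chip petaFLOPS -> exaFLOPS.
CHIPS_PER_SUPERPOD = 9_216
PFLOPS_PER_CHIP = 4.6

total_eflops = CHIPS_PER_SUPERPOD * PFLOPS_PER_CHIP / 1_000  # 1 EFLOPS = 1,000 PFLOPS
print(f"{total_eflops:.1f} exaFLOPS")  # ~42.4, consistent with the quoted 42.5
```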
[6]
Gemini Enterprise Agent Platform lets you build, govern, and optimize your agents.
Gemini Enterprise Agent Platform is our new developer platform that has everything your technical teams need to build, scale, govern and optimize agents. Think of it as a one-stop-shop for all of your autonomous agents, built on top of our leading infrastructure and integrated with our data and security capabilities. This new platform, announced at Google Cloud Next '26, brings the model building and tuning services of Vertex AI together with new features for agent integration, security, DevOps and more. Agent Platform is designed to flex to your team's unique needs and provides access to Gemini 3.1 Pro, Gemini 3.1 Flash Image (Nano Banana 2) and Lyria 3. It also supports Anthropic's Claude Opus, Sonnet and Haiku. Plus, Agent Platform integrates with the Gemini Enterprise app, which acts as the front door for AI for every employee. Learn more about Gemini Enterprise Agent Platform on the Cloud blog.
[7]
Maximizing Gemini: Google Cloud makes its bid to build the operating system for enterprise AI - SiliconANGLE
Google LLC has emerged as the only cloud "hyperscaler" with a leading frontier artificial intelligence large language model - Gemini - and today it issued a raft of announcements designed to capitalize on that current advantage. The search giant's cloud unit launched the Gemini Enterprise Agent Platform as its new hub for building AI agents. Google also unveiled a new Gemini Enterprise application designed to transition AI from an isolated tool into a secure, collaborative autonomous engineer for the enterprise. The latest releases were described by Google Cloud Chief Executive Thomas Kurian (pictured) as the next chapter in the ongoing AI saga. "You have moved beyond the pilot, the experimental phase is behind us," Kurian said during his keynote address at Google Cloud Next in Las Vegas. "How do you move AI into your entire enterprise? The answer is a unified stack." As SiliconANGLE analysts have noted, Google is one of the few key tech players that has the resources to optimize the stack end-to-end. Its focus, based on this week's announcements at Google Cloud Next, has been on maximizing the compute layer, the global network, security, data engines and the application platform to generate enterprise AI value. Gemini plays a central role in this strategy, as evidenced by its integration in a multitude of the announcements made today. The new Gemini Enterprise application is designed to solve frustrations around siloed AI agents that have proven to be tough to oversee. It adds a new "Inbox" for agentic management, providing a more centralized command for guiding and managing agents in use. Gemini also powers the newly announced Data Agent Kit, a data engineering experience for leveraging favored practitioner tools, and a new shared workspace feature, called Projects, for pivoting Gemini from a solo AI assistant to a collaborative tool.
Gemini was featured prominently in Google Cloud's security announcements, wrapped around new governance tooling and agentic identity solutions. "We are moving in a bold and responsible way," said Sundar Pichai, CEO of Google and its parent company Alphabet Inc., who spoke to the conference in a prerecorded video. "Think of it as mission control for the agentic enterprise. One thing is perfectly clear: We are firmly in the agentic Gemini era." Being "mission control" for the agentic world will still require powerful hardware that can run the models for delivering the brainpower behind reasoning machines. Google addressed this as well with the announcement today of two new Tensor Processing Units or TPUs. The company introduced the TPU 8t and TPU 8i, custom silicon designed to serve as the workhorses for model training and inference. TPU 8t employs a specialized accelerator to address memory access issues for LLMs and memory bandwidth optimization problems that have hindered progress in AI deployment. "[TPU] 8t is a powerhouse optimized for training," Amin Vahdat, chief technologist for AI Infrastructure at Google, said in a presentation today. "We can now turn months of training into weeks." The custom-designed TPU 8i is architected to host a larger key-value cache at inference time for LLMs, which can significantly accelerate text generation. The technology behind the 8i design improves latency, another roadblock for AI, by shrinking the network diameter and the number of hops a data packet must take to cross the system. "We've finally broken the memory wall that slows long context decoding," Vahdat said. Though Google's announcements this week underscored its confidence in Gemini to anchor an agentic AI strategy, statements by company executives pointed toward a development worth watching in the evolution of AI for the enterprise. 
Competition for enterprise market share in enterprise AI will rely on the ability of the tech industry's major players to serve as the control layer where AI does its work. Pichai alluded to this in his description of "mission control," and Google's announcements this week of new features such as Agent-to-Agent Orchestration, Agent Gateway and Agent Observability spotlight the need for bringing a measure of order into the AI equation. "We built the agent platform to manage the entire lifecycle of an agent," Kurian noted. There are indications that Google's strategy is beginning to translate into financial results and market momentum. Alphabet reported 48% revenue growth year-over-year for its cloud operations in the fourth quarter of 2025, a number that represented the fastest growth rate among the "Big Three" hyperscalers. Cloud backlog also surged 55% quarter-over-quarter. Data points such as these offer evidence that the machine learning and AI wave is carrying Google Cloud to more success than it has previously seen. Google's bid to be the operating system for enterprise AI got much reinforcement this week and its future success will likely depend on whether this message influences the growing number of users who are embracing AI to get work done. "Companies are not just redesigning workflows, they are turning their employees into AI builders," Kurian said. "We offer you an integrated stack with the freedom to choose the world's best chips and models. This platform is ready, so what will each of you build?"
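Vahdat's "memory wall" is concrete: in long-context decoding, the key-value cache rather than the model weights dominates memory. A back-of-envelope estimate under assumed model dimensions (the parameters below are illustrative, not Gemini's actual configuration):

```python
# Rough per-sequence KV-cache size: 2 tensors (K and V) per layer, each
# n_kv_heads x head_dim wide, one entry per token, stored in 2-byte fp16.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical mid-sized model decoding at a 128K-token context.
size = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=131_072)
print(f"{size / 2**30:.1f} GiB per sequence")  # -> 16.0 GiB; grows linearly with context
```

At these assumed sizes a single 128K-token sequence already consumes 16 GiB of cache, which is why hardware that can hold a larger KV cache close to the compute directly speeds up generation.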
[8]
The agent control plane hits overdrive at Next 2026 - SiliconANGLE
The decisive layer in AI is still unclaimed: theCUBE's Google Cloud Next day one keynote analysis The fight for the agent control plane is underway -- and it might determine who controls enterprise AI for the next decade. Google LLC came into Google Cloud Next 2026 with a clearer positioning for Gemini: less as a standalone model and more as a connective layer. The emphasis is shifting toward how it ties together data systems, applications and the agent runtimes enterprises are starting to move into production, according to John Furrier (pictured, right), co-founder and chief executive officer of SiliconANGLE Media Inc. "The control plane is that horizontal layer that moves data around and it connects to all the systems," Furrier said. "It's like the main nerve center. It's like the backbone, the spine of all the systems -- and whoever owns the control plane kind of wins." Furrier and co-host Alison Kosik (left) conducted a day one keynote analysis at Google Cloud Next, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed Google's competitive positioning and what enterprise AI adoption really requires. No hyperscaler has fully established control over enterprise AI or the agent control plane -- a gap that this year's conference is focused on addressing. Multi-agent usage on the Databricks platform grew 327% in just four months, a signal that production deployment has already crossed an inflection point, Furrier emphasized. That growth underscores the stakes for Google: As agent orchestration becomes the enterprise default, the platform that routes those workflows wins. "The AI-native applications are real and you're starting to see coding become almost done by agents 100%," he said. "I saw Databricks set a stat that said they've crossed over -- less humans coding than machines. That's a major milestone." The model leaderboard, meanwhile, may be the wrong scoreboard entirely.
Enterprise value is being created at the systems layer -- in the infrastructure, data pipelines and agent runtimes that models run on -- not in model capability alone, Furrier noted. That's the layer Google is targeting with Gemini, positioning itself as the platform agents depend on to function. "As I pointed out ... all the enterprises and all the real action is not what the models [are,] that's what the models are interfacing with," he said. "Those are the systems." But a strong product won't be enough on its own to navigate the contested market. Agentic AI is restructuring the enterprise from the inside out, with CFOs becoming operators and people officers managing agent workforces, Furrier explained. Consequently, the unit of value inside these organizations is shifting. "You have a new kind of currency going on with tokens and that's changing the organizational structures," he said. "That's changing how people are organizing their teams. That's changing how people work. It's a complete reset in the corporate world." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Google Cloud Next.
[9]
With Gemini Enterprise Agent Platform, Google brings agentic development and control under one roof - SiliconANGLE
Google Cloud is taking a massive leap toward building the autonomous enterprise with the launch of the Gemini Enterprise Agent Platform, an evolution of the existing Vertex AI platform that becomes its new hub for building artificial intelligence agents. Announced at Google Cloud Next 2026 in Las Vegas, the new offering brings together all of the model selection, development and agent building tools found in Vertex AI, together with new features designed to facilitate agent integration, orchestration, DevOps and security. It's positioned as a single destination for technical teams to develop AI agents that can then be delivered seamlessly to employees via the new Gemini Enterprise application also launched today, enabling every worker to begin automating their work. Google Cloud Vice President of Product Management Michael Gerstenhaber said in a blog post that the original Vertex AI platform was designed to enable the massive engineering required for building tools in the early days of generative AI. "But today, we're managing a different level of complexity with agents interacting across multiple systems -- and often without security and governance guardrails," he wrote. "To move toward a truly autonomous enterprise, one where agents can act with the same independence and reliability as a member of your team, you need a foundation that can sustain that level of trust." Moving forward, Gerstenhaber said, all of the services previously housed in Vertex AI, along with all of its future roadmap developments, can now be found within the Gemini Enterprise Agent Platform, along with everything needed to deliver multi-agent teams into the enterprise. The revamped platform is much more than just a facelift, with Gemini Enterprise Agent Platform designed to provide the infrastructure that handles the entire lifecycle of AI agents.
According to Gerstenhaber, Google has broken this down into four main pillars: Building, scaling, governing and optimizing autonomous workforces. For those building AI agents, the main focus is on the new Agent Studio and Agent Development Kit or ADK - both of which have received significant upgrades. The first is designed for regular business users that need to design their own agents, and includes a low-code visual interface that makes it simple to drag-and-drop agent logic into place. For hardcore developers, the ADK is where it's at. Builders will be able to unlock more powerful reasoning by accessing the most powerful AI models and organizing their agents into a network of sub-agents capable of solving complex problems, using its new graph-based framework. Gerstenhaber said the new ADK supports native ecosystem integrations that make it simple to connect AI agents to internal data without building custom pipelines. Users will also be able to activate their data in platforms such as BigQuery and Pub/Sub with batch and event-driven agents in order to run massive, asynchronous tasks such as content evaluation and data analysis in the background. To scale AI agents from a fancy proof-of-concept to live environments, users need a platform that's able to handle the performance, state and security requirements of real-world work, and Gemini Enterprise Agent Platform delivers here too. It features a revamped Agent Runtime for the simple provisioning of new agents, plus support for multiday workflows to keep them running autonomously for days on end. There are tools for agent-to-agent orchestration too, enabling agents to easily delegate tasks to one another, so multiple specialized agents can work with one another on the most complex tasks given to them. To support the context required for agents running at large scale, Google has created a new Agent Memory Bank that dynamically generates and curates long-term memories from conversations. 
This can be accessed by tapping into new "Memory Profiles" that allow agents to recall high-accuracy details with low latency, ensuring that context is never lost. For governance, the Gemini Enterprise Agent Platform offers a secure-by-design architecture that applies enterprise policy controls to each agent that's deployed, whether customers build them themselves or source them from Google's partner ecosystem. These controls make it simple to assign each agent an Agent Identity, just as each human has their own. Gerstenhaber explained that each agent receives its own unique cryptographic ID that leaves a clear and auditable trail for every action it takes, and that these can be mapped back to predefined authorization policies. Users will also be able to maintain a central library of approved tools agents can access through the new Agent Registry, Gerstenhaber said. It indexes each internal agent, tool and agent skill, simplifying the discovery process while ensuring they can only access approved assets. Meanwhile, the Agent Gateway is designed to act like an air traffic control tower, allowing administrators to oversee their entire fleet of AI agents and enforce consistent security policies across them all. There's also a comprehensive range of tools for protecting agents against prompt attacks and monitoring their behavior in real time, found under the Agent Security dashboard. Finally, for optimizing agents, the Gemini Enterprise Agent Platform provides tools for testing them before they're shipped and then monitoring their performance in production environments. With Agent Simulation, users can test how their agents work on synthetic workloads using virtualized tools in a controlled environment. Once they're up and running, they'll be able to use the Agent Evaluation tools to continuously score each agent as it performs its work.
The Agent Observability tool enables them to dig even deeper and visually trace the complex reasoning of each agent and debug issues as they occur. Should an agent fail to perform as expected, users can then pull up the Agent Optimizer to automatically refine its system instructions and enhance its accuracy. Although Google will undoubtedly push customers to use its fleet of Gemini models, it's still maintaining its commitment to an open model ecosystem. Users will be able to enjoy "first-class access" to a selection of more than 200 models, including Gemini 3.1 Pro and Gemini 3.1 Flash, its open-source Gemma 4 models and Lyria 3 for creating music and audio. It also lists numerous third-party models, including Anthropic PBC's Claude 3.5 Sonnet and Haiku.
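The network of sub-agents described above is, at its core, a routing structure: a root agent decomposes a request and dispatches each piece to a specialist. A minimal stdlib sketch of the pattern, not the actual ADK API (ADK's real classes and signatures differ):

```python
# Toy sub-agent graph: a root agent routes each subtask to a named specialist.
class SubAgent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def run(self, payload):
        return f"{self.name}:{self.skill(payload)}"

class RootAgent:
    def __init__(self, routes):
        self.routes = routes  # subtask kind -> specialist sub-agent

    def run(self, subtasks):
        # Dispatch each (kind, payload) pair to the matching specialist.
        return [self.routes[kind].run(payload) for kind, payload in subtasks]

root = RootAgent({
    "sql":     SubAgent("analyst", lambda q: f"query({q})"),
    "summary": SubAgent("writer",  lambda t: t.upper()),
})
results = root.run([("sql", "revenue by region"), ("summary", "q4 notes")])
print(results)  # each result is tagged with the specialist that produced it
```

A real graph framework adds loops, branching, and shared state on top, but the delegation step, mapping a subtask to the agent whose skill matches it, is the same.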
[10]
Google Accelerates Agentic AI Shift With New Enterprise Platform | PYMNTS.com
The company announced these developments Wednesday (April 22) in conjunction with its Cloud Next event in Las Vegas. Google's new Gemini Enterprise Agent Platform provides a system for building, scaling, governing and optimizing agents. It builds upon the company's existing AI development platform, Vertex AI, by combining model selection, model building and agent building capabilities with new features for agent integration, DevOps, orchestration and security, according to a Wednesday press release. The company also introduced three new agents in Google Security Operations to help organizations defend against the malicious use of AI. These include a Threat Hunting agent that searches for novel attack patterns and stealthy adversary behaviors, a Detection Engineering agent that identifies coverage gaps and creates new detections for threat scenarios, and a Third-Party Context agent that enriches workflows with contextual data from third-party content, per a Wednesday press release. Google Cloud's new $750 million fund will provide resources and incentives to global consulting firms, systems integrators, software providers and channel partners to help their joint customers adopt agentic AI. Resources available to partners will include AI value assessments, Gemini proofs-of-concept, Gemini Enterprises practice building, agentic AI prototyping and development, Wiz security assessments and usage incentives, according to a Wednesday press release. The new agreement with Thinking Machines Lab will see Google Cloud provide the AI startup with additional AI infrastructure capabilities and capacity, including A4X Max VMs with Nvidia GB300 GPUs as well as services such as Kubernetes Engine, Spanner, Cluster Director, Cloud Storage and Anywhere Cache.
Myle Ott, founding researcher at Thinking Machines Lab, said in a Wednesday press release that this infrastructure got the company running "at record speed." Sundar Pichai, CEO of Google and Alphabet, said in a Wednesday blog post that the pace of technological change has never been faster than it has been over the past year. "Our first-party models now process more than 16 billion tokens per minute via direct API use by our customers, up from 10 billion last quarter," Pichai said.
[11]
Google puts Gemini Enterprise at the heart of the new agentic taskforce for enterprise automation - SiliconANGLE
Google Cloud is on a mission to accelerate the adoption of artificial intelligence agents across enterprise computing environments, paving the way for a new era where AI can automate many of the most complicated, multistep tasks currently performed by humans. To that end, it has announced a major revamp of Gemini Enterprise, saying it intends to transition AI from an isolated productivity tool into a "secure, collaborative autonomous engine" for business. "Companies are ready to build their agentic task force, but this demands doing so within a secure and governed environment," the company said. "This includes creating and deploying agents with their own identity, registry, and gateway so they can always be traced, monitored, and managed." Announced today at Google Cloud Next 2026, the new Gemini Enterprise application arrives at a critical juncture for enterprise AI. Though many organizations have experimented with large language models, the novelty of summarizing emails and generating code is wearing off. Companies are increasingly frustrated by the "human-in-the-loop" bottleneck - the need for a person to sit and prompt an AI through every single step of a multi-stage project. With Gemini Enterprise, Google wants to solve the problems around siloed AI agents that are difficult to monitor and lack the persistence and context to automate long-term tasks such as monthly financial reconciliations and multiday sales prospecting. By providing a secure, governed environment that gives agents their own identities, tool registries and memories, it believes it can finally convince enterprises to embrace autonomous systems in their most complex workloads. Gemini Enterprise will provide business workers with access to a new breed of "long-running" AI agents that can be built using the new Gemini Enterprise Agent Platform that was also announced today.
They'll be able to participate in agentic development too thanks to the newly enhanced Agent Designer tool, which provides low-code and no-code interfaces for them to build their own agents, using either natural language instructions or a visual flow designer. It combines generative intelligence with deterministic logic, or essentially strict business rules, to ensure that agents don't hallucinate or do anything that contradicts the company's compliance policies. Users will also be able to codify their unique expertise into "Skills" that essentially formalize specific workflows, such as applying brand guidelines to a project or formatting a report in a particular way, and save them so others can use the same actions. This means users will be able to avoid re-explaining the context each time they ask an agent to perform a more complex task. The agent will simply draw on the necessary skills to get the job done, only when they're required, to ensure it doesn't overload its reasoning process. In addition, Gemini Enterprise is adding a new "Inbox" for agentic management. It's basically a centralized command location for users to monitor, guide and securely manage all of the agents they're using. Notifications will be categorized into actionable groups, such as "needs your input," "errors" or "completed jobs," so users can quickly see at a glance how their agents are progressing. To address the "solo" nature of AI, Gemini Enterprise is adding new features called Projects and Canvas, which will transform it from a personal assistant into a team member that everyone can access. Projects provides a shared workspace for humans and agents to co-create, drawing on context from Google Workspace and Microsoft OneDrive. Meanwhile, Canvas is an integrated editor that enables teams to collaborate on Docs and Slides along with AI agents, eliminating the need to keep switching tabs. 
Google revealed that Gemini Enterprise now supports Bring Your Own Model Context Protocol, which can be thought of as a technical bridge that allows administrators to connect the platform to their internal tools and servers. With this, AI agents built with Gemini can now discover and use tools hosted on companies' private servers, whenever they need them to complete a task. There's also a new Agent Marketplace in the Agent Gallery, which allows enterprises to access specialized third-party agents developed by Google partners such as ServiceNow Inc., Oracle Corp. and Accenture Plc. Finally, Google said it's doubling down on governance in an effort to appease anyone concerned about what could possibly go wrong. For instance, each agent can be granted its own Agent Identity, essentially a traceable digital ID that allows its work to be tracked and audited. Admins can use the agent's digital ID to enforce "least privilege access" and eliminate any risk, while the new Agent Gateway helps to protect against the dangers of data leaks and prompt injection attacks, Google said. The message from Google is clear - the task force of the future won't just be made up of humans, but a hybrid of experts working alongside persistent, autonomous agents. As these updates roll out over the next few months, Google will help businesses cement a 360-degree view of their AI agents. The goal is to help them understand not only "what" their agents are doing, but also "why" they're doing it, while providing the necessary safeguards and controls to rein them in at any moment.
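Bring Your Own MCP boils down to a two-step contract: the agent lists the tools a private server advertises, then invokes one by name with structured arguments. A stdlib-only schematic of that flow (real MCP runs over JSON-RPC with typed schemas; the server and tool names here are invented):

```python
# Schematic of MCP-style tool discovery: a client lists tools a private
# server advertises, then invokes one by name. Real MCP uses JSON-RPC;
# this is stdlib-only shorthand for the same two-step flow.
class ToolServer:
    def __init__(self, tools):
        self._tools = tools  # name -> (description, callable)

    def list_tools(self):
        return [{"name": n, "description": d} for n, (d, _) in self._tools.items()]

    def call_tool(self, name, **kwargs):
        return self._tools[name][1](**kwargs)

server = ToolServer({
    "get_headcount": ("Employees per department",
                      lambda dept: {"engineering": 120, "sales": 45}[dept]),
})

# An agent discovers the available tools before deciding which one to use.
available = server.list_tools()
print([t["name"] for t in available])
print(server.call_tool("get_headcount", dept="sales"))
```

The discovery step is what makes the connection "bring your own": the platform never needs prior knowledge of the company's internal tools, only the ability to list and call them.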
Google Cloud Next 2026 saw the company unveil its rebranded Gemini Enterprise Agent Platform, consolidating Vertex AI into a unified system for building, managing, and securing AI agents at enterprise scale. With 750 million Gemini users, $70 billion in cloud revenue, and a $240 billion backlog, Google is positioning itself against Amazon, Microsoft, and OpenAI by owning the full stack from custom silicon to consumer distribution.

Google Cloud Next 2026 marked a strategic shift as CEO Sundar Pichai and Cloud CEO Thomas Kurian unveiled the Gemini Enterprise Agent Platform, a complete rebranding of Vertex AI designed to address the challenge of managing and deploying AI agents at enterprise scale [1]. The platform consolidates Google's AI infrastructure into what Kurian described as "an end-to-end system for the agentic era" rather than individual services that must be cobbled together [3]. This represents Google's direct answer to Amazon's Bedrock AgentCore and Microsoft Foundry as the competition for enterprise AI intensifies [1].
The numbers behind the announcement reveal the scale of Google's ambition. Google Cloud now generates over $70 billion in annual revenue with 48% year-over-year growth and a backlog of $240 billion that doubled in just one year [4]. The Gemini app has reached 750 million monthly active users as of Q4 2025, while the Gemini API processed 85 billion requests in January 2026 alone, a 142% increase from 35 billion in March 2025 [4]. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies [4].
In a notable strategic decision, Google positioned the Gemini Enterprise Agent Platform primarily for IT and technical teams, acknowledging that AI agents are furthest along for technical tasks like coding and that security remains a critical concern for enterprises [1]. The platform provides over 200 models including Gemini 3.1 Pro, Nano Banana 2, Gemma open models, and competitive options from Anthropic such as Claude Opus 4.7, Claude Sonnet, and Claude Haiku [2].
Security and governance capabilities distinguish the platform from competitors. Agent Identity assigns every agent a unique cryptographic ID with defined authorization policies, creating an auditable trail of every action [3]. Agent Gateway enforces security policies and protects against prompt injection, tool poisoning, and data leakage, while Agent Anomaly Detection flags suspicious behavior by analyzing the intent behind agent actions [3]. Google emphasized that the platform "provides the same level of oversight and auditability found in essential business applications like payroll or quarterly financial reporting" [2].
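A per-agent cryptographic identity with an auditable trail, as described above, can be approximated with stdlib primitives. This is a toy using shared-secret HMAC; a production system would use asymmetric keys under a managed identity service, and all names here are invented:

```python
import hmac, hashlib, json

# Toy audit trail: each agent signs its action records with its own key.
KEYS = {"report-agent": b"k1-secret", "mail-agent": b"k2-secret"}  # assumed key store

def sign_action(agent_id, action):
    record = json.dumps({"agent": agent_id, "action": action}, sort_keys=True)
    sig = hmac.new(KEYS[agent_id], record.encode(), hashlib.sha256).hexdigest()
    return {"record": record, "sig": sig}

def verify(entry):
    agent_id = json.loads(entry["record"])["agent"]
    expected = hmac.new(KEYS[agent_id], entry["record"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["sig"])

log = [sign_action("report-agent", "read:finance_db")]
print(verify(log[0]))  # True: the entry is untampered
log[0]["record"] = log[0]["record"].replace("read", "write")
print(verify(log[0]))  # False: any tampering breaks the signature
```

Because every action carries a signature tied to one agent's key, the log both attributes each action and detects after-the-fact edits.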
While developers work with the Agent Platform, business users gain access to Workspace Studio, a no-code agent builder that lets employees create automations across Gmail, Docs, Sheets, Drive, Meet, and Chat by describing tasks in plain language [5]. Users can type instructions like "every Friday, ping me to update my tracker" and Gemini creates the automation [5]. The tool connects to third-party applications including Asana, Jira, Mailchimp, and Salesforce, and can call external APIs via webhooks [5].
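An instruction like "every Friday, ping me to update my tracker" plausibly compiles down to a structured trigger/action rule before anything is scheduled; the schema below is invented purely to illustrate the shape:

```python
# Hypothetical structured form of a plain-language automation.
rule = {
    "trigger": {"type": "schedule", "day": "friday", "time": "09:00"},
    "action":  {"type": "notify", "message": "Update your tracker"},
}

def due(rule, day):
    # Fire only on the scheduled day of the week.
    return rule["trigger"]["type"] == "schedule" and rule["trigger"]["day"] == day

for day in ("thursday", "friday"):
    if due(rule, day):
        print(f"{day}: {rule['action']['message']}")
```

Separating the deterministic rule from the language model that writes it is what keeps such automations repeatable.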
The upgraded Agent Development Kit features a new graph-based framework for orchestrating multiple agents working together, with MCP support helping developers maximize reasoning capabilities by structuring agents into sub-networks [2]. Memory Bank gives agents persistent, long-term memory across sessions rather than starting from scratch each time, while a faster runtime helps agents delegate to each other more efficiently [2].
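Memory Bank's behaviour, persisting curated facts across sessions instead of replaying whole transcripts, can be sketched as a keyed store with a distillation step (the `remember:` convention here is a trivial stand-in for the real curation service):

```python
# Toy long-term memory store: per-agent/user memories persist across sessions.
class MemoryBank:
    def __init__(self):
        self._store = {}  # (agent_id, user_id) -> list of memory strings

    def curate(self, agent_id, user_id, transcript):
        # Stand-in "curation": keep only lines the user marked as lasting facts.
        facts = [line[len("remember:"):].strip()
                 for line in transcript if line.startswith("remember:")]
        self._store.setdefault((agent_id, user_id), []).extend(facts)

    def recall(self, agent_id, user_id):
        return self._store.get((agent_id, user_id), [])

bank = MemoryBank()
bank.curate("sales-agent", "u1", ["hello", "remember: prefers quarterly reports"])
bank.curate("sales-agent", "u1", ["remember: EMEA region owner"])

# A later session starts with prior context instead of a blank slate.
print(bank.recall("sales-agent", "u1"))
```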
Sundar Pichai framed the conference around Search evolving from a retrieval engine into an "agent manager" and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, positioning Google's full-stack integration from custom silicon to consumer distribution as an advantage competitors cannot replicate [4]. The company committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion spent last year [4].
Thomas Kurian drew a deliberate contrast with competitors, stating that other vendors are "handing you the pieces, not the platform," leaving teams to integrate components themselves [5]. This positions Google against OpenAI, whose Operator agent is scoring 87% on complex browser task benchmarks, and Anthropic, whose Model Context Protocol has reached 10,000 servers and 97 million monthly SDK downloads [5]. Partner agents from Box, Workday, Salesforce, ServiceNow, Dun and Bradstreet, and S&P Global are integrated into the platform [5].
Pichai revealed that 75% of all code at Google is now AI-generated and approved by engineers, up from 50% last fall, demonstrating internal adoption that signals a broader shift toward agentic enterprise workflows [3]. The company's eighth generation TPU chips provide the infrastructure foundation [3].
As businesses face the challenge of managing hundreds or thousands of AI agents simultaneously, Google's bet on vertical integration from chip design through consumer-facing applications represents a distinct approach in the intensifying competition for enterprise AI dominance.