3 Sources
[1]
Google embeds AI agents deep into its data stack - here's what they can do for you
Google is introducing powerful tech for agents and data. It is also introducing a series of data-centric agents. A new command-line AI coding tool is now available.

I am no stranger to hyperbolic claims from tech companies. Anyone who's on the receiving end of a firehose of press announcements related to AI understands. Everything is game-changing, world-changing, the most, the best, yada, yada, yada.

And then there's Google. Google is no stranger to hyperbole. But when a company so steeped in data management as part of its core DNA talks about "fundamental transformation," and says that the world is changing because "it's being re-engineered in real-time by data and AI," we can consider those claims fairly credible.

Just in time for Google Cloud Next Tokyo 2025, Google is making a series of announcements that herald a major change in how enterprises manage data. Yasmeen Ahmad, Google's managing director of Data Cloud, says in a blog post, "The way we interact with data is undergoing a fundamental transformation, moving beyond human-led analysis to a collaborative partnership with intelligent agents." She calls this the agentic shift, which she describes as "a new era where specialized AI agents work autonomously and cooperatively to unlock insights at a scale and speed that was previously unimaginable."

From almost any other company, claims like this would seem like just so much hot air. But Google is dropping a series of announcements about new offerings that provide real-world capabilities to data scientists and engineers in pretty tangible support of the claims.

There's a fairly fine line between AI chatbots and AI agents. Chatbots are conversational, while agents are tools that perform autonomous tasks. Some users employ chatbots to perform tasks, as I did when I used ChatGPT to analyze some business data.
Agents, like ChatGPT Agent, use a conversational interface to receive instructions. A good way to think of agents is as surrogate team members. Perhaps one agent does data normalization (cleaning up data), while another does migration. Each agent does one or more defined tasks using AI capabilities.

In this context, Google is looking at agents that can automate and simplify tasks for data workers, can communicate with each other, and can free professionals from tedious work so they can focus on "higher-value tasks." Google is also trying to get agents to work together in virtual teams.

There are, of course, questions about whether agents aren't actually freeing up the time of senior professionals, but are instead taking work away from more junior employees. On the other hand, I don't have anyone to do the grunt work when I'm fully immersed in a project. So anything I can hand off to an agent is more time for projects and writing.

With all these agents running around, traditional databases just aren't up to the task of keeping them fed. Agents do their reasoning and automation across silos. They need access to both historical and live data. Classic data management methods like real-time OLTP (online transaction processing) and deep-dive OLAP (online analytical processing) isolate data too much for AIs to gain insights from trends and current activities.

One way to unify these capabilities is by enhancing Google's database offerings. A few years ago, Google added a columnar engine to AlloyDB, the company's fully managed database service on Google Cloud Platform for PostgreSQL users, which is ideal for those who require a PostgreSQL-compatible solution. A columnar engine is one where workloads query specific columns of data, reading only the fields needed for analysis.
This leads to faster queries and allows for vectorized execution, where operations are applied to an entire column of data at once.

Now, Google is adding a columnar engine to Spanner, its globally distributed, strongly consistent database service that offers high availability and scalability, designed for enterprises needing global reach and high transactional integrity. This also adds power to BigQuery, Google's serverless, highly scalable, and cost-effective multi-cloud data warehouse designed for business agility. As the name implies, BigQuery is ideal for those who need to run fast, SQL-like queries on large datasets.

The company says this new columnar capability in Spanner speeds up analytical queries by something like 200x on live transactional data. With performance like that, we're talking instant responsiveness to real-time situations.

When building enterprise AI systems, you need agents to make decisions based on real data. Performing real-time actions based on hallucinated data can get ugly very quickly. This is where RAG (retrieval-augmented generation) comes in. Essentially, RAG combines large language models with real-time data access.

You can start to see how vectorizing search in Spanner and BigQuery becomes necessary when you're feeding in real-time data along with historical information. But getting vector search to work efficiently has traditionally been painful. Google is adding adaptive filtering in AlloyDB to automatically maintain vector indexes and optimize for fast queries on live operational data. Google is also introducing autonomous vector embedding generation in BigQuery, which automatically prepares and indexes multimodal data for vector search. This is a key step in creating a sort of semantic memory for agents.

The company is also introducing the ability to run AI queries right inside of BigQuery. This is, ahem, big.
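To make the RAG pattern described above concrete, here is a toy sketch: retrieve the most relevant live record, then splice it into the model prompt so the answer is grounded in real data rather than the model's memory. All names and records here are hypothetical, the retrieval is naive keyword matching rather than vector search, and the actual LLM call is left out.

```python
def retrieve(query_terms, records):
    """Score each record by naive keyword overlap; return the best match."""
    def score(rec):
        text = rec["text"].lower()
        return sum(term.lower() in text for term in query_terms)
    return max(records, key=score)

def build_prompt(question, record):
    """Augment the user's question with the retrieved context."""
    return (
        "Answer using only the context below.\n"
        f"Context: {record['text']}\n"
        f"Question: {question}"
    )

# Hypothetical live operational records.
records = [
    {"id": 1, "text": "Order 1042 shipped on August 5 and is in transit."},
    {"id": 2, "text": "Invoice 88 was paid in full on July 30."},
]

question = "When did order 1042 ship?"
context = retrieve(["order", "1042", "ship"], records)
prompt = build_prompt(question, context)  # this string would go to the LLM
```

A production system would replace the keyword scorer with a vector-similarity lookup against an index like the ones Google is describing, but the shape of the flow is the same.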
Now, BigQuery users can have AI do its magic across giant gobs of structured and unstructured data, ask complex questions (including subjective ones like "Which customers are frustrated?"), and get answers directly within existing analytics tools.

In addition to building a foundation for agentic cooperation and data access, Google is announcing a series of new capabilities that embed agents in its biggest data tools. Let's look at each in turn.

Data engineering agent: Built specifically for data engineers, this agent within BigQuery can simplify and automate complex data pipelines. The entire workflow can be driven by natural-language prompts, from data ingestion to transformations to data-quality assessment to normalization.

Spanner migration agent: Related to the data engineering agent, the Spanner migration agent can simplify data migration from legacy systems to Spanner. This sort of migration is normally exceptionally tedious and potentially dangerous, but now the agent can do most of the heavy lifting.

Data science agent: Data scientists focus on analyzing and interpreting complex data, while data engineers focus on data infrastructure. According to Google, the new data science agent "triggers entire autonomous analytical workflows, including exploratory data analysis, data cleaning, featurization, machine-learning predictions, and much more. It creates a plan, executes the code, reasons about the results, and presents its findings, all while allowing you to provide feedback and collaborate in sync."

Code interpreter: Built as an enhancement of the conversational analytics agent introduced last year, the code interpreter takes in business-analysis questions and converts them to Python code to prepare custom analysis for users. This all runs within Google Data Cloud and uses the Google Data Cloud security infrastructure.
It also includes an API available for developers to incorporate conversational analytics agent and code interpreter capabilities in custom code.

As part of this big series of announcements, Google is introducing an extension to Gemini CLI called Gemini CLI GitHub Actions. CLI stands for command-line interface, basically a terminal interface to your computer. Even though most users left the terminal behind when MS-DOS gave way to Windows, coders to this day make heavy use of the command line. Working in terminal mode lets coders add tools and control the coding process much faster than when they have to find and select items from menus and icons.

Last month, when Google introduced Gemini CLI, it basically made the features of the Gemini chatbot available in the terminal. Now, Google has extended that capability, providing some agentic features within the terminal environment.

Some of you may be wondering how this compares with Jules, the Google coding agent I wrote about in May. First, Jules works in a secure cloud VM, while Gemini CLI GitHub Actions runs in the terminal and integrates with GitHub Actions (the GitHub-based workflow tool). Google says there's a fairly narrow scope to Gemini CLI GitHub Actions compared to Jules. Jules can read your entire codebase, plan and present an approach to a coding challenge, and then execute on it. Gemini CLI GitHub Actions is specifically targeted at intelligent issue triage, accelerated pull-request reviews, and on-demand collaboration.

The issue-triage capability helps coders manage specific bug reports and feature requests. Pull requests are the way GitHub asks coders to confirm integrating coding changes into branches and master codebases. On-demand collaboration is essentially setting up a chat session whenever you want to talk about your code.
I could easily see a programmer using both. Jules would be great for bigger projects and larger swings, while Gemini CLI GitHub Actions would work well for quicker updates and fixes.

What do you think about the agentic shift Google is promoting? Have you started integrating intelligent agents into your own workflows? Which of Google's new data tools or capabilities intrigues you most -- the data engineering agent, the in-query AI reasoning, or something else? Do you see agents as helping senior professionals, replacing junior roles, or both? And how do you feel about running AI workflows directly in BigQuery? Let us know in the comments below.
[2]
Google Cloud's data agents promise to end the 80% toil problem plaguing enterprise data teams
Data doesn't just magically appear in the right place for enterprise analytics or AI; it has to be prepared and directed with data pipelines. That's the domain of data engineering, and it has long been one of the most thankless and tedious tasks that enterprises need to deal with. Today, Google Cloud is taking direct aim at the tedium of data preparation with the launch of a series of AI agents.

The new agents span the entire data lifecycle. The Data Engineering Agent in BigQuery automates complex pipeline creation through natural-language commands. A Data Science Agent transforms notebooks into intelligent workspaces that can autonomously perform machine-learning workflows. The enhanced Conversational Analytics Agent now includes a Code Interpreter that handles advanced Python analytics for business users.

"When I think about who is doing data engineering today, it's not just engineers; data analysts, data scientists, every data persona complains about how hard it is to find data, how hard it is to wrangle data, how hard it is to get access to high-quality data," Yasmeen Ahmad, managing director of Data Cloud at Google Cloud, told VentureBeat. "Most of the workflows that we hear about from our users are 80% mired in those toilsome jobs around data wrangling, data engineering and getting to good-quality data they can work with."

Targeting the data preparation bottleneck

Google built the Data Engineering Agent in BigQuery to create complex data pipelines through natural-language prompts. Users can describe multi-step workflows and the agent handles the technical implementation. This includes ingesting data from cloud storage, applying transformations and performing quality checks. The agent writes complex SQL and Python scripts automatically.
It handles anomaly detection, schedules pipelines and troubleshoots failures. These tasks traditionally require significant engineering expertise and ongoing maintenance.

The agent breaks down natural-language requests into multiple steps. First it understands the need to create connections to data sources. Then it creates appropriate table structures, loads data, identifies primary keys for joins, reasons over data-quality issues and applies cleaning functions.

"Ordinarily, that entire workflow would have been writing a lot of complex code for a data engineer and building this complex pipeline and then managing and iterating that code over time," Ahmad explained. "Now, with the data engineering agent, it can create new pipelines from natural language. It can modify existing pipelines. It can troubleshoot issues."

How enterprise data teams will work with the data agents

Data engineers are often a very hands-on group of people. The various tools commonly used to build a data pipeline, including data streaming, orchestration, quality and transformation, don't go away with the new data engineering agent.

"Engineers still are aware of those underlying tools, because what we see from how data people operate is, yes, they love the agent, and they actually see this agent as an expert, partner and a collaborator," Ahmad said. "But often our engineers actually want to see the code; they actually want to visually see the pipelines that have been created by these agents."

As such, while the data engineering agents can work autonomously, data engineers can see what the agent is doing. She explained that data professionals will often look at the code written by the agent and then make additional suggestions to the agent to further adjust or customize the data pipeline.

Building a data agent ecosystem with an API foundation

There are multiple vendors in the data space that are building out agentic AI workflows.
Startups like Altimate AI are building out specific agents for data workflows. Large vendors including Databricks, Snowflake and Microsoft are all building out their own respective agentic AI technologies that can help data professionals as well.

The Google approach is a little different in that it is building out its agentic AI services for data with its Gemini Data Agents API. It's an approach that can enable developers to embed Google's natural-language processing and code-interpretation capabilities into their own applications. This represents a shift from closed, first-party tools to an extensible platform approach.

"Behind the scenes for all of these agents, they're actually being built as a set of APIs," Ahmad said. "With those API services, we increasingly intend to make those APIs available to our partners."

The umbrella API service will publish foundational API services and agent APIs. Google has lighthouse preview programs where partners embed these APIs into their own interfaces, including notebook providers and ISV partners building data-pipeline tools.

What it means for enterprise data teams

For enterprises looking to lead in AI-driven data operations, this announcement signals an acceleration toward autonomous data workflows. These capabilities could provide significant competitive advantages in time-to-insight and resource efficiency. Organizations should evaluate their current data-team capacity and consider pilot programs for pipeline automation.

For enterprises planning later AI adoption, the integration of these capabilities into existing Google Cloud services changes the landscape. The infrastructure for advanced data agents becomes standard rather than premium. This shift potentially raises baseline expectations for data-platform capabilities across the industry.

Organizations must balance the efficiency gains against the need for oversight and control.
Google's transparency approach may provide a middle ground, but data leaders should develop governance frameworks for autonomous agent operations before widespread deployment. The emphasis on API availability indicates that custom agent development will become a competitive differentiator. Enterprises should consider how to leverage these foundational services to build domain-specific agents that address their unique business processes and data challenges.
[3]
Google unveils enterprise data science and engineering AI agents that provide real-time analysis - SiliconANGLE
In a bid to make the lives of enterprise data engineers, data scientists and developers easier, Google Cloud today announced the release of six new artificial intelligence agent tools.

The new tools include a new data engineering agent for BigQuery, to assist across the data lifecycle starting with preparation, and a data science agent for BigQuery Notebooks, which provides an intelligent workspace for data scientists to manage infrastructure. Business users will receive conversational analytics and a code interpreter that allows them to ask questions about data in natural language.

"To make this agentic reality possible, you need a different kind of data platform -- not a collection of siloed tools, but a single, unified, AI-native cloud," said Yasmeen Ahmad, managing director of Data Cloud, Google Cloud.

For data engineers, Google is introducing the Data Engineering Agent in BigQuery in preview, which will simplify and automate complex data pipelines. BigQuery is Google's cloud-based data warehouse that allows data engineers to store, manage and analyze massive datasets using Structured Query Language, with high-performance search at scale. BigQuery enables straightforward data analysis with familiar tools using standard SQL, a widely recognized database query language, making it accessible to many users. Its architecture is also optimized for swift query execution, allowing for the analysis of vast amounts of data in seconds or minutes, and it integrates easily with services such as Vertex AI for machine learning and Data Studio for data analytics.

Engineers can simply describe what they want done, such as "Create a pipeline to load a CSV file, cleanse the columns, and join it with another table," and the agent will generate and build the entire workflow.
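For a sense of what the agent is automating, a hand-written equivalent of that example prompt might look roughly like the pandas sketch below. The file contents and column names are hypothetical; in the real product the agent generates and maintains this kind of workflow from the natural-language description.

```python
import io

import pandas as pd

# Hypothetical CSV contents standing in for a file in cloud storage,
# with the kind of messy headers a cleansing step would fix.
orders_csv = io.StringIO(
    " Order ID ,Customer ID, Amount \n"
    "1001,C1,250\n"
    "1002,C2,125\n"
)

# 1. Load the CSV file.
orders = pd.read_csv(orders_csv)

# 2. Cleanse the columns: trim whitespace, normalize to snake_case.
orders.columns = orders.columns.str.strip().str.lower().str.replace(" ", "_")

# 3. Join with another table on the shared key.
customers = pd.DataFrame(
    {"customer_id": ["C1", "C2"], "region": ["EMEA", "APAC"]}
)
result = orders.merge(customers, on="customer_id", how="left")
```

The value of the agent is less in writing these few lines than in generating, scheduling and troubleshooting pipelines like this at production scale.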
The engineer can remain entirely hands-off until the work is done, then modify or fix it as they see fit, or iterate on it with further prompts.

Google is also launching Spanner Migration Agent in preview, which will allow engineers to quickly move data from legacy databases such as MySQL.

Data scientists will get access to what Google calls a "reimagined" AI-first Colab Enterprise Notebook experience available in BigQuery and Vertex AI, featuring a new Data Science Agent in preview powered by Gemini, the company's flagship AI model. Colab notebooks, short for Google Colaboratory, are a cloud-based interactive environment that allows data scientists to write, execute and share Python code through a web browser. Combined with an AI agent, Google said, users will be able to have the agent autonomously build entire analytical workflows, including exploratory data analysis, data cleaning, feature engineering and machine-learning predictions. The agent can plan, execute code and reason about the results as it proceeds before presenting its findings, just like another teammate, allowing the user to provide feedback and collaborate with it.

Last year, Google introduced a way for business users to "talk" to their data using natural language with the Conversational Analytics Agent. Today, the company is introducing Code Interpreter in preview. Code Interpreter allows business users to pose complex natural-language questions, which it transforms into executable Python code. This allows users with little or no technical knowledge to execute otherwise highly technical data-analysis workflows and receive important insights. "This enhancement supports the many critical business questions that go beyond what simple SQL can answer," explained Ahmad. The agent generates code, provides explanations and creates interactive visualizations.
Google today announced an update to Gemini CLI, an open-source AI agent that brings the power of Gemini to the command line, which introduces a connection between developers and their GitHub repositories. The command-line interface is the text-based way for developers to interact with a computer's operating system. It differs from the easy-to-use drag-and-drop, double-click icons of the desktop; instead, users must type out commands to get anything done. To ease this, Google's Gemini CLI agent types them out for the developer.

Today, Google introduced Gemini CLI GitHub Actions in beta, which automates code-repository actions such as triaging issues, reviewing pull requests and writing code. Google said the company is launching three open-source workflows that the agent will automate to help developers code faster. The new agent will help automate the overhead of managing new issues by helping analyze, label and prioritize incoming issues, providing focus on what matters. It can also provide insightful feedback on code changes by reviewing them for quality, style and correctness. This will free up reviewers to focus on more complex tasks. Finally, it will be possible to simply mention @gemini-cli within issues themselves to delegate tasks to the agent.

For example, a developer has a feature they want implemented and submits it as an issue in the tracker, such as "Add a menu item to the Options drop-down that allows users to toggle the app to Dark Mode." The agent will then reply to the mention, begin work, make the code changes, generate unit tests and submit them to the repository for review. The developer can then pull the changes and check the build to make sure they work properly, modify them as they see fit before committing them, or send them back to the agent for updates or modifications.
Google said these initial workflows are open-source and fully customizable, allowing developers to design and configure their own.
Google introduces a series of AI agents and tools to revolutionize data engineering, data science, and analytics, promising to streamline workflows and boost productivity for enterprise data teams.
Google has announced a series of groundbreaking AI agents and tools designed to transform how enterprises manage and analyze data. These innovations aim to address the longstanding challenges in data engineering, data science, and analytics, promising to streamline workflows and boost productivity for enterprise data teams [1][2][3].
At the heart of Google's announcement is the Data Engineering Agent in BigQuery, currently in preview. This AI-powered tool is designed to simplify and automate complex data pipelines. Data engineers can now describe their desired outcomes in natural language, and the agent will generate and build the entire workflow [3]. For instance, a simple prompt like "Create a pipeline to load a CSV file, cleanse the columns, and join it with another table" will result in the agent autonomously creating the pipeline [3].
Google is also introducing the Spanner Migration Agent, which facilitates quick data migration from legacy databases such as MySQL to Google's cloud-based solutions [3].
For data scientists, Google is launching a reimagined AI-first Colab Enterprise Notebook experience in BigQuery and Vertex AI. This includes a new Data Science Agent, powered by Google's flagship AI model, Gemini [3]. This agent can autonomously build entire analytical workflows, including exploratory data analysis, data cleaning, feature engineering, and machine learning predictions [2][3].
Google is expanding its Conversational Analytics Agent with a new Code Interpreter feature. This allows business users to pose complex questions in natural language, which the agent then transforms into executable Python code. This breakthrough enables non-technical users to perform advanced data analysis and gain critical insights without requiring coding skills [1][3].
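The kind of translation Code Interpreter performs can be illustrated with a toy example. The question-to-code mapping below is hard-coded and entirely hypothetical; in the real product, Gemini generates the Python from the user's question.

```python
import pandas as pd

# Hypothetical business data the user is asking about.
sales = pd.DataFrame({
    "region": ["EMEA", "EMEA", "APAC"],
    "revenue": [100, 300, 250],
})

# A business user asks in natural language:
question = "What is total revenue by region?"

# An interpreter of this kind would generate Python like the string
# below, execute it against the data, and explain the result.
generated_code = "answer = sales.groupby('region')['revenue'].sum()"
scope = {"sales": sales}
exec(generated_code, scope)
answer = scope["answer"]  # revenue totals per region
```

The point of the feature is that the user never sees (or needs to understand) the `groupby` line; they see the answer plus an explanation and, per Google, interactive visualizations.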
These AI agents promise to address what Google calls the "80% toil problem" in data preparation. Traditionally, data professionals spend a significant portion of their time on tedious tasks like data wrangling and pipeline creation. Google's AI agents aim to automate these processes, potentially freeing up data teams to focus on more strategic, high-value tasks [2].
Yasmeen Ahmad, Google's managing director of Data Cloud, emphasizes the transformative nature of these tools:
"The way we interact with data is undergoing a fundamental transformation, moving beyond human-led analysis to a collaborative partnership with intelligent agents" [1].
To support these advanced AI capabilities, Google has made significant enhancements to its data infrastructure. This includes adding a columnar engine to Spanner, its globally distributed database service, which reportedly speeds up analytical queries by 200x on live transactional data [1].
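The intuition behind a columnar engine can be shown in a few lines of NumPy: an analytical query reads only the columns it needs, and each operation runs over a whole column at once rather than row by row. This is a toy model of the idea, not a description of Spanner's internals, and the table values are made up.

```python
import numpy as np

# A tiny "table" stored column by column, the way a columnar engine lays it out.
table = {
    "amount": np.array([250.0, 125.0, 90.0, 40.0]),
    "status": np.array([1, 0, 1, 1]),          # 1 = closed, 0 = open
    "note":   np.array(["a", "b", "c", "d"]),  # never touched by this query
}

# Analytical query: total amount of closed orders.
# Only the two columns the query needs are scanned, and the compare,
# filter, and sum steps are each one vectorized pass over a column.
closed = table["status"] == 1
total = float(table["amount"][closed].sum())  # 250 + 90 + 40 = 380
```

A row-oriented engine would instead walk every full row, including the untouched `note` field; skipping irrelevant columns and vectorizing the scan is where the large analytical speedups come from.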
Google is also introducing autonomous vector embeddings and generation to BigQuery, automating the preparation and indexing of multimodal data for vector search [1].
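Under the hood, vector search of this kind boils down to comparing embeddings by similarity. A minimal cosine-similarity version, with made-up three-dimensional "embeddings" standing in for real model output, might look like this:

```python
import numpy as np

# Made-up 3-D embeddings; real systems use model-generated vectors
# with hundreds or thousands of dimensions, plus an index structure.
docs = {
    "shipping policy": np.array([0.9, 0.1, 0.0]),
    "refund policy":   np.array([0.1, 0.9, 0.2]),
    "press release":   np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, index, k=1):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:k]

# Embedding of a query like "how do I get my money back?" (made up).
query = np.array([0.2, 0.8, 0.1])
best = search(query, docs)
```

What products like BigQuery's vector search add on top of this core idea is generating the embeddings automatically and maintaining indexes so the nearest-neighbor lookup stays fast at scale.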
Google is taking an API-first approach with its Gemini Data Agents API, allowing developers to embed these AI capabilities into their own applications. This move towards an extensible platform could foster a rich ecosystem of AI-enhanced data tools [2].
Additionally, Google has updated its Gemini CLI, an open-source AI agent for command-line interactions, with new GitHub integrations. Gemini CLI GitHub Actions automates code repository actions such as issue triage, pull-request review, and delegated code changes complete with tests [3].
As these AI agents become more integrated into enterprise data workflows, they could significantly alter the landscape of data management and analysis. While promising increased efficiency and insights, organizations will need to balance these gains with the need for oversight and control [2].
The introduction of these AI agents by Google signals a shift towards more autonomous, AI-driven data operations in the enterprise world. As these tools become standard rather than premium offerings, they may raise baseline expectations for data platform capabilities across the industry [2].
Summarized by Navi