2 Sources
[1]
Google Search is now using AI to create interactive UI to answer your questions
In a move that could redefine the web, Google is testing AI-powered, UI-based answers for AI Mode. Until now, Google's AI Mode, an optional feature, has let you interact with a large language model using text or images. When you use AI Mode, Google responds with AI-generated content scraped from websites without their permission, along with a couple of links. The problem is that AI Mode might not be as helpful or interactive as some websites: Wikipedia and Investopedia, for example, have beautiful graphs and charts.

Now Google is integrating Gemini 3 into AI Mode, which allows the search engine to generate new UIs using the large language model. For example, if you're a science student or someone curious about gene expression in humans, you may want to learn about RNA transcription, the first step in the process. Text and images might not be enough, so Google says AI Mode can generate an RNA polymerase simulator that lets you see how the enzyme works in action.

This feature is very interesting, and it could redefine the web as we know it today. It could also disrupt the web economy, because Google's AI is now not only showing answers and websites but also generating code and creating beautiful user interfaces. AI Mode's interactive interface gives you another reason to stay on Google and avoid visiting links unless you want to do fact-based research, which most people don't.
[2]
Google wants AI to build web pages instead of just writing text
Google unveiled Generative UI on Monday, a technology that allows AI models to generate fully customized interactive interfaces in response to user prompts, powered by Gemini 3 Pro and rolling out in the Gemini app and Google Search's AI Mode to deliver dynamic experiences beyond static text responses. The core functionality of Generative UI involves creating diverse outputs such as web pages, interactive tools, games, and simulations based on any question or instruction provided by users. This approach shifts from conventional chatbot interactions, which typically output only text, to producing complete, interactive user interfaces tailored to specific needs. The rollout begins in the Gemini app, where users encounter these generated elements directly, and extends to Google Search's AI Mode, enhancing search results with interactive components.

A research paper titled "Generative UI: LLMs are Effective UI Generators," released alongside the announcement, details the evaluation process. Human evaluators reviewed AI-generated interfaces against standard large language model outputs, excluding generation speed as a variable. The results showed a strong preference for the interactive interfaces, indicating their effectiveness in user engagement and comprehension. This paper, authored by Google researchers including Fellow Yaniv Leviathan, provides empirical support for the technology's viability.

Within the Gemini app, Google tests two distinct implementations of Generative UI. The dynamic view leverages Gemini 3's coding abilities to design and code bespoke interfaces for each individual prompt. This process involves analyzing the prompt's context to adapt both the content presented and the interactive features included, ensuring relevance to the user's intent. For instance, the system generates code on the fly to build elements like buttons, forms, or visualizations that respond to user inputs in real time.
The visual layout implementation, by contrast, produces magazine-style views featuring modular interactive components. Users receive a structured layout resembling a digital publication, with sections that can be expanded, modified, or interacted with further. This format allows for visual storytelling combined with functionality, such as draggable elements or embedded simulations, making complex information more accessible through graphical means.

Google emphasizes the technology's ability to personalize outputs according to the audience. As stated in the company's research blog, "It customizes the experience with an understanding that explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult." This tailoring involves adjusting language complexity, visual aids, and interaction levels to match the recipient's knowledge and age, drawing on the model's contextual reasoning capabilities.

In Google Search, access to Generative UI occurs through AI Mode, limited to Google AI Pro and Ultra subscribers in the United States. Users activate it by choosing "Thinking" from the model dropdown menu, which then processes queries to generate tailored interactive tools and simulations. This integration enriches search experiences by providing hands-on explorations of topics, such as financial calculators or scientific models, directly within the search interface.

The underlying system combines Gemini 3 Pro with specific enhancements: tool access enables image generation and web search integrations, allowing the AI to incorporate real-time data and visuals into interfaces. Carefully crafted system instructions guide the model's behavior to align with user expectations, while post-processing steps correct common errors like layout inconsistencies or factual inaccuracies. These components work together to refine outputs before presentation.
To advance external research, Google developed the PAGEN dataset, comprising websites designed by experts across various domains. This collection serves as a benchmark for training and evaluating UI generation models. The dataset will soon become available to the broader research community, facilitating studies on AI-driven interface creation and improvement.

Current versions of Generative UI exhibit certain constraints. Generation times often exceed one minute, depending on the complexity of the prompt and interface required. Outputs occasionally contain inaccuracies, such as incorrect data representations or functional glitches, which Google identifies as active areas of research. Efforts focus on optimizing speed and reliability through iterative model updates and refined processing techniques.

This unveiling aligns with the launch of Gemini 3, Google's most advanced AI model to date. Gemini 3 Pro achieved a score of 1,501 on the LMArena leaderboard, outperforming prior iterations in overall performance metrics. On the GPQA Diamond benchmark, designed for PhD-level reasoning tasks, it reached 91.9 percent accuracy. Additionally, without external tools, it scored 37.5 percent on Humanity's Last Exam, a comprehensive test of advanced knowledge across disciplines.
Google introduces Generative UI technology powered by Gemini 3 Pro, enabling AI to create fully interactive web interfaces, tools, and simulations in response to user queries, marking a significant shift from traditional text-based AI responses.

Google has unveiled Generative UI, a groundbreaking technology that enables artificial intelligence to create fully customized interactive interfaces rather than traditional text responses. Powered by Gemini 3 Pro, this innovation represents a fundamental shift in how AI systems interact with users, moving beyond static chatbot conversations to dynamic, functional web experiences.[1][2]
The technology generates diverse outputs including web pages, interactive tools, games, and simulations based on user prompts. For example, when users inquire about complex scientific concepts like RNA transcription, Google's AI can create an RNA polymerase simulator, allowing hands-on exploration of how enzymes function rather than merely providing textual explanations.[1]
Google has deployed two distinct implementations of Generative UI within the Gemini app. The dynamic view leverages Gemini 3's coding capabilities to design and code bespoke interfaces for individual prompts, analyzing context to adapt both content and interactive features in real time. The visual layout implementation produces magazine-style views with modular interactive components, creating structured layouts resembling digital publications with expandable sections and embedded simulations.[2]
The underlying system combines Gemini 3 Pro with specific enhancements, including tool access for image generation and web search integrations, enabling real-time data incorporation. Carefully crafted system instructions guide the model's behavior, while post-processing steps correct layout inconsistencies and factual inaccuracies before presentation.[2]
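The announcement describes three stages: system instructions that steer the model, tool access for images and search data, and a post-processing pass that repairs common output errors. The sketch below illustrates how such a pipeline could be wired together; it is a minimal illustration only, with a stubbed model call, and every name in it (`generate_ui`, `GeneratedUI`, and so on) is invented for this example rather than taken from any real Google API.

```python
# Hypothetical sketch of a generative-UI pipeline as described in the
# announcement: instructions -> model call -> post-processing repair.
from dataclasses import dataclass, field

# Assumed system instructions; the real prompt is not public.
SYSTEM_INSTRUCTIONS = (
    "Generate a self-contained interactive HTML page answering the user's "
    "question. Prefer widgets (sliders, charts) over prose where helpful."
)

@dataclass
class GeneratedUI:
    html: str
    tool_calls: list = field(default_factory=list)  # e.g. image gen, web search

def call_model(system: str, prompt: str) -> GeneratedUI:
    # Stand-in for the LLM call; a real system would stream model-written
    # code and execute any declared tool calls before assembling the page.
    return GeneratedUI(html=f"<main><h1>{prompt}</h1><div id='widget'></div>")

def post_process(ui: GeneratedUI) -> GeneratedUI:
    # Repair pass: close an unclosed <main> tag, standing in for the
    # layout and factual corrections the announcement mentions.
    if "<main>" in ui.html and "</main>" not in ui.html:
        ui.html += "</main>"
    return ui

def generate_ui(prompt: str) -> str:
    return post_process(call_model(SYSTEM_INSTRUCTIONS, prompt)).html
```

The point of the structure is that generation and repair are separate stages, so the post-processor can be improved without retraining the model.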
A key feature of Generative UI is its ability to personalize outputs according to the intended audience. As Google researchers explain, the system understands that "explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult." This personalization involves adjusting language complexity, visual aids, and interaction levels based on the recipient's knowledge and age, utilizing the model's contextual reasoning capabilities.[2]
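To make the microbiome example concrete, audience-aware tailoring can be thought of as mapping the same topic to different content and feature sets. The heuristic and field names below are invented for illustration; they are not Google's actual logic, which is driven by the model's contextual reasoning rather than fixed rules.

```python
# Hypothetical illustration of audience tailoring: one topic, two
# different interface plans depending on who is asking.
def tailor_instructions(topic: str, audience: str) -> dict:
    """Return an invented 'interface plan' for the given audience."""
    if audience == "child":
        return {
            "topic": topic,
            "reading_level": "simple",
            "features": ["cartoon illustrations", "tap-to-reveal facts"],
        }
    return {
        "topic": topic,
        "reading_level": "advanced",
        "features": ["interactive diagram", "links to data sources"],
    }
```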
Currently, Generative UI is accessible through the Gemini app and Google Search's AI Mode, limited to Google AI Pro and Ultra subscribers in the United States. Users can activate the feature by selecting "Thinking" from the model dropdown menu, which processes queries to generate tailored interactive tools and simulations directly within the search interface.[2]
Google's research paper, "Generative UI: LLMs are Effective UI Generators," provides empirical support for the technology's effectiveness. Human evaluators consistently preferred AI-generated interactive interfaces over standard text outputs, demonstrating superior user engagement and comprehension. The research, authored by Google researchers including Fellow Yaniv Leviathan, excluded generation speed as a variable in order to focus on interface quality and functionality.[2]
Despite its innovative capabilities, Generative UI faces several constraints. Generation times often exceed one minute depending on prompt complexity, and outputs occasionally contain inaccuracies such as incorrect data representations or functional glitches. Google has identified these as active research areas, focusing on optimizing speed and reliability through iterative model updates and refined processing techniques.[2]
This development could significantly disrupt the traditional web economy. Google's AI now generates code and creates sophisticated user interfaces, potentially reducing users' motivation to visit original websites. The technology provides comprehensive, interactive experiences directly within Google's ecosystem, raising questions about web traffic distribution and content creator compensation.[1]
Summarized by Navi