Google's Gemini AI assistant preps Agentic Mode and Live Experimental Features for Android


Google is developing major upgrades to Gemini Live including multimodal memory, proactive responses, and new agentic capabilities that let the assistant control phone tasks. Code strings reveal Live Thinking Mode and Deep Research enhancements, signaling Google's push to replace Google Assistant with a more capable universal AI assistant across Android devices.

Google Advances Gemini AI Assistant With New Capabilities

Google is developing substantial upgrades to its Gemini AI assistant for Android, with code discoveries revealing plans for Live Experimental Features, Agentic Mode, and enhanced voice interaction capabilities [2]. These AI developments signal the tech giant's continued effort to transform its assistant into what it calls a universal AI assistant, building on ambitions first outlined in Project Astra last year [1].

Source: Gadgets 360

Analysis of code strings in the latest Google app for Android points to a new Gemini Labs section, mirroring the experimental approach of Google Labs. The discoveries reference "assistant robin," Google's internal designation for the Gemini AI assistant, alongside multiple feature enhancements currently under development [2].

Upgrades to Gemini Live Expand Voice Interaction

The most significant upgrades to Gemini Live include multimodal memory, improved noise handling, the ability to respond proactively when it sees something, and personalized results based on a user's Google apps [2]. Multimodal memory could enable the voice assistant to retain information from on-device interactions or camera feeds even after the visual input disappears, creating a more contextually aware experience.

Source: Android Authority

Better noise handling appears designed to reduce ambient interference during conversations, while proactive responses suggest the assistant might initiate interactions based on visual cues without explicit user prompts. These capabilities would mark a substantial evolution from current reactive AI models to more anticipatory systems that understand context and user needs in real time.

Live Thinking Mode Brings Deliberate Reasoning to Conversations

Google is developing Live Thinking Mode, described as "a version of Gemini Live that takes time to think and provide more detailed responses" [2]. While Thinking Mode already exists in the Gemini app, integrating it into the Live experience would allow users to request more thorough, reasoned answers during real-time voice conversations. This hybrid approach could bridge the gap between quick conversational responses and deep analytical thinking.

Additionally, enhancements to Deep Research are referenced, with code mentioning the ability to "delegate complex research tasks" [2]. Though specific functionalities remain unclear, this suggests users might assign multi-step research projects to the assistant, which could autonomously gather and synthesize information.

Agentic Options Enable Direct Control of Phone Tasks

Perhaps most transformative are the agentic functions referenced in UI Control strings stating "Agent controls phone to complete tasks" [2]. These agentic capabilities would allow the Gemini AI assistant to execute task automations directly on Android devices on behalf of users, moving beyond simple voice commands to autonomous action-taking.

While the specific tasks remain unspecified, this aligns with broader industry trends toward agentic AI systems that can navigate applications, manage workflows, and complete multi-step processes independently. Such functionality would represent a significant step toward the vision outlined in Project Astra: a universal AI assistant capable of working seamlessly across phone applications [1].

What This Means for Android Users

These developments matter because they signal Google's determination to replace Google Assistant with a fundamentally more capable system. For Android users, this could mean transitioning from an assistant that responds to commands to one that anticipates needs, remembers context across interactions, and takes autonomous actions to complete complex tasks.

However, it's important to note that code strings don't guarantee feature releases. Developers sometimes use these as placeholders or exploration spaces that never materialize into actual products [2]. The timeline for any official announcements remains uncertain, keeping observers watching for Google's next move in the competitive AI assistant landscape.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited