Gemini task automation goes live on Galaxy S26, handling food orders and ride bookings for users

Reviewed by Nidhi Govil


Google's Gemini AI assistant can now automate tasks within apps on Samsung's Galaxy S26 series. Users can order food or book rides using simple voice commands while Gemini handles the workflow in the background. The feature supports apps like Uber Eats, DoorDash, and Starbucks, though users must confirm final purchases for security.

Gemini Task Automation Arrives on Galaxy S26

Google and Samsung have officially launched task automation for Gemini on the Galaxy S26 series, marking a significant step forward in AI integration on Android devices [1]. The feature, which was showcased during Samsung's Galaxy Unpacked event but wasn't immediately available, is now rolling out in beta form to users of the new flagship phones [2]. This capability transforms how users interact with apps, allowing the AI assistant to act on the user's behalf and handle complex workflows through simple voice commands.

Source: Digit

The screen automation feature enables Gemini AI on Samsung devices to take control of specific Android apps and complete multi-step tasks without manual intervention. Instead of opening apps and navigating through multiple screens, users can issue straightforward requests like "Get me a ride to the airport" or "Order a coffee and a croissant," and Gemini handles the entire process in a virtual window [1][2].

How the AI Assistant Automates Tasks Within Apps

When users activate Gemini to order food and book rides, the system works through several intelligent steps. In testing, the AI assistant successfully navigated Uber's interface, asking clarifying questions like which airport to select before adding destinations and skipping unnecessary steps such as airline specifications [1].

Source: 9to5Google

For food delivery orders, Gemini can scroll through extensive menus, locate specific items, and even make contextual decisions, such as requesting that a chocolate croissant be warmed rather than served cold from the pastry case [1].

Source: The Verge

Users receive notifications when Gemini is actively working on a task, with the option to watch the automation happen in real time or continue using their phone for other activities [2]. The system operates in the background, adding items to carts and preparing orders while handling the workflow efficiently [4].

Security Measures and Checkout Limitations

For security purposes, Google has implemented safeguards that prevent Gemini from completing final transactions autonomously. The AI assistant will build your cart, skip promotional add-on pages, and navigate to the final checkout screen, but it stops short of pressing the "place order" or "pay" button [2][4]. When the process reaches this stage, the phone sends a notification with a strong vibration, prompting users to review details and confirm the purchase themselves [4].

This approach balances convenience with user control, ensuring that no unauthorized purchases occur while still eliminating the tedious steps of manual app navigation. In practical terms, Gemini handles the grunt work while users maintain final authority over financial transactions [4].

Supported Apps and Current Limitations

At launch, the beta feature supports a limited but practical selection of apps focused on food delivery and rideshare services. The current list includes Uber, Lyft, Grubhub, DoorDash, Uber Eats, and Starbucks [3][4]. Users must have these apps installed on their phones for Gemini to access and automate tasks within them [3].

A list of compatible apps appears in Gemini's settings, customized to show only the apps currently installed on each device [4]. While other obvious candidates like Instacart aren't yet available, Google has opened the door for additional apps to join the system as the feature expands [4].

Device Availability and Geographic Rollout

The task automation feature is initially launching on the Galaxy S26 series and is also set to arrive on Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL devices, though the Pixel rollout doesn't appear to be live yet [2][4]. Geographic availability is currently limited, with the United States and South Korea among the first markets to receive access [2].

Users can check for the feature by looking for "screen automation" in the Gemini app's settings; this indicator signals that the rollout has reached their device [4]. The feature represents a marked improvement from just a year ago, when AI assistants would struggle with basic calendar details [1].

What This Means for AI Assistant Evolution

This development signals a shift from AI assistants that merely answer questions to ones that actively complete tasks on behalf of users. The ability to handle app control through natural language represents the kind of functionality that has been promised for years but is only now becoming practical reality [1]. While the feature isn't necessarily faster than manual app use for single tasks, its true value lies in enabling users to initiate processes through voice commands while multitasking [4].

As Google continues testing and refinement, users should watch for expanded app support and improved reliability. Early testing has revealed occasional issues, including one instance where the automation preview locked a device into fullscreen mode, requiring a forced reboot. These growing pains are typical for beta features, and the system's core functionality appears to work as intended in most scenarios [1].

TheOutpost.ai