2 Sources
[1]
You can now use Gemini without leaving your apps, thanks to split-screen multitasking
Google's Gemini expands into split-screen multitasking, blending AI with phone apps

Google is rolling out a major update to its Gemini AI platform that changes how mobile users interact with artificial intelligence on Android devices. With its latest enhancement, Gemini can now operate in split-screen mode alongside other apps, allowing the AI assistant to work in context with what's on your phone screen - without forcing you to switch between apps.

Bringing AI into your workflow

Traditionally, AI assistants on smartphones have existed in separate interfaces: you open a chat window, ask a question, then switch back to your app of choice once you've received an answer. Google's new split-screen implementation breaks that pattern. Now, Gemini can appear alongside another app in a dedicated pane, actively assisting you as you work.

For example, while composing an email or message, Gemini can suggest phrasing, refine text, or draft replies in real time. If you're reading a long article or document in a browser, the AI can pull out key points or summaries without interrupting your reading flow. In messaging apps, users can ask Gemini to help with reply suggestions or generate quick responses based on the conversation visible on the screen.

This update is part of Google's broader effort to make its AI tools more assistive - not just reactive. Instead of waiting for a user to ask a question, Gemini can now be a contextual partner that actively contributes to your tasks.

Already rolling out to select Android devices and compatible apps, the split-screen feature shows up as an option to "open Gemini" alongside supported applications. Once activated, the AI pane remains visible and interactive while the primary app stays in view.

An important shift in mobile AI design

This move reflects a broader shift in how manufacturers and developers are thinking about artificial intelligence on mobile platforms. Instead of treating AI as a separate service that users dip into occasionally, companies like Google are moving toward AI-augmented multitasking, where generative intelligence becomes part of everyday mobile workflows.

Competitors such as Apple and Microsoft have also signaled interest in deeper AI integration into their respective operating systems. Microsoft is exploring AI tools within Windows apps, while Apple is preparing its on-device AI services in iOS. Google's split-screen implementation represents one of the more advanced examples of contextual AI integration on Android so far.

For users, this evolution means less context switching. You no longer need to copy text from one app, open a separate AI interface, and paste it back - Gemini can be right there beside your content, understanding what you're doing and suggesting enhancements on the fly.

The benefits may seem subtle at first glance, but they're significant in practice. Streamlining tasks like drafting replies, summarizing long content, or generating ideas can save time and reduce friction in routine workflows. Students researching topics, professionals juggling communication, or casual users trying to extract insights from articles will find the new split-screen Gemini a practical addition. Privacy-minded individuals will also appreciate that Gemini's split-screen tools work within the context of their existing apps, rather than funneling data through separate windows or services.
What's next for Gemini and mobile AI

Google's rollout is still in the early stages, and not all devices or apps support the split-screen feature yet. But the groundwork has been laid for even deeper integrations, where third-party apps might expose richer interfaces that Gemini can use to provide more tailored assistance. Developers could eventually allow Gemini access to app content in structured ways, similar to desktop AI plugins.

As AI becomes more embedded into operating systems, experiences like split-screen multitasking may soon become commonplace, blurring the line between app and assistant. Google's latest move with Gemini hints at a future where your phone's AI doesn't just answer questions - it helps you get things done.
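To make the "dedicated pane" behavior described above concrete, the sketch below shows the standard Android multi-window API that any app can use to detect and adapt to split-screen. The isInMultiWindowMode property and onMultiWindowModeChanged callback are real platform APIs; everything else here (EditorActivity, the layout helpers) is a hypothetical illustration, not Google's Gemini code.

```kotlin
// Illustrative only: the standard Android multi-window API that any app can
// use to adapt when it shares the screen (for example, next to an assistant
// pane). EditorActivity and the layout helpers are hypothetical.
import android.content.res.Configuration
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class EditorActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // setContentView(R.layout.activity_editor) // hypothetical layout
        if (isInMultiWindowMode) {
            // Launched directly into split-screen.
            useCompactLayout()
        }
    }

    // Called whenever the activity enters or leaves split-screen.
    override fun onMultiWindowModeChanged(
        isInMultiWindowMode: Boolean,
        newConfig: Configuration
    ) {
        super.onMultiWindowModeChanged(isInMultiWindowMode, newConfig)
        if (isInMultiWindowMode) useCompactLayout() else useFullLayout()
    }

    private fun useCompactLayout() { /* hide secondary toolbars, etc. */ }
    private fun useFullLayout() { /* restore the regular layout */ }
}
```

An activity that reflows its UI this way keeps working normally whether or not an assistant pane occupies the other half of the screen.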
[2]
Gemini's Split-Screen Feature Might Be Coming to Non-Foldable Smartphones
The feature is reportedly being rolled out in a phased manner

Google is reportedly expanding the split-screen compatibility of the Gemini app to non-foldable smartphones. As per the report, Gemini can now operate side-by-side in split-screen mode even on regular handsets. So far, this capability has been limited to larger displays, such as foldables and tablets. But it appears that the Mountain View-based tech giant is slowly expanding it to all devices. The multitasking feature reportedly also lets users share their screen and ask the artificial intelligence (AI) chatbot questions about the second app in the split view.

Gemini in Split-Screen Mode Could Be Coming to Regular Smartphones

According to an Android Authority report, the tech giant is slowly expanding Gemini's split-screen compatibility. Currently, non-foldable phone users can still open the Gemini app in split-screen view. However, the functionality is very limited, and users cannot use two apps side-by-side when one of them is Gemini. The report claims that with the latest Google app version 17.5.42.ve.arm64, this is changing. The publication tested the feature on a Pixel 9 and found that the Gemini app can be opened side-by-side in split-screen view.

Not only that, but a new feature reportedly activates when the AI assistant enters this mode. The report claims that instead of a blank screen, users will see a new option on the home screen of the app dubbed "Share screen and app content." Activating this button will let Gemini see what's open in the other app and answer queries about the app and the content displayed on the screen. This screen-share support is said to be smoother than the overlay option that blocks the screen while Gemini answers queries.

As per the report, the feature has two different behaviours depending on whether the second app is a browser or not. For browsers and similar apps, Gemini reportedly captures the URL and answers questions based on that. If the app is not web-based, the AI assistant instead takes a screenshot and analyses the visual information to answer questions. In the latter case, it also blacks out its own interface so that the screenshot does not confuse the AI.

Do note, Gadgets 360 staff members were not able to spot the feature on their devices, including the recently launched iQOO 15. It could be that the tech giant is slowly expanding the feature to more devices; however, unless Google makes an announcement, it cannot be said for sure that the company plans to expand it beyond Pixel phones at this time.
Google is expanding Gemini AI's split-screen feature beyond foldables to regular Android smartphones. The update allows users to run Gemini alongside another app, enabling contextual assistance without switching apps. A new "Share screen and app content" option lets the AI chatbot analyze content on the second screen and answer queries in real time.
Google is expanding its Gemini AI platform with a significant update that transforms how users interact with artificial intelligence on their smartphones. The split-screen feature, previously limited to foldables and tablets, is now rolling out to non-foldable Android devices in a phased manner [2]. This enhancement allows Gemini to operate side-by-side with other apps, eliminating the need to constantly switch between interfaces while working [1].
[Image credit: Gadgets 360]
The update fundamentally changes mobile AI multitasking by enabling users to run Gemini alongside another app in a dedicated pane. Testing on devices like the Pixel 9 revealed that when activated in split-screen mode, users see a new "Share screen and app content" option on the home screen [2]. This capability allows the AI assistant to actively contribute to tasks rather than simply responding to isolated queries.

Traditional AI assistants on smartphones required users to open a separate chat window, ask questions, then return to their primary app. Google's Gemini split-screen multitasking breaks this pattern entirely. While composing emails or messages, Gemini can suggest phrasing, refine text, or draft replies in real time without interrupting the user interface [1]. When reading articles or documents in a browser, the chatbot can extract key points or provide summaries while the content remains visible.

The feature demonstrates two distinct behaviors depending on the application type. For browsers and similar apps, Gemini captures the URL and answers questions based on web content. For non-web-based apps, the AI assistant takes screenshots and analyzes visual information, blacking out its own interface during analysis to avoid confusion [2]. This contextual assistance without app switching significantly reduces friction in routine workflows.
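A minimal sketch of that two-branch routing follows, assuming invented names throughout: ScreenContext, captureSecondApp, and the stubbed capture helpers are illustrative only and do not reflect Google's actual implementation.

```kotlin
// Hypothetical sketch of the two-branch routing described above. Every name
// here (ScreenContext, captureSecondApp, the stub helpers) is invented for
// illustration and does not reflect Google's actual implementation.
sealed class ScreenContext {
    // Browser case: the page URL alone gives the model its context.
    data class WebPage(val url: String) : ScreenContext()

    // Any other app: raw pixels, analyzed as an image.
    class AppScreenshot(val image: ByteArray) : ScreenContext()
}

fun captureSecondApp(isWebBased: Boolean): ScreenContext =
    if (isWebBased) {
        ScreenContext.WebPage(url = currentTabUrl())
    } else {
        // The assistant blanks its own pane first so its half of the
        // screen doesn't end up in the capture and confuse the model.
        blankAssistantPane()
        ScreenContext.AppScreenshot(image = takeScreenshot())
    }

// Stubs standing in for platform capabilities:
fun currentTabUrl(): String = "https://example.com/article"
fun blankAssistantPane() { /* hide the assistant's own UI */ }
fun takeScreenshot(): ByteArray = ByteArray(0)
```

The design choice the report describes makes sense under this framing: a URL is cheap, precise context for web content, while a screenshot is the fallback that works for any app at the cost of visual-only analysis.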
This move reflects how manufacturers and developers are rethinking mobile AI. Instead of treating artificial intelligence as a separate service accessed occasionally, companies like Google are moving toward intelligence seamlessly embedded into the operating system, where generative capabilities become part of everyday tasks [1]. Competitors including Apple and Microsoft have signaled similar intentions, with Microsoft exploring AI tools within Windows apps and Apple preparing on-device AI services in iOS.

Google's implementation represents one of the more advanced examples of split-screen AI integration on Android so far. Students researching topics, professionals managing communication, or casual users extracting insights from articles will find practical value in having Gemini analyze content on the second screen without context switching [1].
The feature is currently available through Google app version 17.5.42.ve.arm64, though the rollout remains limited [2]. Not all devices or apps support the split-screen feature yet, and some users have reported being unable to access it on non-Pixel devices. This suggests Google may be testing the capability before wider expansion across Android and potentially other operating systems.

Looking ahead, the groundwork has been laid for even deeper integrations where third-party apps might expose richer interfaces that Gemini can use for more tailored assistance. Developers could eventually allow structured access to app content, similar to desktop AI plugins [1].
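What such "structured access" could look like remains speculative; no Gemini developer API of this kind has been announced. As a purely hypothetical illustration, an app might implement an interface along these lines (AssistantContentProvider and ContentSnapshot are invented names):

```kotlin
// Purely speculative sketch of "structured access to app content"; no such
// Gemini API exists. AssistantContentProvider and ContentSnapshot are
// invented names used only to illustrate the idea.
interface AssistantContentProvider {
    // Describe what the app is currently showing in machine-readable form,
    // so an assistant can reason over structure instead of raw pixels.
    fun snapshot(): ContentSnapshot
}

data class ContentSnapshot(
    val title: String,            // e.g. the open document's name
    val bodyText: String,         // the visible text content
    val selection: String? = null // whatever the user has highlighted
)

// A note-taking app might implement it like this:
class NotesScreen : AssistantContentProvider {
    private val currentNote = "Meeting notes: ship the Q3 report..."

    override fun snapshot() = ContentSnapshot(
        title = "Untitled note",
        bodyText = currentNote
    )
}
```

Structured snapshots like this would let an assistant answer questions about app content without the screenshot-and-analyze fallback described earlier.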
As AI becomes more embedded, experiences like split-screen multitasking may blur the line between app and assistant, creating an AI-augmented workflow where your phone's intelligence actively helps complete tasks rather than merely answering questions. Watch for announcements from Google regarding broader device compatibility and potential API access for developers seeking to integrate this contextual assistance into their applications.

Summarized by Navi