Curated by THEOUTPOST
On Mon, 24 Mar, 4:01 PM UTC
12 Sources
[1]
Google is rolling out Gemini's real-time AI video features
Wes Davis is a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.

Google has started rolling out new AI features to Gemini Live that let it "see" your screen or through your smartphone camera and answer questions about either in real time, Google spokesperson Alex Joseph confirmed in an email to The Verge. The features come nearly a year after Google first demonstrated the "Project Astra" work that powers them. A Reddit user said the feature showed up on their Xiaomi phone, as spotted by 9to5Google, and the same user later posted a video demonstrating Gemini's new screen-reading ability. It's one of two features Google said in early March would "start rolling out to Gemini Advanced Subscribers as part of the Google One AI Premium plan" later in the month.

The other Astra capability rolling out now is live video, which lets Gemini interpret the feed from your smartphone camera in real time and answer questions about it. In a demonstration video Google published this month, a person uses the feature to ask Gemini for help deciding on a paint color for their freshly glazed pottery.

Google's rollout of these features is a fresh example of the company's big AI assistant lead: Amazon is preparing the limited early-access debut of its Alexa Plus upgrade, and Apple has delayed its upgraded Siri. Both are supposed to have capabilities similar to the ones Astra is starting to enable now. Meanwhile, Samsung still has Bixby, but Gemini is nevertheless the default assistant on its phones.
[2]
Google Gemini's Astra (screen sharing) rolls out on Android for some users
At MWC 2025, Google confirmed it was working on screen and video share capabilities for Gemini Live, codenamed "Project Astra". At the time, Google promised that the feature would begin rolling out soon, and now some users have spotted it in the wild. According to a video shared by a Reddit user who owns a Xiaomi phone with a Gemini Advanced subscription, you can now share your phone's screen with Gemini Live and ask questions about it.

The integration is interesting because you can ask the AI questions about what's on your screen and have a lengthy discussion, and Gemini will remember the context. For example, if you open Chrome, go to Wikipedia, and invoke Gemini Live, you can ask Gemini to summarize what you see on the screen. This means if you're browsing the Gross Domestic Product (GDP) page on Wikipedia, you can use Gemini to sum up the numbers or explain key economic terms to you. You can even ask it to read the page aloud, turn it into a melody, or rephrase it in a language of your choice. Gemini will understand the context because it can see what you're doing with your phone.

Google previously said that Gemini's screen-sharing capabilities would roll out only to Gemini Advanced subscribers; the subscription starts at $19.99 per month.
[3]
A Week After Apple's AI Flounders, Google's Gemini Can Now "See" the World on Phones
Gemini can tell you what it's looking at on your phone or through the camera way before Apple Intelligence can. Google has beaten Apple to the AI punch. We knew it was likely, what with the news that some of the most ambitious Apple Intelligence features had been delayed and the resulting Siri executive shakeups. But now Google is rolling out more contextual abilities to Gemini on select Android devices.

Google confirmed that it began rolling out Gemini's Project Astra-powered camera and screen-share capabilities this week. It's available for those paying for Gemini Advanced and those on the Google One AI Premium plan. The features let you share your screen like you're broadcasting it to a class, or use the live camera to get Gemini's input on stuff around you. You can take Gemini into the real world and use it to identify things, a la Google Lens, or get its guidance on your own projects; the example Google uses is having it help you shop for tiles at a store.

There are only a few reports scattered around the internet from users who can invoke the features thus far, including one from a user on Reddit, which makes this seem like less of a rollout than it is. I've confirmed with Google that the rollout is underway, though I've been unable to access it myself. The first user to report using it isn't even on a Google Pixel device. They're on a Xiaomi smartphone. 9to5Google has managed to get Gemini Live with screen share going, enough to show in a YouTube video.

Gemini's ability to "see" the world around it offers an edge. The iPhone has what Apple Intelligence calls "Visual Intelligence," which uses a camera to identify objects and text. Its current implementation is pretty limited and does not converse with you or engage in a back-and-forth about the colors of the tile the way that Gemini is shown doing in Google's demonstrations.
It'll be interesting to see how Google hones these abilities. Heck, maybe it will even finally convince users of its utility, which would only further drive the stinger into Apple for trailing behind. I saw Project Astra last year during Google's annual developer conference, where I experienced the camera identification features in action. It didn't move the needle for me then, but it's decently impressive that Google has pushed Gemini's ability to see through the camera lens less than a year after debuting it, when you consider how badly Apple has apparently fumbled similar features. I look forward to when live video and screen-sharing capabilities roll out to my Pixel 9 Pro so I can get a feel for how I'll apply them to my real life. Google didn't specify the devices that were getting them, but I'd check your Pixel or Samsung Galaxy device in the coming weeks.
[4]
Google makes good on Gemini Live promise a year later with real-time Astra
Summary: Gemini is evolving rapidly, with new live video AI features resembling Project Astra being added to the mix. A person with a Google One subscription on a Xiaomi device could share their screen in real time with Gemini overlays. Google's initial release of the new features will focus on Gemini Advanced subscribers with a Google One AI Premium plan.

Gemini has been Google's star child for the past couple of years, with new capabilities added almost every week. What started as a chatbot has now taken over Assistant's role, permeated Google Workspace, and also rivals other Search tools like Lens and Circle to Search. With so many overlapping tools existing concurrently, it's no surprise Google just added one more to the mix -- live video AI features for Gemini resembling Project Astra.

We'd forgive you if Project Astra is now forgotten since its unveiling nearly a year ago at Google's annual developer conference in May 2024. It represented the company's short-term vision for a real-time assistant capable of multimodal audio and video prompting. While Android XR for mixed reality glasses is still gaining traction, Gemini Live gave us the first taste of the future. Now, 9to5Google reports users are activating Gemini overlays for the Astra-powered camera and screen sharing in Live (via The Verge). Gemini already supports Live discussions about a screenshot from your device, but a Reddit user with a Google One subscription on a Xiaomi device found they suddenly could share their screen in real time with the AI. They could tap a button called Share screen with Live, just above the older Ask about screen prompt in the Gemini interface. Thereafter, you or anyone with this feature can ask about anything on screen.
Real-time audio and video Astra was spotted too

Going beyond screenshots: support for screen sharing rolling out widely isn't a surprise per se, because it is one of the AI features Google promised to roll out in early March. The same Redditor claims they had access for a limited time, but screen recording didn't work, perhaps due to privacy-related permissions. However, the search giant's initial release will focus on Gemini Advanced subscribers with a Google One AI Premium plan. Hopefully, limited demos will be available for free-tier users to drum up excitement for the subscription.

When prompting Astra, which uses real-time audio and video from your device's camera and mic, Gemini switches to a phone-call-style UI. Here, you can pause the feed if interrupted, switch cameras, toggle video prompting on or off, switch to screen sharing, or end the session. In our testing, neither Share screen with Live nor Astra was available, but the Reddit user claims they aren't a beta tester, and they didn't see an update for the Gemini app either. These signs suggest we are witnessing the early stages of a staggered worldwide release through server-side changes, provided Google keeps its word and follows through.
[5]
Google is still rolling out Astra camera and screen sharing to Gemini Live
Besides making 2.5 Pro (experimental) available to all users, Google on Saturday provided an update on Gemini Live's Astra camera and screen sharing capabilities. At the start of March, Google said Project Astra-powered screen and live camera sharing would "start rolling out to Gemini Advanced subscribers as part of the Google One AI Premium plan on Android devices later this month." It was also demoed to MWC 2025 attendees.

Some Gemini Live users started seeing those capabilities on March 22, and more reports have gradually emerged since then. The launch does not appear specific to Pixel or Samsung Galaxy devices, with Gemini Advanced being the main requirement. On Saturday evening, Google (via @GeminiApp) said: "...we see (and share) the excitement around this feature and are working hard to make it available to more people! The ability to share your camera or screen in Gemini Live conversations will continue to roll out and we'll provide updates on this page as the feature expands." There's another demo of it on the latest episode of the Waveform Podcast.

The rollout will presumably continue over the coming weeks, with Google only saying that users will start seeing Astra this month, not that it would be fully available in March. Google took several weeks to launch Gemini Live's 2.0 Flash upgrade: it started in February and was officially detailed with the March Pixel Feature Drop. That model update "made improvements in understanding and multilingual conversation," with users able to speak in "any combination of over 45 languages" without having to first specify one in settings.
[6]
Gemini can now see your screen and judge your tabs
These upgrades are powered by Project Astra, Google's AI R&D umbrella. Google's Gemini Live is finally getting the gift of sight. The tech giant has quietly begun rolling out features that transform your humble smartphone into an all-seeing eye for its AI assistant. The new abilities were uncovered by a Reddit user who later shared a video of the features in action. The upgrade lets Gemini peer through your screen or camera lens and process what it sees. The rollout marks the debut of Google's much-discussed and much-anticipated Project Astra.

Based on the video, Gemini's 'eyes' can analyze your screen in real time through a "Share screen with Live" button. Gemini has long been able to digest static screenshots, but the update maintains a continuous gaze on your screen, looking at whatever you're doing on your phone, for better or for worse. The other tool makes your phone's camera Gemini's eye. Google has demonstrated that the AI can precisely discern colors and objects. Whether the final product matches the platonic ideal of the demos isn't clear just yet.

The new feature is arriving first for Gemini Advanced subscribers paying $20 a month for the Google One plan with extra AI. The rollout is notably democratic in where the feature appears, though, judging from the Xiaomi phone shown by the Reddit user. Google had previously hinted that Pixel and Galaxy S25 owners would have faster or better access to Project Astra. Other AI assistants with similar seeing tools exist, but they are mostly tied to third-party apps like Microsoft Copilot, ChatGPT, Grok, and even Hugging Face's new HuggingSnap app. Having a real-time screen- and camera-connected AI built into Android would certainly help entice users interested in an AI assistant to at least try Gemini. And Google's timing in releasing the feature is notable as it tries to carve out a lead among AI assistants.
Though Amazon has been hyping its new "Alexa Plus" update, it has yet to arrive. Meanwhile, Apple's upgraded Siri has been delayed multiple times. That leaves Google with a temporary but very real lead in the AI assistant race. Gemini, for all its early hiccups and rebranding drama (RIP Bard), is now doing things that neither Alexa nor Siri can match for the moment. Google has promised that Project Astra will be the "next-generation assistant" everyone wants to use all day. So keep your (and Gemini's) eyes peeled for new features to arrive in the weeks ahead.
[7]
Google Starts Rolling Out AI That Uses Your Smartphone Camera to 'See' The World
Google has started rolling out "Project Astra," an AI that can see via a person's smartphone camera, enabling it to answer questions about the world around it. Project Astra, which was first announced almost a year ago, has begun showing up for some Gemini Live users. The Verge explains that the Astra features allow Google's AI model Gemini to interpret the live feed from the user's smartphone camera in real time. Google recently shared a demonstration video in which someone called Amy asks what paint color she should use on her newly fired vase.

It's an interesting exhibit and, if it works as well as Google makes out, it's a tool that might come in handy for photographers. It could be used to ask, similar to the pottery video, which colors would go well with a particular background, or even a technical question: which focal length to use, or how to operate a particular camera. In Google's announcement video for Astra last year, the AI model is taken around an office and asked all manner of questions, including where the user left her glasses -- that really could be useful. Astra also has a feature where it can "see" a user's screen. A Redditor shared a short demo of it where the gist of what it will do is clear. 9to5Google reports Project Astra will be rolled out slowly to users, so it might be worth checking your Gemini app to see if it is there.

Astra is a step toward a personalized AI assistant, which tech companies seem to believe will be the next big thing. In a leaked recording of a Meta meeting last month, CEO Mark Zuckerberg predicted that this year, a "highly intelligent and personalized" digital assistant will reach one billion users. "I think whoever gets there first is going to have a long-term, durable advantage towards building one of the most important products in history," Zuckerberg says in the recording, while also stating his belief that AI agents will start working for Meta, writing software.
[8]
Gemini app finally gets the world-understanding Project Astra update
At MWC 2025, Google confirmed that its experimental Project Astra assistant would roll out widely in March. It seems the feature has started reaching users, albeit in a phased manner, beginning with Android smartphones. On Reddit, one user shared a demo video that shows a new "Share Screen With Live" option when the Gemini assistant is summoned. Moreover, the Gemini Live interface also received two new options for live video and screen sharing. Google has also confirmed to The Verge that the aforementioned features are now on the rollout trajectory. So far, Gemini has only been capable of contextual on-screen awareness courtesy of the "Ask about screen" feature.

Project Astra is the future of Gemini AI

In case you aren't familiar, Project Astra is the most futuristic iteration of an AI that can understand text, audio, pictures, video, and live camera feeds in real time. Google DeepMind's research director, Greg Wayne, likened it to a "little parrot on your shoulder that's hanging out with you and talking to you about the world." When you summon Gemini and enable the Share Screen With Live option in any app, it will analyze the on-screen content and answer queries based on it. For example, users can ask it to describe the current activity happening in an app, break down or summarize any article they were reading, or talk to them about the scene during video playback.

The more impressive capability is the world-understanding system. When you switch to the Live mode for Gemini, it now shows a video feed option that opens the camera. In this mode, if you point your camera at any object, Gemini will be able to see, comprehend, and answer questions based on what it sees. Pointing the camera at a book passage, asking Gemini to tell you more about a monument, getting advice on decor, or solving problems written on a board or in a book -- Gemini Live's Project Astra upgrade can do it all.
It is not too different from Apple's Visual Intelligence on iPhones, or the open-source HuggingSnap app that promises an offline world-understanding AI. Digital Trends got a demo of Project Astra at MWC earlier this year, getting an early taste of a massively upgraded AI assistant experience on smartphones. It is worth pointing out that Gemini Live's Project Astra upgrade will be limited to customers who pay for a Gemini Advanced subscription. It seems the scale of this Project Astra rollout isn't too wide at the moment. I have a Gemini Advanced subscription via the Google One AI Premium bundle, but I don't see it yet on any of my Pixel phones running the latest stable version of Android 15 or the beta build of Android 16.
[9]
Google is Rolling Out Live Video and Screen Sharing to Gemini Live
You can soon share your screen and use your camera to provide real-time information to Gemini. Google is reportedly rolling out new features for Gemini to analyse a user's surroundings in real time using the device's camera or screen, providing instant answers to questions. These features are a part of Project Astra, which was announced by the company last year. A user on Reddit claimed to have received the update on Gemini and shared a video demonstrating how the feature works. This feature is a part of Gemini Live, which lets users converse with the AI tool in real time using natural language. Later, a spokesperson from Google confirmed to The Verge that the company has indeed started rolling out the features.

Recently, at the Mobile World Congress in Barcelona, the company announced that 'live video' and 'screen sharing' capabilities will start rolling out to Gemini Advanced subscribers as part of the Google One AI Premium plan on Android devices this month. In December last year, the company announced more improvements and features for Project Astra, such as better dialogue, latency, and memory capabilities and the ability to use external tools. At the same time, the company also unveiled the Multimodal Live API with Gemini 2.0, which could read information from the user's screen and offer real-time advice. OpenAI already has a feature for ChatGPT Plus and Pro subscribers to use the camera to feed real-time visual information to the advanced voice mode feature.

Recently, Google announced Audio Overviews for Gemini, which brings NotebookLM's capabilities to the AI assistant. The company also announced native image generation and image editing features for the Gemini 2.0 Flash model. Meanwhile, Google announced Gemma 3, the next iteration in the Gemma family of open-weight models -- a successor to the Gemma 2 model released last year. In the Chatbot Arena, where models are evaluated against each other through side-by-side evaluations by humans, Gemma 3 27B outperformed DeepSeek-V3, OpenAI's o3-mini, and Meta's Llama 3-405B model.
[10]
Gemini Live can finally see what's on your screen
According to The Verge, Google has begun rolling out real-time screen and camera analysis features for Gemini Live, available now to select Google One AI Premium subscribers. The update, powered by last year's Project Astra, enables the AI to interpret visual inputs and answer questions on the fly, a company spokesperson confirmed. A Reddit user demonstrated Gemini's screen-reading capability on a Xiaomi device this week, sharing a video of the AI identifying app icons and summarizing content. 9to5Google first spotted the post, which aligns with Google's March announcement that these features would launch by month's end. In a company-published example, a user points their phone camera at a pottery piece and asks Gemini to suggest paint colors. The AI analyzes the glaze texture and surface area before recommending options. The rollout gives Google an edge as rivals scramble: Amazon's Alexa Plus enters limited testing soon, Apple delayed its Siri overhaul, and Samsung -- despite maintaining Bixby -- defaults to Gemini on its devices. All aim to match Astra's real-time visual processing, which Google deployed nearly a year after its initial demo.
[11]
You Might Soon Get to Try Out Gemini Live Video, Screen-Sharing Features
Gemini Live video and screen-sharing features were also previewed at MWC 2025. Google is reportedly rolling out its two major Gemini features -- live video and screen sharing. The Mountain View-based tech giant first unveiled these features at Google I/O 2024. Developed by Google DeepMind under Project Astra, these features come with live multimodal data processing capabilities and allow the artificial intelligence (AI) chatbot to answer queries about the user's device and their surroundings in real time. The company had previously said that these new features would be rolling out by March. Notably, these features are currently only available to Gemini Advanced subscribers on the mobile apps.

First spotted by 9to5Google, Reddit user Kien_PS recently posted a screenshot on the Bard (Gemini's older name) subreddit showcasing the "Share-screen with live" feature. The same user later posted a demo video of the feature on Sunday, highlighting how it works. Separately, Google spokesperson Alex Joseph told The Verge that the new AI features are rolling out to Gemini Live.

Apart from screen sharing, Gemini will also be able to access the user's device camera and answer queries about whatever the user sees in real time. This real-time data processing capability will allow users to ask Gemini for outfit suggestions by showing it their wardrobe, or to identify a monument or a store when outdoors. The screen-sharing feature, an enhanced version of the existing "Talk about the screen" feature, will let Gemini help the user as they navigate across various screens on their smartphone. Both of these features are part of Gemini Live, which was rolled out to users last year and can hold a two-way live voice conversation with users. Google had previously said that it wants to make Gemini more useful in real-time situations.
Notably, the Gemini Live video feature is similar to OpenAI's Advanced Voice Mode with Vision feature for ChatGPT and the real-time video feature in Ray-Ban Meta Smart Glasses. As AI and the underlying infrastructure behind the technology advance, and cloud servers get more capable, tech giants can now offer faster inference for real-time use cases. These two Gemini features are only available to Gemini Advanced subscribers currently, and the company has not shared any information on when, or if, they will be expanded to the free tier. The Gemini Advanced subscription can be purchased as part of the Google One AI Premium plan at the price of Rs. 1,950.
[12]
The Gemini App Gets 'Project Astra' Support For Android Users
You can share your screen or camera with Gemini and voice chat with near-zero latency. At MWC 2025, Google demonstrated Project Astra in Gemini and said the much-anticipated AI feature would be available to Android users by the end of March. And finally, Google is walking the talk. A Reddit user has revealed that Project Astra is already live on their Xiaomi phone.

In case you are unaware, Project Astra lets you share your screen or camera so Gemini can see and voice chat in real time. In the demo, Gemini is able to see the phone's screen and chat with the user in real time. To share your camera, you can open Gemini Live and show your surroundings to interact with Gemini instantly. Google has confirmed to The Verge that the feature is indeed rolling out. Note that Google has integrated Project Astra into the Gemini app, and it's only available to Google One AI Premium subscribers; the plan costs $19.99 per month. Currently, it's limited to Android phones.

OpenAI has also rolled out screen sharing and live camera access on ChatGPT, at $20 per month. Microsoft Copilot, on the other hand, is offering the same feature for free in the US; some users have already received access to Copilot Vision on Android and iOS without any subscription. While Google is late to the party, we hope the feature is worth the wait. It can assist users while studying and can analyze on-screen information and data. You can also use Project Astra while traveling: point the camera around your surroundings and ask questions to get a response instantly.
Google has begun rolling out new AI features to Gemini Live, allowing it to analyze live video feeds and shared screens in real-time. This development, part of Project Astra, is currently available to Gemini Advanced subscribers.
Google has begun rolling out new artificial intelligence features to Gemini Live, enabling the AI to analyze live video feeds and shared screens in real time. This development, part of the company's Project Astra initiative, marks a significant advancement in AI-powered digital assistance [1].
The new features, which are being gradually introduced to Gemini Advanced subscribers as part of the Google One AI Premium plan, include:
Screen Sharing: Users can share their device screens with Gemini, allowing the AI to answer questions about on-screen content in real time [2].
Live Video Analysis: Gemini can interpret the feed from a smartphone camera in real time, offering insights and answering questions about what it "sees" [1].
Early reports from users point to a range of practical applications: summarizing articles on screen, identifying objects and monuments through the camera, and getting suggestions such as paint colors for freshly glazed pottery.
The rollout of these features appears to be gradual and not limited to specific device brands: the first user to report access was on a Xiaomi phone rather than a Pixel, and Gemini Advanced appears to be the main requirement.
This development positions Google ahead of competitors in the AI assistant space: Amazon's Alexa Plus upgrade has yet to debut beyond limited early access, and Apple has delayed its upgraded Siri.
As these features continue to roll out, they represent a significant step forward in the integration of AI into everyday digital interactions, potentially reshaping how users engage with their devices and access information.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved