Curated by THEOUTPOST
On Thu, 3 Apr, 12:01 AM UTC
21 Sources
[1]
Gemini Live Isn't Just Conversational AI -- It Now Has Eyes. I Tried It Out
There I was, walking around my apartment, taking a video with my phone and talking to Google's Gemini Live. I was giving the AI a tour - and a quiz, asking it to name specific objects it saw. After it identified the flowers in a vase in my living room (chamomile and dianthus, by the way), I tried a curveball: I asked it to tell me where I'd left a pair of scissors. "I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?" It was right, and I was wowed.

Gemini Live will recognize a whole lot more than household odds and ends. Google says it'll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition. It's more than just a souped-up Google Lens. You talk with it and it talks to you. I didn't need to speak to Gemini in any particular way - it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.

Google and Samsung are just now starting to formally roll out the feature to all Pixel 9 and Galaxy S25 phones. It's available for free on those devices, and other Pixel phones can access it via a Google AI Premium subscription. Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there's now a dedicated page for it on the Google Store. All you have to do to get started is go live with Gemini, enable the camera and start talking.

Gemini Live follows on from Google's Project Astra, first revealed last year as possibly the company's biggest "we're in the future" feature: an experimental next step for generative AI capabilities, beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically increase the skills of their tools, from video generation to raw processing power. Similar to Gemini Live is Apple's Visual Intelligence, which the iPhone maker released in beta form late last year. While it works differently from Google's offering, it wouldn't be surprising if the iPhone eventually gained a similar feature.

My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds together just by pointing a camera at almost anything.

Somehow Gemini Live showed up on my Pixel 9 Pro XL a few days early, so I've already had a chance to play around with it. The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera's view. The second time, I showed it to a friend in an art gallery. It not only identified the tortoise on a cross (don't ask me), but it also immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think. In the tour of my apartment, I was following the lead of the demo Google did last summer when it first showed off these live video AI capabilities.
I tried random objects in my apartment (fruit, books, Chapstick), many of which it easily identified. Then I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I'm a huge fan of the horror genre -- movies, TV shows, video games -- and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff -- like my horror-themed collectibles?

First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately neither I nor it benefited from this.

Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was pretty surprised when it immediately guessed one of my test objects was not only from Destiny 2, but was a limited edition from a seasonal event last year. At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.

Gemini can get full-on bugged out at times. On more than one occasion, Gemini misidentified one of the items as a made-up character from the unreleased game Silent Hill f, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was when Gemini would produce an incorrect answer, and I would correct it and hint closer at the answer -- or straight-up give it the answer -- only to have it repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn't always helpful.

One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct, and then went live again from that chat, it would be able to identify the items without issue. While that's not necessarily surprising, it was interesting to see that some conversations worked better than others, even when I used the same language. Google didn't respond to my requests for more information on how Gemini Live works.

I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below are a series of objects I tried to get Gemini to identify and provide information about.
[2]
Gemini Live Can Now 'See' What You Show It on Galaxy S25, Pixel 9
The update rolls out Monday, starting with Samsung Galaxy S25 and Google Pixel 9 phones, at no additional cost. It's also available to Gemini Advanced users on Android via the Gemini app.

Gemini Live, Google's AI model that's baked into the latest Samsung Galaxy and Pixel phones, lets you have a back-and-forth conversation for tasks like brainstorming or practicing for an interview. Now, you can also show Gemini Live what you're looking at through your camera or by sharing your screen. So if you're piecing together an outfit, reorganizing your closet or trying to make a purchasing decision, Gemini can offer feedback.

To activate Gemini Live, press and hold the side power button on your Galaxy S25, then tap the three lines with the star in the bottom corner of the bar. On the Galaxy S25 and Pixel 9, you can also go into the Gemini app and tap those same three vertical lines at the bottom, or say "Hey Google" to activate the AI assistant. Prior to this update, you could also upload a photo to Ask Gemini (also triggered by holding down the power button) to include with a voice or text prompt, but the Gemini Live feature allows for real-time back-and-forth.

Visual AI has become a staple of various artificial intelligence models, from ChatGPT to Apple's Visual Intelligence tool to Google Lens. Samsung has more deeply embedded Gemini into its latest Galaxy phones, and this new visual capability potentially bolsters its goal of making the Galaxy S25 a holistic "AI companion." Google, too, has increasingly touted the AI capabilities of its Pixel lineup, including the new budget-friendly Pixel 9A. So, when your real-life friends aren't picking up the phone to offer feedback on your outfit, at least Gemini can potentially fill the void.
[3]
Gemini Live's real-time camera mode gets a wider release
Summary: Project Astra is a supercharged version of Gemini Live with real-time video capabilities. Google is now widely rolling out Gemini Live with Astra capabilities to paid subscribers. The feature will work on Android 10+ devices.

Google unveiled Project Astra at I/O 2024, giving us a glimpse into future AI-powered assistants that could analyze real-time video feeds. After a few months of silence, Google confirmed at MWC 2025 that it would release Project Astra as a Gemini Live capability in March. Sticking to its promised timeline, Google pushed out this Gemini Live improvement to select users in late March. Now, a couple of weeks later, it seems the company has commenced a wider rollout of Gemini Live's multimodal audio and video prompting.

9to5Google reports that several Android users are seeing an overlay about sharing their phone's camera feed or screen when triggering Gemini Live. The pop-up describes Project Astra as a way to "talk through ideas, learn about your environment, or get help with what's on your screen." There are also several reports on Reddit from users confirming they have access to Gemini Live's Astra capabilities on their phones. In my case, the camera and screen-sharing buttons appeared on my Xiaomi 15 Ultra today without the overlay. In a post on X, Google also teased that the feature will soon roll out to more users.

With Project Astra, you can interact with Gemini while showing a real-time camera feed and asking relevant questions. For example, you can ask Gemini for help with fixing a broken appliance or with your homework. Alternatively, you can share your phone's screen with Gemini and have a conversation with the AI assistant based on what you are doing.

Gemini Live Astra requires a Gemini Advanced subscription. While you can access Gemini 2.5 Pro, Google's most advanced AI model, in the Google app, Gemini Live and Project Astra use the older Gemini 2.0 model. Initially, Gemini Live's Astra capabilities seemed limited to the Pixels and the Samsung Galaxy S25 series. However, the company recently updated its support document to reflect that Gemini Live's video streaming and screen-sharing capabilities will work on Android 10+ devices. That said, the feature is only available to Gemini Advanced or Google One AI Premium subscribers. If you are on the correct paid Google One tier and don't have access to Gemini Live's Astra features yet, ensure you have installed the latest Google app version on your phone. Also, try force-stopping the Google app and relaunching it to see if that enables the Astra capabilities.
[4]
Samsung brings Google Gemini's Project Astra mode to the Galaxy S25 series
Summary: Gemini Live's live-video mode is expanding to more Android devices, now rolling out to the Galaxy S25 series. Initial tests indicate the "live" video is snapshot-based, acting like conversational Google Lens rather than continuous analysis. Samsung says the feature is rolling out to all S25 series devices at "no additional cost."

Google Gemini Live's real-time camera feed analysis capabilities, essentially the I/O 2024-announced Project Astra, are now starting to land on more Android devices. The multimodal assistant feature, which is able to understand audio and video prompts at the same time, first started landing on select devices roughly two weeks ago. Fast-forward to late last week, the feature began going live for some Pixel users, and it is now starting to make its presence known on Samsung's latest flagship.

Rolling out now to the Galaxy S25 series, "Real-Time Visual AI" via Gemini Live is now free for all Galaxy S25 series users, even if they don't have a Gemini Advanced subscription. According to Samsung, for S25 users, the functionality comes "at no additional cost." It is currently uncertain whether that's just a sly way of saying that 'your new device already comes with a free Gemini Advanced promo,' and hence the functionality comes "at no additional cost," or if Gemini Live's real-time camera feed analysis is indeed free to use now for non-Advanced users. For what it's worth, I can see the video mode live on an account that is subscribed to Gemini Advanced. My free account, on the other hand, still hasn't surfaced the new feature.

The tool acts like Google Lens with an assistant attached. Although welcome, the current implementation feels like a patchwork of different technologies. In our brief time using it, the 'live' aspect of the video mode wasn't truly consistent, and it felt as though the feature wasn't constantly watching and analyzing the video feed. Instead, the feature seems to capture a snapshot of what the camera sees at the moment when a query is posed (a minimal code sketch of that snapshot-style call follows below). For example, asking Gemini Live to count fingers held up in real time shows that the tool doesn't continuously process the video feed. It seemingly bases its answer on the snapshot it takes when queried. So essentially, in its current implementation, Gemini Live's video mode might just be Google Lens with a conversational assistant attached to it.

Similarly, in other environments, like in a car, it was able to work out that I was in a moving vehicle (passenger seat), though it made up the name of the street I was approaching. In a different example, it said that I was approaching a road that was actually nearby, but that I wasn't driving towards. While not entirely certain, it could be that the AI tool grabs information from Google Maps when in dynamic environments like a moving vehicle. According to Google, it intends to extend the functionality to Android 10+ devices in the near future.
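That snapshot-per-query pattern is easy to picture in code. The sketch below uses Google's public generative AI client SDK for Android to send a single camera frame plus a question to a Gemini model. It illustrates the behavior described above, not Gemini Live's actual internals; the model name, API-key wiring and function name are assumptions.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Hypothetical helper: answers a question about a single frame, mirroring
// the snapshot-on-query behavior observed above. One frame in, one answer out.
suspend fun askAboutFrame(frame: Bitmap, question: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // placeholder; any vision-capable model
        apiKey = "YOUR_API_KEY"         // placeholder
    )
    val response = model.generateContent(
        content {
            image(frame)   // the one snapshot captured when the query is posed
            text(question) // e.g. "How many fingers am I holding up?"
        }
    )
    return response.text
}
```

If Gemini Live works anything like this, the finger-counting failure follows naturally: only the frame captured at query time reaches the model, so anything that changes afterwards is invisible to it.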
[5]
Gemini's Project Astra live camera mode won't just be a Pixel exclusive
Summary: Gemini Live video and screen sharing features will be available on any Android device with Android 10 or later. You need a Gemini Advanced subscription, part of the $20/month Google One AI Premium plan, to use these features. Access to Gemini Live's tools is rolling out gradually, offering seamless integration thanks to being built into the Gemini app.

At this year's Mobile World Congress, Google finally confirmed a long-awaited Gemini AI feature it previously teased as Project Astra. This Gemini Live capability lets users analyze real-time video and share their screens effortlessly. Despite earlier belief that it would be exclusive to the Pixel and Galaxy S25 phones, Google has set the record straight: Gemini Live's video streaming and screen-sharing capabilities work on any Android device.

Google's updated support page has officially made it clear that Gemini Live's video streaming and screen sharing work on any Android device running Android 10 or later (via 9to5Google). This update contradicts earlier marketing that made it seem like these Project Astra-based features were exclusive to the latest Pixel and Galaxy flagships. Turns out, way more devices can use them than we first thought.

Gemini Advanced subscription required. According to the support page, these features are available on any Android phone, but only if you have a Gemini Advanced subscription, which is part of the $20/month Google One AI Premium plan. Rollout is happening in stages, so some users might have to wait a bit before they get access. Google hasn't outright said whether Gemini Live's video features work across different Android form factors like phones, tablets, or foldables. But all signs point to yes. Since a Gemini Advanced subscription is required -- and the Gemini app only runs on Android 10 or later -- any device meeting that criteria should be good to go.

Gemini Live taps into Gemini's multimodal smarts to process real-time video from your camera, letting you share live visuals with the AI. Meanwhile, screen sharing works just as you'd expect -- letting Gemini analyze whatever's on your screen, whether it's a website, app, or anything else you need help with. While Google's move isn't exactly breaking new ground (ChatGPT's Advanced Voice Mode has had similar video and screen sharing features since last year), what sets Gemini apart is the seamless integration. You won't need to bounce between apps, since everything is built right into the Gemini app.
[6]
Gemini Live widely rolling out camera & screen sharing on Pixel 9, S25 series
After starting in March, Google is now officially rolling out camera and screen sharing in Gemini Live for Android to the Pixel 9 and Galaxy S25 series. To date, Gemini's conversational mode has only accepted voice, image, PDF, or YouTube video input. Thanks to Project Astra, Gemini Live can now see what's on your screen.

Upon sharing your display, you can navigate and scroll to something and ask questions about it. One way to quickly access this is by launching the Gemini overlay and tapping the new "Share screen with Live" chip. Android will make users confirm that they want to share their entire screen with the Google app, which powers Gemini. (The "Share one app" option has been disabled.) This consent flow is sketched in code after this article. There's a call-style notification that shows a live count next to the time in the status bar. Tapping that pill opens the fullscreen Gemini Live experience. You also get a blue/purple Gemini waveform at the bottom of the screen, and there's a subtle vibration before Gemini Live starts responding to your query. Another way to launch is by opening Gemini Live as you normally would and hitting the screen share button. A small redesign shrinks the circular buttons into smaller pills.

Meanwhile, you can share your rear camera feed with Gemini Live to have a conversation about what you're seeing in the real world. A live preview will take up most of the screen, and you can switch to the front-facing camera from the bottom-right corner of the viewfinder. Google notes: "For better results, capture objects with steady movements." There's also a "To interrupt Gemini, tap or start talking" reminder. The screen has to be active for Gemini Live to receive video.

Google first teased Project Astra -- DeepMind's goal to build a "universal AI agent that is helpful in everyday life" -- last May at I/O 2024. New features made possible by Gemini 2.0 were detailed that December, and Google talked about it again in January at the Galaxy S25 launch. As of today, Google is bringing these Astra-powered Live capabilities to "more people, starting with all Gemini app users on Pixel 9 (officially, it's a one-feature Pixel Drop) and Samsung Galaxy S25 devices." Force stop the Gemini/Google app to get it. It will "soon" be available to all Gemini Advanced subscribers ($19.99 per month Google One AI Premium) on Android devices.
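The whole-screen confirmation described above matches how Android's standard MediaProjection API behaves: the system, not the requesting app, draws the consent dialog, and capture only starts once the user grants it. Below is a minimal sketch of that request flow for any app. It is an assumption that Gemini uses this same public mechanism, and the class and method names here are illustrative, not Google's code.

```kotlin
import android.content.Context
import android.media.projection.MediaProjectionManager
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class ShareScreenActivity : ComponentActivity() {
    private val projectionManager by lazy {
        getSystemService(Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
    }

    // The system shows its own "share entire screen?" dialog; the result
    // carries the user's grant, from which a MediaProjection is obtained.
    private val consentLauncher =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
            if (result.resultCode == RESULT_OK && result.data != null) {
                val projection =
                    projectionManager.getMediaProjection(result.resultCode, result.data!!)
                // Screen frames can now be mirrored into a VirtualDisplay
                // and fed to whatever component does the analysis.
            }
        }

    fun requestScreenShare() {
        consentLauncher.launch(projectionManager.createScreenCaptureIntent())
    }
}
```

Notably, ongoing screen capture on modern Android must run in a foreground service with a persistent notification, which lines up with the call-style status bar pill described above.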
[7]
Gemini Live Video hands-on: The world-facing camera we've waited for [Video]
Google has been rolling out Gemini Live's visual Astra-powered video functions super slowly, but the release is starting to expand. Here's what it's like to use and what you need to know.

The company is suggesting that the arrival of the camera and screen sharing controls is part of an "April 2025 Pixel Drop," but a non-scheduled set of features feels odd given that the function will work on practically any Android phone. When you launch the Gemini app on your Android phone, a pop-up will indicate that Gemini Live's Astra-powered features are ready to test. The mini pop-up says you can "talk through ideas, learn about your environment" or "get help with what's on your screen." The latter refers to the screen-sharing function, which is like a souped-up version of Circle to Search.

To access any of the new visual modes, you will need a Google One AI Premium subscription plan. If you have a Pixel 9 Pro, Pro XL, Pro Fold, or some Galaxy S25 models, you will have received a substantial free trial of this paid tier: 12 months on Pixel 9 Pro models and 6 months on S25 devices.

Google has given us a couple of ways to access the new controls. The easiest is via the dedicated Gemini app. When you launch Gemini Live, the usual call-style UI gains an extra couple of buttons: a camera button and a presentation icon. You can use your voice with the "Hey, Google" wake phrase. Above the compact Gemini pop-up bar you'll see a "Share screen with Live" tappable button that will start AI-powered screen sharing. You are not able to select a single app yet. Instead, you have to share your entire screen, which might be intrusive for some people. A status bar chip will indicate that this is active. Tapping it allows you to close or end your session quickly.

The initial start-up and introduction is very fast, and the viewfinder that opens is very easy to decipher. It somewhat mimics the Pixel camera UI, so it's instantly familiar to anyone who has used Google phones for a while. You don't have to use the rear camera. There is a toggle to switch to the selfie camera if you want to put yourself or your background in the frame and ask questions or advice.

When using the Gemini Live Astra mode, I have found that focus in the camera is a little finicky, as lenses switching automatically can be annoying up close. It advises you to stay still or keep the subject as still as possible. I can attest to this, but despite some subjects not being in focus, Gemini is quick to determine what something is or produce information without too much difficulty. Not being able to zoom in, even with dedicated telephoto lenses on your phone, also feels shortsighted. If you want to point out something in the world, you're going to have to get very close up, or hope that Gemini can work out what you're pointing at.

For simple or basic queries, it's perfectly adequate. However, the best use cases are deeper questions seeking help or advice on objects, areas, and locations, rather than asking simple things you probably could search for yourself. A prime example is getting the calorific data on food items or allergen information. I even tried some real-time translations, but while they seemed to be fairly accurate, Gemini Live does not give or show on-screen text. You only get audio cues and answers. Google Lens overlays the translated text, so it may be a better option for translating signs, text, and more. At the moment, you have to wait until you close a session before you can "see" some of the text-based responses.
So you do get a chat log of what was said, what Gemini responded, and any actions you can take. This could be a little better implemented, but for now it's fine. In tandem with other apps like Google Maps and various other services, you could probably use this as a visual learning aid or tutorial option.

I tested Gemini Live's Astra-style video functions on the Pixel 9 Pro XL, and although it wasn't always instant, it felt pretty smooth, and responses were almost always fast enough to feel fluid. Screen sharing feels a little less "natural" than using a camera. Because you get no visual feedback, merely audio cues and information, it doesn't feel integrated in the same way. Getting webpage summaries, or further information without leaving a page or screen, is about the extent of the feature's usability. I'd wager this will change as we get more integrations. Think the ability to add things to shopping lists with URLs, or flight information to your calendar when making travel plans. We can't do any of that yet.

Like almost all AI platforms, Gemini is not a perfect system. In fact, it can get things wrong in lots of scenarios, though it does well with information recall. I found that certain items caused issues where Gemini wasn't able to give me accurate information. This is going to be fine for things you know about, since you can effectively interrupt and course-correct the wayward AI. The problem is that hallucinations about things you might not know all that well could lead to trouble. For instance, I asked about a Game Boy game cartridge, and Gemini misidentified the cart and gave me wildly incorrect information about the title and gameplay. Making sure you have a clear view of something within the viewfinder helps to mitigate this; just adjusting where I "filmed" instantly resolved the problem without intervention. You simply can't trust the information blindly at this stage. Gemini is still prone to errors, and they range from minor to major in equal measure.

Android XR and the AR platform will rely heavily on Gemini as the means to interact with and get information about the world around us. Gemini Live utilizing Astra functions is the first step in realizing that end goal. Per the demos shown late last year, we might be a few years away from Google-made AR glasses, but this early introduction isn't a bad way to kick things off.

Like any AI product, be careful about putting 100% of your trust in the information it spews out. That said, this seems like a great start and a solid way to interact with the world around you, or to get help when you get stuck. It'll improve over time and get better as more data points are introduced, so at least in theory, this is the worst that Gemini Live's video modes will ever be - which isn't all that bad to begin with. Sharing your screen is a nice secondary option, but it is far more limited at this stage. If it can develop to play nicely with more of your other services, it'll be a useful tool. Right now, it's a parlor trick that does little more than regular Gemini Live.
[8]
Google Gemini Live brings AI-powered vision to Galaxy S25 and Pixel 9 -- here's how it works
Gemini can now answer questions about what your camera sees in real time.

Google first released Gemini Live last year, letting users have "free-flowing, hands-free conversation" with AI. As of today, Google has started rolling out a major update to Gemini Live with features called Gemini Live Video and screensharing. These features are coming to the Google Pixel 9, Samsung Galaxy S25 and Gemini Advanced subscribers.

Gemini Live Video is similar to Apple's Visual Intelligence. You point your phone's camera at things in the real world, and Gemini's multi-modal capabilities let you ask questions to learn more about them. The mode is available at the push of a button, and I like to think of it as a next-generation version of Google Lens. But rather than having to press the shutter button every time you want something scanned or analyzed, Gemini Live Video does everything continuously. That means it's ready to respond to your verbal queries right away (a sketch below shows one simple way an app can keep a camera frame ready for the moment a question arrives).

Screensharing is pretty self-explanatory, and lets you share whatever's on your phone screen with Gemini. Like Live Video, this means you can ask questions about what's on a specific website or app. Google isn't exactly breaking new ground with any of these features, and alternatives have existed from various other companies. Not only does Apple Intelligence offer a similar AI vision feature, ChatGPT's Advanced Voice Mode has offered live video and screensharing options since last year. But having more choice is always a good thing, especially if you're already a happy owner of either a Pixel 9 or Galaxy S25. Both phones will get Live Video and screensharing capabilities free of charge, as part of the Gemini app. Other Android phones are said to be getting these features before the end of the month, but they'll be locked behind the Gemini Advanced subscription. That'll cost you $20 a month, and there's no word on whether it might eventually be available to free users.

Live Video and Screensharing have both appeared on my Pixel 9 Pro after updating the Gemini app, and seem to be functioning as promised. I do have a Gemini Advanced subscription, though, and I'm not sure if that makes any difference. You can easily check for yourself by opening the Gemini app and loading up Gemini Live from the search bar. If you have the Video and Screensharing icons at the bottom of the Live interface, then you should be good to go.
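To make the "ready right away" behavior concrete: a common pattern is to keep only the newest camera frame cached, so a spoken question can be answered against it immediately, without a shutter press. The CameraX sketch below shows that pattern; it is an illustration of the idea, not Google's pipeline, and `latestFrame` and `buildAnalyzer` are hypothetical names.

```kotlin
import android.graphics.Bitmap
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors
import java.util.concurrent.atomic.AtomicReference

// Cache holding only the newest frame; a voice query can grab it instantly
// and hand it to a multimodal model. Illustrative only.
val latestFrame = AtomicReference<Bitmap?>()

fun buildAnalyzer(): ImageAnalysis {
    val analysis = ImageAnalysis.Builder()
        // Drop stale frames instead of queueing them.
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    analysis.setAnalyzer(Executors.newSingleThreadExecutor()) { proxy: ImageProxy ->
        latestFrame.set(proxy.toBitmap()) // toBitmap() requires CameraX 1.3+
        proxy.close()                     // release the buffer promptly
    }
    return analysis
}
```

Bound to a camera lifecycle, this keeps the app visually "live" while spending a model call only when the user actually asks something, which also fits the snapshot-style behavior other hands-on reports describe.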
[9]
Google confirms Gemini Live's next big AI upgrade will be widely available on Android - with one catch
Real-time camera and screen access is being added to Gemini Live.

Last year we got a tantalising preview of Project Astra, Google's next-gen, multi-modal AI upgrade that interprets the world through your phone's camera. Now it seems the upgrade will be appearing on more Android phones than originally thought.

According to a Google support article (via 9to5Google), the ability to share your camera or screen with Gemini Live inside the Gemini app - key aspects of the Project Astra update - is going to be available on "any Android device with Gemini Advanced". There had been rumors that this enhanced functionality was going to be exclusive to Gemini running on flagship Google Pixel and Samsung Galaxy devices, but it appears that won't be the case. The catch is, you'll need to pay $19.99 / £18.99 / AU$32.99 a month for Gemini Advanced.

The features are actually in the process of rolling out now, as we reported last week, but Google advises that "these features are being released gradually, so they might not be available to you just yet". So far we haven't seen them pop up on the phones of the TechRadar team.

There are so many modes and sub-modes in Google's Gemini AI chatbot that you'd be forgiven for being a little confused about what each one does. Gemini Live is the more natural, conversational chat mode on mobile that lets you interrupt it while speaking in a more human-like way than the standard Gemini voice mode. The upgrade on the way now will give Gemini Live access to your camera feed and phone screen, so you can ask it about anything you're looking at in the real world or on your mobile device.

The early demos look promising, with Gemini identifying objects, remembering locations, and even solving math problems. Gemini can already analyze images and screenshots you give to it and identify what's in them, but the upgrade will make all of this a real-time experience you can call on as you make your way around the world. You can access the core Gemini Live features - minus the new Project Astra upgrades - with or without a Gemini Advanced subscription. In the Gemini app for Android or iOS, tap the sound wave icon on the far right, next to the text input box.
[10]
Samsung Introduces Real-Time Visual AI on Galaxy S25 Series With Gemini Live Update
Beginning April 7, Galaxy S25 series users can experience new visual conversation capabilities through a free software update.

Samsung Electronics today announced the rollout of a new AI experience with Gemini Live, bringing real-time visual conversations with AI to Galaxy users. The feature will begin rolling out on April 7, starting with the Galaxy S25 series, available to all users free of charge.

Through AI-powered assistance, Galaxy users can more naturally engage in conversational interactions that make everyday tasks easier. Just press and hold the side button to show Gemini Live what you see while simultaneously interacting with it in a live conversation. Imagine picking out an outfit or reorganizing a closet: Gemini Live can now make those everyday decisions easier. By simply pointing the camera, users can get suggestions on how to categorize items and optimize space, or share their screen while browsing online retailers to receive personalized style advice. With the ability to see what the user sees and respond in real time, the Galaxy S25 series feels like a trusted friend who's always ready to help.

"Together with Google, we are marking a bold step toward the future of mobile AI, delivering smarter interactions that are deeply in sync with how we live, work and communicate," said Jay Kim, Executive Vice President and Head of Customer Experience Office, Mobile eXperience Business at Samsung Electronics. "With this new visual capability, Galaxy S25 series brings next-generation AI experiences to life, setting new standards for how users engage with the world through their devices."

On April 7, Gemini Live with camera and screen sharing capabilities will start rolling out to all Galaxy S25 series users at no additional cost. For more information about the Galaxy S25 series, please visit Samsung Newsroom, Samsungmobilepress.com and Samsung.com.
[11]
Your Samsung Galaxy S25 just got a huge free Gemini upgrade that gives your AI assistant eyes
Visual AI allows users to ask questions about what they see in real time on screen or via camera access.

Samsung just announced that all Galaxy S25 users will get a brand-new Visual AI experience on their smartphones, and it's starting to roll out today - April 7, 2025 - as an update. The upgrade, powered by Google Gemini, allows S25, S25+, and S25 Ultra smartphone owners to have 'real-time visual and conversational interactions' with Gemini Live.

In the press release published on Samsung's website, the company gives multiple examples of how Visual AI can help users get even more from Gemini, which is built into the best Galaxy smartphones. With Visual AI, you can grant Gemini Live access to your camera and screen sharing, and the AI voice assistant will be able to tell you what you're looking at. Samsung says the new upgrade to Gemini Live means the AI can 'have a real-time conversation with users about what it sees - making everyday tasks easier.' If you own a Samsung S25 device, the free update is starting to roll out today. So, you'll want to keep your eyes peeled for an update to take full advantage of this new Visual AI feature.

"Together with Google, we are marking a bold step toward the future of mobile AI, delivering smarter interactions that are deeply in sync with how we live, work and communicate," said Jay Kim, Executive Vice President and Head of Customer Experience Office, Mobile eXperience Business at Samsung Electronics. The partnership between the two companies is stronger than ever, with Galaxy AI deeply intertwined with Google Gemini on S25 devices. Kim added, "With this new visual capability, Galaxy S25 series brings next-generation AI experiences to life, setting new standards for how users engage with the world through their devices."

I'm excited to try Visual AI, as I absolutely love the ability to ask Gemini questions about what's on my screen via text on Android. Adding that functionality to Gemini Live will make S25 devices feel even more alive, taking a huge leap towards a future where we all have an AI voice assistant in our pockets capable of managing the mundane elements of life.
[12]
We've tried Google Pixel 9's new Gemini Astra upgrade, and users are in for a real treat
The powerful AI tool is free, and it arrived on Samsung S25 devices yesterday.

Google Pixel 9, 9 Pro, and 9a owners just got a huge free Gemini upgrade that adds impressive Astra capabilities to their smartphones. As we reported yesterday (April 7), Gemini Visual AI capabilities have started to roll out for Samsung S25 devices, and now Pixel 9s are also getting the awesome features.

So what is Gemini Astra? Well, you can now launch Gemini Live and grant it access to your camera, and it can then chat about what you see as well as what's on your smartphone's screen. Gemini Astra has been hinted at for a long time, and it's immensely exciting to get access to it via a free update. You should see the option to access Gemini's Astra capabilities from the Gemini Live interface. If you don't have access yet, be patient, as it should be available to all Pixel 9 users in the coming days.

While I don't personally have access to a Google Pixel 9 to test Gemini Live's Astra prowess, my colleague and TechRadar's Senior AI Editor, Graham Barlow, does. I asked him to test out Gemini Astra and give me his first impressions of the new Pixel 9 AI capabilities, and you can see what he made of it below.

Once you're in Gemini Live you'll notice two new icons at the bottom of the screen - a camera icon and a screen share icon. Tap the camera icon and Gemini switches to a camera mode, showing you video of what your phone is looking at, but the Gemini Live icons remain at the bottom of the screen. There's also a camera reverse button, so you can get Gemini to look directly at you. I tapped that, and asked Gemini what it thought of my hair, to which it replied that my hair was "a lovely natural brown color". Gee, thanks Gemini!

I tested Gemini Live with a few objects on my desk - a water bottle, a magazine, and a laptop, all of which it identified correctly and could tell me about. I pointed the phone at the window towards a fairly nondescript car park and asked Gemini which city I was in, and it instantly, and correctly, told me it was Bath, UK, because the architecture was quite distinctive, and there was a lot of greenery.

Gemini can't use Google Search while going live, so for now it's great for brainstorming, chatting, coming up with ideas, or simply identifying what you're looking at. For example, it could chat with me about Metallica, and successfully identified the Kirk Hammett Funko Pop I've got on my desk, but it couldn't go online and find out how much it would cost to buy.

The screen share icon comes up with a message prompting you to share the screen with Google, then when you say "Share screen" it puts a little Gemini window at the top of the screen that looks like the phone call window you get when you start to use your phone while you're on a call. As you start to interact with your phone the window minimizes even further into a tiny red time counter that counts how long you've been live for. You can keep using your phone and talking to Gemini at the same time, so you could ask it, "What am I looking at?", and it will describe what's on your phone screen, or "Where are my Bluetooth settings?", and it will tell you which parts of the Settings app to look in. It's pretty impressive.

One thing it can't do, though, is interact with your phone in any way, so if you ask it to take you to the Bluetooth settings it can't do it, but it will tell you what to tap to get you there. Overall I'm impressed by how well Gemini Live works in both of these new modes.
We've had features like Google Lens that can use your camera like this for a while now, but having it all inside the Gemini app is way more convenient. It's fast, it's bug-free, and it just works.
[13]
You don't need a Pixel to experience Google Gemini's live camera mode
Earlier this year, Google announced a long-awaited Gemini AI feature, previously known as Project Astra. Recently, it has been confirmed that Gemini Live's camera and screen-sharing features will not be exclusive to Pixel devices. As uncovered by 9to5Google, a Google support article states that camera and screen sharing in Gemini Live will be available on "any Android device with Gemini Advanced." This means that the tools will be compatible with any device running Android 10 or later. Initially, this feature was believed to be only available on Pixel devices and possibly the Samsung Galaxy S25 series.

The new features are now being rolled out, and a "Share Screen with Live" button is included above the existing "Ask about screen" suggestion in the Gemini overlay. This allows users to share their entire screen. Additionally, the update introduces real-time camera capabilities, which can be accessed by opening the Gemini Live interface and starting a video stream. With the live camera feature, users can point their phones at objects and ask Gemini questions about what they see, such as identifying landmarks, getting decor advice, solving written problems, or analyzing text in books. The screen-sharing capability enables Gemini to examine and discuss whatever is displayed, maintaining context throughout the conversation.

To access these new capabilities, you need a Gemini Advanced subscription, which is part of the Google One AI Premium plan and costs $19.99/month. (A free one-year subscription to Gemini Advanced comes with new Pixels.) It's anyone's guess when these new features will roll out to your phone. However, they should begin arriving sooner rather than later, so stay tuned. At the time of this writing, my Pixel 9 Pro XL has yet to receive the update.

These features significantly advance Google's AI assistant capabilities, bringing Gemini closer to providing real-time, context-aware assistance through visual information.
[14]
Google just gave vision to AI, but it's still not available for everyone
Google has just officially announced the rollout of a powerful Gemini AI feature that means the intelligence can now see. This started in March as Google began to show off Gemini Live, but it's now become more widely available. Before you get too excited, though, at this stage at least, it's only available on the Google Pixel 9 and Samsung Galaxy S25.

Up until now Gemini has been a little limited, albeit in an impressive way. It's been able to understand voice, images, PDFs and even YouTube videos. Now, thanks to Project Astra, Gemini can see what's on your screen too. This means you can simply give the AI access to your screen and then ask questions about what's going on, and it will be able to understand and answer. Perhaps even more usefully, you can share your rear camera with Gemini to talk about what you're seeing in the physical world too.

Sound familiar? Yup, this is very similar to the tech Apple Intelligence was teased as getting last year. Yet Apple has been rumoured to be struggling with this release, and we may have to wait until iOS 19, or longer, before we see it arrive on iPhones. While the release is limited right now, it will soon be available to all Gemini Advanced subscribers using Android devices.

How to get Gemini Live activated on your phone: One way is to launch the Gemini overlay and select the "Share screen with Live" button. Another way is to launch Gemini Live and then select the screen share icon. In either case, there is a small red timer icon at the top of the screen to show you're being viewed and listened to by Gemini Live, and you can tap it for more details. The whole experience is a bit like being on a call with a real person - blurring the lines between human and AI ever further.
[15]
Gemini Live Can Now 'See' Your Phone (to a Point)
Gemini Live is the chatty, natural conversation mode inside Google's Gemini app, and it just got a significant upgrade: The AI can now instantly answer questions about what it's seeing through your phone's camera and on your phone's screen in real time. The feature is coming first to Google Pixel 9 and Samsung Galaxy S25 phones. You've long been able to offer up photos and screenshots for Gemini to analyze, but it's the real-time aspect of the upgrade that makes this most interesting -- it's as if the AI bot can actually see the world around you. You may remember some of this functionality was shown off by Google under the Project Astra name last year.

Samsung says it "feels like a trusted friend who's always ready to help," while Google says you could use the improved features to get personalized shopping advice, troubleshoot something that's broken, or organize a messy space. You can have a discussion with Gemini Live about anything you can point your camera at.

It's now available as a free update on Pixel 9 and Galaxy S25 phones, with further Android devices getting it soon -- though wider availability will be tied to a Gemini Advanced subscription. As yet, there's no definitive list of which phones are in line for the update, though presumably it needs a certain level of local processing power to work. There's no word yet on it coming to the Gemini app for the iPhone.

As always, the official advice is to "check responses for accuracy," so just because there's a fancy new interface to make use of doesn't mean the Gemini AI is any more reliable than it was before. You're also going to need an active internet connection for this to work, so the app can get some help from the web.

The feature is easy to find: You can launch the Gemini Live interface by tapping the button to the far right of the input box in any Gemini chat (it looks a bit like a sound wave). From there, you'll see two new icons at the bottom: One for accessing the camera (the video camera icon), and one for accessing the phone's screen (the arrow inside a rectangle). Close down the Gemini Live interface, and you'll find your conversation has been recorded as a standard text chat, so you can refer back to it if needed.

As the new features have appeared on my Google Pixel 9, I tested them out using questions I already knew the answers to, to check for any unhelpful hallucinations. First up, I loaded the camera interface and asked Gemini Live about the Severance episode I was watching on my laptop. Initially, the AI thought I was watching You -- presumably confusing its Penn Badgleys with its Adam Scotts -- but it quickly fixed its mistake, identifying the right show and naming the actors on screen.

I then asked about a package with a UN3481 label: lithium-ion batteries packed inside equipment (over-ear headphones, in this case). Gemini Live correctly figured out that lithium-ion batteries were involved, needing "extra care" when handled, but gave no more information. When pushed, it said these batteries were packed separately, not in equipment. Wrong answer, Gemini Live -- you're thinking of code UN3480.

Gemini Live was also able to tell me how to reset my Fitbit Charge 6 when I pointed my phone camera at it (though the AI originally thought it was a Fitbit Charge 5, which is an easy enough mistake to make). It's easy to see how this could come in handy if you're trying to troubleshoot gadgets, and aren't quite sure about the makes and model numbers of the devices.

Sharing your screen with Gemini Live is interesting.
The app shrinks to a small widget, so you can use your phone as normal, and then ask questions about anything on the screen. Gemini Live did a good job of identifying which apps I was using, and some of the content in those apps, like movie posters and band photos. It also accurately translated a social media post in a foreign language for me. Regarding a website showing the recent Leicester v Newcastle soccer match, Gemini Live correctly told me what the score was and which players got the goals -- all information that was already on screen. When I asked when the match was, though, the AI got confused, and told me it happened on May 22, 2023 (the same teams playing, but nearly two years ago).

There was no faulting the speed with which Gemini Live came back with answers, or the calm and reassuring manner in which it responded, but there are still issues around the quality of the results. Of course, the convenience of using this -- pointing the camera and saying "how do I fix this?" rather than crafting a complex Google query -- means that many people may well prefer using it even with the mistakes, but it's still a worry.

Essentially, this is just an enhanced, instant version of visual search: Previously, you might just type "UN3481 label" into Google for the same query. But whereas the traditional search results list of blue links lets you see the information you're looking up and make a judgment on its reliability and authoritativeness, Gemini Live is much more of a closed box that doesn't show its workings. While it feels almost like magic at times, because of that interface, having to double-check everything it says isn't ideal.
[16]
Google's Crazy Gemini Live Camera Abilities Landing on Pixel 9...
A month ago, Google started to roll out one of Gemini Live's most important abilities: a way for the AI to use your phone's camera or see what's on your screen in order to return info. It's powerful because it brings Gemini into your world, not just as a prompt, but as an AI essentially seeing what you see in order to bring back the best results. Today, Google says that all Gemini app users with a Pixel 9 or Galaxy S25 series phone should have access to this new capability.

I'll just note before we get into this that even as someone who rarely dips into the world of AI and is skeptical of almost all of it, this is pretty slick, and I haven't quite figured out yet just how I'll use it going forward.

How this works (in the quickest way) is you fire up Gemini and then tap the button that says "Share screen with Live." This will activate Gemini Live for those casual AI conversations, but it lets Gemini see your screen for questions or analysis. It can either look at whatever is on your screen, or you can then open your camera to let it see your real world. You can also just open a Gemini Live conversation and now find camera and screen share shortcuts next to the pause/stop buttons.

I used this today for a couple of items. The first, which you can see above, was to identify the dish in a photo (which it properly noticed was shakshuka) and then get me a recipe. It even asked during that conversation if I had any food allergies it should know about in order to fine-tune the results it brought back. For the other request, I fired up my camera, pointed it at my coffee mug and made sure there were some other objects in the background. I keep a Jigglypuff figure on my desk, so I asked Gemini if there were any Pokemon around. It found the "pink" figure and asked if it should identify it, so I confirmed that it should. It quickly realized, even in my dimly lit office space, that it was indeed Jigglypuff. Pretty cool.

Like with almost all AI, Google thinks you'll use this to brainstorm ideas or become more organized. They like to think of you using Gemini and Gemini Live to get inspiration for a creative project, organize your living room, get shopping advice, or get feedback on work you've already done. You'll want to grab the latest Google app and Gemini updates through Google Play to get started. Oh, you'll also need to own a Pixel 9 or Galaxy S25 phone or have a subscription to Gemini Advanced.
[17]
Samsung introduces Gemini-powered Visual AI on Galaxy S25 series
Samsung is bringing real-time visual AI conversations to Galaxy users with the Gemini Live update, starting April 7. This new feature will be available free of charge, beginning with the Galaxy S25 series. Gemini Live offers AI-powered assistance for more natural conversational interactions to simplify everyday tasks. Users can activate the feature by pressing and holding the side button, allowing Gemini Live to see what they see and engage in live conversation. The new visual capabilities make everyday decisions easier. For example, users can point their camera to get suggestions on how to categorize items and optimize space, or share their screen while browsing online retailers for personalized style advice. "Together with Google, we are marking a bold step toward the future of mobile AI, delivering smarter interactions that are deeply in sync with how we live, work and communicate," said Jay Kim, Executive Vice President and Head of Customer Experience Office, Mobile eXperience Business at Samsung Electronics. He added that the Galaxy S25 series sets new standards for user engagement through their devices. Gemini Live, including camera and screen sharing capabilities, will be accessible to all Galaxy S25 series users without any additional cost starting April 7.
[18]
Samsung Galaxy S25 users get real-time AI help with Gemini Live
Galaxy S25 Gemini Live is finally official, and it's way more than just another AI gimmick. Starting April 7, Samsung is rolling out Google's Gemini Live to all Galaxy S25 phones at no extra cost. It's like having a second brain in your pocket, one that actually sees what you see and talks you through your mess. With Gemini Live, you just hold down the side button, point your camera at anything, and the AI kicks in with real-time suggestions. Messy closet? Gemini Live can help you sort it out. Can't decide what to wear? Point the camera at your options, and it'll weigh in with outfit ideas. Even while online shopping, you can share your screen to get live style advice, so you're not aimlessly scrolling forever. Samsung and Google are betting big on this. Jay Kim, Samsung's customer experience lead, calls it a bold step for mobile AI. Bold's one way to put it. Gemini Live basically turns your phone into a chatty assistant that helps you organize your fridge, pick dinner recipes, and more. This feature feels less like a robotic tool and more like a sharp-eyed friend keeping you in check. The feature isn't limited to camera views either. Screen sharing opens up even more scenarios, like getting instant tips while browsing for furniture or home gadgets. Gemini Live takes the friction out of daily decisions, no complicated setup needed.
[19]
Pixel 9, Galaxy S25 Series Receiving New Gemini Live Features
Samsung Galaxy S25 series users will need the Gemini Advanced subscription.

Google is rolling out the screen and video sharing features in Gemini Live to the Pixel 9 and Samsung Galaxy S25 series. The Mountain View-based tech giant previewed the features at Mobile World Congress (MWC) 2025 last month. These features were developed by Google DeepMind as part of Project Astra, and offer real-time video processing capability to users. The tech giant confirmed that the Pixel 9 series, including the Pixel 9a, will get the screen-sharing and live video features with the April 2025 Pixel Drop.

In a press release, Google said that both the Pixel 9 series and the Galaxy S25 series are now getting the new Gemini Live features. These artificial intelligence (AI) features will be available to Pixel 9 users for free, the tech giant stated. Notably, the feature is not tied to the one-year free Gemini Advanced subscription, so it is expected to remain available on the device even after the subscription expires. Galaxy S25 users would require a Gemini Advanced subscription to access the feature, however. Gadgets 360 staff members have not spotted the feature on eligible Google or Samsung phones yet, but it should arrive with the April Pixel Drop in a day or two. Google is likely releasing the update in a phased manner. For Galaxy S25 devices, a similar Gemini update is expected to roll out soon.

Google said that the screen-sharing feature can be accessed by opening the Gemini assistant overlay and tapping the "Share screen with Live" floating action button (FAB). Once tapped, Android shows a confirmation message asking users if they want to share their entire screen with the Google app. Since it is a Gemini Live feature, when it is active, users will see a call-style live notification in the status bar indicating that live processing of data is enabled. Users can also activate the feature by opening Gemini Live and tapping the screen share button. Similarly, to share the device's live video feed via the rear camera, users can open Gemini Live and tap the newly added video button at the bottom of the interface. The feature only works as long as the screen is active. Google recommends that users keep the camera movement steady for best results.
[20]
Hands-On: Testing Gemini Live's Screen and Camera Capabilities on Pixel 9
It can identify objects and provide information in real time quite accurately, with only minor hiccups to speak of.

At last year's Google I/O, the company demonstrated the next evolution of Gemini with Project Astra. It allows Gemini to see what's on your phone screen or the world around you, and lets you engage with the AI in real time. Now, Google is finally ready to bring this Gemini Live experience to everyone, as the camera and screen sharing features are now available for a few Pixel and Samsung phones.

Google has announced that Gemini Live's camera and screen sharing features are available for free on the Pixel 9 and Galaxy S25 series of devices. This includes the Pixel 9a, Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, Pixel 9 Pro Fold, Galaxy S25, Galaxy S25+, and Galaxy S25 Ultra. Earlier, we reported that the feature was only available for users with a Google One subscription.

We have the feature running on our Pixel 9 Pro XL, and I gave it a shot to test its AI smarts. I asked it about some of the props we have at the Beebom office, as well as other stuff. It recognized everything accurately, and even shared additional information. I didn't have to point at anything in particular; it understands what's in focus and hence what I am asking about. The same goes for screen sharing. I asked about a recent OnePlus story, and Gemini Live summarized it quite well. Moreover, I tried a more practical use, asking it to "help me set up bedtime mode." It gave me step-by-step instructions throughout the process without any fuss. The best part: I can engage with Gemini in real time without waiting for it to process and fetch the results. It's like having a friend on a video call and asking them about stuff, and it will only get better. Even the Gemini UI shows up similar to a call notification, with Hang Up and Hold buttons. It almost had me impressed, but then I came across the inaccuracies.

However, Gemini's not perfect. It needed a second guess at times to correctly identify some objects, and it struggled with fancy fonts, as it couldn't make out what's written on my water bottle, mistaking a "B" for a "G". When asked when the Pixel 6 would receive its last update, Gemini incorrectly mentioned 2024 as its last supported year, even though the device is now slated to receive updates till 2026 after Google changed its plans for older Pixels.

These aren't just a few minor hiccups, as the problems continued when screen sharing with Gemini Live. I asked about the specs of the upcoming OnePlus 13T, as written in the article on the screen. It kept struggling with the processor name, repeating Snapdragon 8 Gen 3, even though the correct name was right there on the screen. I corrected it a couple of times, and it went further back, saying the phone would feature the 8 Gen 2. That's when I gave up and tried again. On the second attempt, it summarized the story perfectly, as I mentioned above. These problems make me wonder whether this feature uses an older model of Gemini, one that doesn't have up-to-date information. That shouldn't be the case, and even if it is, the AI shouldn't mistake the data right there on the screen.

Google will eventually iron out these inconveniences, so I'll focus on the positives. Having worked in tech, it is rare for me to be amazed by some new technology, especially something AI-driven. But this one had me impressed, as I can see real-life applications of it right away. The feature has its quirks, so I wouldn't say you can be completely dependent on it.
Then again, I wouldn't say that of any AI-powered feature. But the potential is there, and what Google has done here is genuinely impressive. I will continue testing the feature and report back on my usage later. Until then, if you have a Pixel 9 or a Galaxy S25 device, do give it a shot, and tell us in the comments whether you ran into any inaccuracies or issues.
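If you're curious what this kind of screen-sharing query looks like outside the app, Google's public Gemini API can approximate it. To be clear, Gemini Live itself is a phone feature, not a developer surface, and the sketch below is a hypothetical illustration rather than how Google implements it: it assumes the google-generativeai Python SDK, a placeholder API key, a made-up screenshot file, and an assumed model name.

```python
# Hypothetical sketch: approximate Gemini Live's screen-sharing Q&A by
# sending a saved screenshot to Google's public Gemini API. This is an
# illustration, not the app's internals; the API key, file name, and
# model name below are placeholders/assumptions.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
model = genai.GenerativeModel("gemini-2.0-flash")  # assumed model name

screenshot = PIL.Image.open("oneplus_story.png")   # hypothetical screenshot
response = model.generate_content(
    [screenshot, "Summarize the article shown in this screenshot."]
)
print(response.text)
```

The same call pattern, an image plus a natural-language question, also covers the object-identification tests above; only the input changes.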
[21]
Gemini Live with camera and screen sharing rolls out to Pixel 9 and Galaxy S25 series
Google has introduced new camera and screen sharing capabilities for Pixel and Galaxy smartphones, powered by Gemini Live. These features are designed to simplify everyday tasks with real-time AI support. Gemini Live works in over 45 languages and supports real-time conversations using your phone's camera or screen. Here are the key ways it can help:

- Get Context with Your Camera: Point your camera at a dish, landmark, storefront, or object to get instant information. Ask follow-up questions, and the AI updates details as your view changes.
- Organize and Declutter: Show a cluttered shelf, drawer, or closet to get suggestions on categorizing, maximizing space, or deciding what to donate or keep.
- Plan and Navigate: While browsing events or places, use screen sharing to get local information and recommendations to support travel planning or city exploration.
- Compare and Shop Smarter: Share your screen while shopping online to compare features, prices, and reviews. You can also show wardrobe items using the camera to get suggestions on what complements your style.
- Troubleshoot Everyday Issues: Point your camera at items like a squeaky chair or a glitchy device to get help identifying the problem and possible fixes.
- Generate Ideas: Share visual inspiration through screen sharing, like textures or photos, and brainstorm ideas for creative writing, design, or projects.
- Understand Tutorials and Recipes: Use "Talk Live about video" while watching a recipe or tutorial to scale ingredients or clarify steps without pausing or rewinding.
- Get Input on Personal Projects: Share visual or written content to receive feedback on design, layout, captions, or improvements.

Samsung has confirmed that Gemini Live is rolling out to Galaxy S25 devices through a free software update. Press and hold the side button to activate the AI, which can assist in real time using on-screen or camera input. For example, while organizing a wardrobe, pointing the camera lets Gemini suggest ways to categorize or pair items. During online shopping, screen sharing can offer personalized product advice. Jay Kim, Executive Vice President and Head of Customer Experience Office, Mobile eXperience Business at Samsung Electronics, commented on the collaboration.
Google rolls out Gemini Live's real-time visual AI capabilities, initially showcased as Project Astra, to a wider range of Android devices, enhancing AI-powered visual recognition and interaction.
Google has begun rolling out enhanced visual AI capabilities for its Gemini Live feature, previously known as Project Astra, to a wider range of Android devices. This expansion marks a significant step in integrating advanced AI functionality into everyday smartphone use, allowing users to interact with their environment through their device's camera in real time. [1][2]
Gemini Live now offers real-time video analysis and screen-sharing capabilities, enabling users to have interactive conversations about their surroundings or on-screen content. The features were initially thought to be exclusive to the latest Pixel and Samsung Galaxy S25 series, but Google has confirmed they will be available on any Android device running Android 10 or later. [3][5]
The rollout began with select users in late March 2025 and is now expanding more broadly. While the feature comes at no additional cost for Galaxy S25 users, it generally requires a Gemini Advanced subscription, part of the Google One AI Premium plan priced at $20 per month. [3][4]
Users can activate Gemini Live by pressing and holding the side power button on compatible devices, or through the Gemini app. The AI can then analyze the camera feed or a shared screen in real time, offering insights, answering questions, and providing assistance based on what it sees. [2]
Early tests have shown impressive capabilities, though they have also surfaced caveats. While marketed as "real-time," some users have noted that the current implementation behaves more like an enhanced version of Google Lens: the AI appears to capture a snapshot at the moment of the query rather than continuously analyzing the video feed. [4]
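That snapshot-at-query behavior is easy to picture in code. Below is a rough, hypothetical sketch, again not Google's implementation: it grabs a single camera frame only at the moment a question is asked and sends it to the public Gemini API. The OpenCV and google-generativeai calls are real, but the model name and API key are placeholders.

```python
# Hypothetical illustration of "snapshot at the moment of query":
# capture one frame when the user asks, rather than streaming video.
import cv2
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
model = genai.GenerativeModel("gemini-2.0-flash")  # assumed model name

def ask_about_view(question: str) -> str:
    cam = cv2.VideoCapture(0)     # default camera
    ok, frame = cam.read()        # one frame, captured at query time
    cam.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")
    # OpenCV returns BGR arrays; convert to RGB for PIL
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    return model.generate_content([image, question]).text

print(ask_about_view("Where did I leave my scissors?"))
```

A genuinely continuous mode would stream frames instead of sampling one per question, which is closer to what the Project Astra demos implied.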
Gemini Live utilizes the Gemini 2.0 model, which, while powerful, is not the most advanced AI model in Google's arsenal. The Gemini 2.5 Pro model, available in the Google app, is not yet integrated into this feature. [3]
This rollout represents a significant step in Google's vision for AI-powered assistants that can seamlessly interact with the physical world through smartphone cameras. It also positions Google competitively against other tech giants like Apple, which has been developing similar visual AI capabilities. [1][5]
As AI continues to evolve, features like Gemini Live have the potential to dramatically change how users interact with their devices and the world around them, blurring the lines between digital and physical experiences. [1]
Google has hinted at further expansions and improvements to Gemini Live's capabilities. As the rollout continues, users can expect refinements to the real-time analysis features and potentially more advanced AI models being integrated into the system. [3][5]
This development also raises questions about the future of traditional voice assistants like Google Assistant, as more sophisticated AI models like Gemini take center stage in providing interactive and visual AI experiences. [3]