2 Sources
[1]
Google’s Smart Glasses Can Create Fake Photos on the Fly
As big tech companies ready their excursion into smart glasses, a similar playbook is cementing, and that playbook is looking a lot like the one already set out by Meta and its Ray-Ban-branded AI glasses. Hardware from companies like Google, and potentially Samsung and Apple, seems to center around a few key components. You've got cameras, some kind of AI/computer vision, speakers, a voice assistant, navigation, maybe a screen, and, of course, a streamlined way to use generative AI for faking real photos -- wait, what?

In a recent demo of its upcoming smart glasses, which are set to launch sometime this year, Google's Dieter Bohn showed off a few capabilities. While most of them are pretty par for the course for the smart glasses field (using computer vision to get directions to places or parse stuff in your surroundings), one feature in particular is not something I've seen yet.

By linking the smart glasses to Google's image generator, Nano Banana, Bohn shows how you can instruct them to doctor up an image on the fly. In the video demo, Bohn asks Gemini to take a picture of people in the room using the smart glasses, but then superimpose them over the "really cool church in Barcelona that I forget the name of." Based on the demo, it seems to do exactly that, taking people in the room and using AI to essentially Photoshop them in, so it looks as though they're standing in front of the Sagrada Familia in Barcelona.

It's not a trick we haven't seen before. Google has been leaning into AI photography for years now with its Pixel phones. But it's a first for the smart glasses form factor, being able to (theoretically) reduce the friction between taking a picture and using AI to alter the ever-loving f*ck out of it. And even if we've seen Google lean into AI photography in the past, it's certainly moving the needle just a bit further in that direction -- the direction where whether a photo is real or fake apparently doesn't matter.

For context, other smart glasses can kind of do this already, but not to this degree. For instance, you can ask Meta AI -- the AI inside the Ray-Ban Meta AI glasses and the Meta Ray-Ban Display -- to "re-style" a photo to make it more like an oil painting or cartoonish, but it's not meant for recreating anything photorealistic. Meta's version is more focused on how to turn your images into AI slop and less fixated on how you can basically fake an image altogether.

How well Google's Gemini-to-Nano-Banana pipeline works on smart glasses is a question mark, since this is just a short, well-planned demo. Google even emphasizes that the time in the video is edited, which tells me the command either didn't work as planned initially or it took too long for Google's taste. Either way, it's a new trick in smart glasses that we can look out for, and, I guess, theoretically great news for anyone who doesn't care about photos representing reality anymore.
[2]
Google's new Android XR smart glasses use Gemini to AI-edit your world while you're still taking the photo
It's been a minute since we last saw Google's prototype smart glasses in action, but the tech giant recently held demos for them during MWC 2026. Google's Dieter Bohn showcased a number of features, including the ability to use the camera and Nano Banana in combination to take a photo and edit it on the fly. In Dieter's example, he took a photo of a group of people and used Nano Banana to "reimagine" it in front of La Sagrada Familia, a famous church in Barcelona -- also an excellent board game.

To be fair, Meta's Ray-Ban Meta AI glasses and the newer Ray-Ban Display specs can sort of do this. You can ask the glasses to "re-style" an image into something more hyper-stylized like cartoons or oil paintings, but they don't do photorealistic work like what Google showed off. Think of it like AI slop versus deepfakes.

Most of the features Bohn showed off are par for the course for the current generation of AI-powered glasses. The Google-Xreal collaboration Project Aura glasses, for example, have many of the same features that we previewed at the end of last year. When we last saw the Google prototypes in December, we were able to use Live Translation to transcribe and translate Chinese to English. Bohn's demo showcased a similar example. Additionally, he showed the ability to identify real-world objects and use Gemini to prompt off of those items. For example, in one case the glasses identified a Queen album and then played a song. In another example, a poster with an address was used to find walking directions and display a map. He was also able to take a live Google Meet call that involved sharing video and live chatting with a coworker.

The future of Google glasses

Dieter Bohn made sure to emphasize that these glasses are still prototypes and not what "the final versions will look and feel like." He noted that the MWC versions had clip-on prescription inserts that won't be in the final versions, though he did not elaborate on whether built-in prescription lenses would be possible. He also promised more information about display-free AI glasses, Project Aura, and Samsung's Galaxy XR in the coming months. Samsung has promised its Android XR glasses will launch this year and will probably feature many of the same Gemini-based tools. Most likely, the next time Google says anything about the future of Android XR and smart glasses will be at Google I/O 2026, which will take place starting on May 19. Tom's Guide will be on hand for any news Google drops that week, so keep us in your tabs.
Google showcased its upcoming smart glasses at MWC 2026, revealing a controversial new feature that uses Gemini AI and Nano Banana to create photorealistic fake images on the fly. The demo showed how users can take a photo and instantly superimpose subjects onto different backgrounds, raising fresh questions about photo authenticity in wearable tech.
Google smart glasses are pushing the boundaries of AI photo editing in ways that blur the line between reality and fabrication. During recent demonstrations at MWC 2026, Google's Dieter Bohn showcased the upcoming Android XR smart glasses and revealed a feature that stands apart from anything currently available in the wearable tech market [1][2]. By linking the glasses to Google's image generator Nano Banana, users can instruct Gemini AI to doctor images in real time, essentially creating photorealistic fake photos without ever touching a phone or computer.
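As described, the flow is a short pipeline: the assistant parses the spoken command, the glasses capture a frame, and the image model composites the result. The Python sketch below illustrates that shape only; every name in it (glasses, gemini, nano_banana, and their methods) is hypothetical, since Google has not published an API for this feature.

```python
# Hypothetical sketch of the capture-then-edit flow from the demo.
# None of these names are real Google APIs: `glasses`, `gemini`, and
# `nano_banana` are stand-ins for whatever assistant, camera, and image
# services actually back the feature.

from dataclasses import dataclass


@dataclass
class EditRequest:
    """A spoken command parsed into a structured image-edit request."""
    photo: bytes        # frame captured by the glasses' camera
    instruction: str    # e.g. "put us in front of the Sagrada Familia"


def handle_voice_command(glasses, gemini, nano_banana, utterance: str) -> bytes:
    """Turn one voice command into an edited photo, end to end."""
    # 1. The assistant interprets the spoken request into an edit prompt.
    instruction = gemini.parse_edit_intent(utterance)   # hypothetical call
    # 2. The glasses capture a frame on demand.
    photo = glasses.camera.capture()                    # hypothetical call
    # 3. The image model composites the subjects onto the new background.
    request = EditRequest(photo=photo, instruction=instruction)
    return nano_banana.edit(request.photo, request.instruction)  # hypothetical
```

What makes the form factor notable is the absence of any intermediate step: in this shape, the edit happens in the same breath as the capture, which is exactly the reduced friction both sources highlight.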
In the demonstration, Bohn asked the glasses to capture a photo of people in the room, then commanded them to superimpose the subjects onto a different background, placing them in front of Barcelona's Sagrada Familia. The glasses appeared to execute this command seamlessly, using AI to generate what looks like an authentic photograph of people standing before the famous church, even though they never left the room [1]. This represents a significant advancement in AI photography, compressing the timeline between capture and manipulation to near-instantaneous speed.

While other smart glasses offer image modification capabilities, Google's implementation takes a notably different direction. Meta's Ray-Ban Meta AI glasses and the newer Ray-Ban Display can "re-style" photos into stylized formats like oil paintings or cartoons, but these transformations are clearly artificial [1][2]. Meta's approach focuses on artistic interpretation rather than photorealistic deception. Google's prototype smart glasses, however, are designed to AI-edit your world in ways that could pass as genuine photographs, raising immediate concerns about photo authenticity.

The distinction matters. When Meta transforms your image into cartoon-style "AI slop," no one mistakes it for reality. But when Google's glasses can seamlessly transport you to a different location while maintaining photorealistic quality, the technology enters murkier ethical territory [2]. This capability represents real-time photo editing that could fundamentally change how we perceive images captured through wearable devices.

The AI photo manipulation feature wasn't the only capability demonstrated at MWC 2026. Bohn showcased several functions that align with current smart glasses standards, including computer vision for object identification, live translation capabilities that can transcribe and translate foreign languages in real time, and navigation features that display walking directions [2]. In one example, the glasses identified a Queen album and prompted Gemini AI to play a song. In another, they recognized an address on a poster and generated a map with directions.
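These examples share one pattern: a vision model labels what the camera sees, and the assistant maps that label to an action. Below is a minimal sketch of that loop, assuming hypothetical camera, vision, and assistant services; the action names are illustrative, not Google's implementation.

```python
# Illustrative sketch of the "see something, act on it" loop from the demo.
# Every service here is a stand-in; this shows the general pattern
# (a vision label fed to an assistant that picks an action), not Google's code.

from typing import Callable, Mapping


def act_on_surroundings(
    camera,
    vision,
    assistant,
    actions: Mapping[str, Callable[[str], None]],
) -> None:
    frame = camera.capture()            # hypothetical capture call
    label = vision.identify(frame)      # e.g. "Queen album" or "poster with address"
    # The assistant picks the most fitting action for the recognized object.
    choice = assistant.choose_action(label, options=list(actions))
    actions[choice](label)              # e.g. play a matching song, show a route
```

In the demo's terms, a recognized album cover would route to a play-music action, while a poster with an address would route to navigation.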
The glasses also demonstrated the ability to handle live Google Meet calls with video sharing, suggesting they could serve as a hands-free communication device for remote collaboration. These features position Google's offering alongside competitors like Samsung and Apple, all following a similar hardware playbook that includes cameras, AI-powered computer vision, speakers, voice assistants, and navigation capabilities.
Google emphasized that the timing in the demo video was edited, which raises questions about actual performance [1]. This admission suggests either the command didn't work as planned initially or the processing time exceeded expectations. How well the Gemini AI to Nano Banana pipeline actually performs in real-world conditions remains uncertain, as controlled demonstrations often mask practical limitations.

Bohn made clear that these remain prototypes and don't represent what "the final versions will look and feel like" [2]. The MWC 2026 versions featured clip-on prescription inserts that won't appear in final models, though whether built-in prescription lenses will be available remains unaddressed. Google promised more details about display-free AI glasses, Project Aura, and Samsung's Galaxy XR in the coming months, likely at Google I/O 2026 starting May 19.

The introduction of photorealistic fake-photo generation in smart glasses represents a significant shift in how wearable technology might reshape our relationship with images. Google has been leaning into AI photography for years through its Pixel phones, but the smart glasses form factor dramatically reduces friction between capture and manipulation [1]. What once required opening an app and making deliberate edits can now happen through a simple voice command while the glasses are already on your face.

This development arrives as Samsung prepares to launch its Android XR glasses this year, likely featuring many of the same Gemini-based tools [2]. The technology signals a future where distinguishing authentic images from AI-generated fakes becomes increasingly difficult, particularly when the tools for creating convincing fabrications are literally worn on your face. Whether photos represent reality or not may soon become a question we can't answer by looking alone.