3 Sources
[1]
Google’s Smart Glasses Can Create Fake Photos on the Fly
As big tech companies ready their excursion into smart glasses, a similar playbook is cementing, and that playbook is looking a lot like the one already set out by Meta and its Ray-Ban-branded AI glasses. Hardware from companies like Google, and potentially Samsung and Apple, seems to center around a few key components. You've got cameras, some kind of AI/computer vision, speakers, a voice assistant, navigation, maybe a screen, and, of course, a streamlined way to use generative AI for faking real photos -- wait, what?

In a recent demo of its upcoming smart glasses, which are set to launch sometime this year, Google's Dieter Bohn showed off a few capabilities. While most of them are pretty par for the course for the smart glasses field (using computer vision to get directions to places or parse stuff in your surroundings), one feature in particular is not something I've seen yet. By linking the smart glasses to Google's image generator, Nano Banana, Bohn shows how you can instruct them to doctor up an image on the fly. In the video demo, Bohn asks Gemini to take a picture of people in the room using the smart glasses, but then superimpose them over the "really cool church in Barcelona that I forget the name of." Based on the demo, it seems to do exactly that, taking people in the room and using AI to essentially Photoshop them in, so it looks as though they're standing in front of the Sagrada Familia in Barcelona.

It's not a trick we haven't seen before. Google has been leaning into AI photography for years now with its Pixel phones. But it's a first for the smart glasses form factor, being able to (theoretically) shorten the friction between taking a picture and using AI to alter the ever-loving f*ck out of it. And even if we've seen Google lean into AI photography in the past, it's certainly moving the needle just a bit further in that direction -- the direction where whether a photo is real or fake apparently doesn't matter.

For context, other smart glasses can kind of do this already, but not to this degree. For instance, you can ask Meta AI -- the AI inside the Ray-Ban Meta AI glasses and the Meta Ray-Ban Display -- to "re-style" a photo to make it more like an oil painting or cartoonish, but it's not meant for recreating anything photorealistic. Meta's version is more focused on how to turn your images into AI slop and less fixated on how you can basically fake an image altogether.

How well Google's Gemini-to-Nano-Banana pipeline works on smart glasses is a question mark, since this is just a short, well-planned demo. Google even emphasizes that the video is edited for time, which tells me the command either didn't work as planned initially or it took too long for Google's taste. Either way, it's a new trick in smart glasses that we can look out for, and I guess theoretically great news for anyone who doesn't care about photos representing reality anymore.
[2]
Google's new Android XR smart glasses use Gemini to AI-edit your world while you're still taking the photo
It's been a minute since we last saw Google's prototype smart glasses in action, but the tech giant recently held demos for them during MWC 2026. Google's Dieter Bohn showcased a number of features, including the ability to use the camera and Nano Banana in combination to take a photo and edit it on the fly. In Dieter's example, he took a photo of a group of people and used Nano Banana to "reimagine" it in front of La Sagrada Família, a famous church in Barcelona -- also an excellent board game.

To be fair, Meta's Ray-Ban Meta AI glasses and the newer Ray-Ban Display specs can sort of do this. You can ask the glasses to "re-style" an image into something more hyper-stylized like cartoons or oil paintings, but they don't do photorealistic work like what Google showed off. Think of it like AI slop versus deepfakes.

Most of the features Bohn showed off are par for the course for the current generation of AI-powered glasses. The Google-Xreal collaboration Project Aura glasses, for example, have many of the same features that we previewed at the end of last year. When we last saw the Google prototypes in December, we were able to use Live Translation to transcribe and translate Chinese to English. Bohn's demo showcased a similar example. Additionally, he showed the ability to identify real-world objects and use Gemini to prompt off of those items. For example, in one case the glasses identified a Queen album and then played a song. In another, a poster with an address was used to find walking directions and display a map. He was also able to take a live Google Meet call that involved sharing video and live chatting with a coworker.

The future of Google glasses

Dieter Bohn made sure to emphasize that these glasses are still prototypes and not what "the final versions will look and feel like." He noted that the MWC versions had clip-on prescription lenses that won't be in the final versions, though he did not elaborate on whether prescription lenses would be possible. He also promised more information about display-free AI glasses, Project Aura, and Samsung's Galaxy XR in the coming months. Samsung has promised its Android XR glasses will launch this year and will probably feature many of the same Gemini-based tools.

Most likely, the next time Google says anything about the future of Android XR and smart glasses will be at Google I/O 2026, which will take place starting on May 19. Tom's Guide will be on hand for any news Google drops that week, so keep us in your tabs.
[3]
Google Smart Glasses Can Take a Photo and Immediately Edit It with AI
Google has shown off a prototype of its AI smart glasses that has an eyebrow-raising feature: it can edit photos with AI on the fly. Dieter Bohn, formerly of The Verge, took to Reddit and X to share a short demonstration of the prototype glasses. Much of it was the usual stuff, like live translation, directions, and computer vision AI that can tell the user about the objects they're looking at. But one feature appeared novel.

"Of course AI glasses have a camera and because we have Gemini, you can do really fun things with the camera all in one motion," Bohn explains. He then takes a photo of the people nearby and "puts them somewhere fun," says Bohn. "Maybe like the, er, really cool church in Barcelona that I forget the name of." Gemini's AI image generator is called Nano Banana, and it works in the background to create an eerily convincing image of Bohn's colleagues standing in front of La Sagrada Família, the church in Barcelona that Bohn was thinking of.

Gizmodo notes that this technology is a new feature not seen before in smart glasses. Meta Ray-Ban AI glasses do have a "re-style" function, which gives photos a cartoonish makeover, but it's not photorealistic like this one. Quite why anyone would want to take a photo and pretend they're somewhere else is an entirely different issue.

The AR glasses that Bohn showed off have a display inside the lenses, but he adds that a non-display version will also be offered by Google, still powered by Gemini. Google's AI smart glasses are expected to be released later this year.
Google demonstrated prototype smart glasses at MWC 2026 that can take a photo and immediately alter it using AI. The glasses use Gemini and Nano Banana to create photorealistic fake photos on the fly, transporting subjects to different locations like Barcelona's La Sagrada Família. This marks a shift in smart glasses capabilities, raising questions about photo authenticity as the technology prepares for launch later this year.

Google has unveiled a striking new capability for its upcoming AI-powered smart glasses during recent demonstrations at MWC 2026. The prototype showcased by Dieter Bohn, formerly of The Verge, reveals how the company is pushing the boundaries of AI photography by enabling users to generate photorealistic fake photos instantly while capturing images [1]. By connecting the smart glasses to Gemini and its image generator Nano Banana, users can command the device to take a photo and immediately manipulate it in ways that blur the line between reality and fabrication [2].

In the demonstration, Bohn captured an image of colleagues in a room and instructed the glasses to place them in front of La Sagrada Família, the iconic church in Barcelona. The result was a convincingly altered image that appeared to show the subjects standing at the famous landmark [3]. This on-the-fly photo editing capability represents a notable departure from existing smart glasses offerings and signals Google's intent to integrate AI manipulation seamlessly into everyday photography.

While Meta has already entered the smart glasses market with its Ray-Ban Meta AI glasses, Google's approach takes this "AI-edit your world" functionality to a different level. Meta's Ray-Ban offerings include a "re-style" feature that transforms images into stylized formats like oil paintings or cartoons, but these alterations are clearly artistic rather than photorealistic [1]. The distinction matters: Meta's version creates what some describe as "AI slop," while Google's technology aims for convincing photorealistic results that could easily be mistaken for genuine photographs [2].

This capability builds on Google's established history with AI photography through its Pixel phones, where computational photography and AI-assisted editing have become standard features. However, the smart glasses form factor dramatically reduces the friction between capturing and manipulating images, making the process nearly instantaneous [1]. The Android XR smart glasses also include other expected features like live translation, computer vision for identifying objects, navigation assistance, and the ability to take Google Meet calls with video sharing [2].

The technology raises immediate concerns about photo authenticity in an era where distinguishing real from fabricated images grows increasingly difficult. Google has not yet addressed how these images will be labeled or whether any metadata will indicate AI manipulation. The demonstration itself was edited for time, suggesting the feature either required multiple attempts or took longer than the final video portrayed [1]. This hints at potential technical challenges that Google may need to resolve before the consumer launch.

Bohn emphasized that the demonstrated glasses remain a prototype and don't represent the final design [2]. The MWC version featured clip-on prescription lenses that won't appear in the finished product, though details about prescription lens options remain unclear. Google also plans to offer display-free AI glasses powered by Gemini, in addition to Project Aura, its collaboration with Xreal [2].

Google expects to launch its smart glasses later this year, with more details likely emerging at Google I/O 2026, scheduled to begin on May 19 [2]. Samsung has also committed to releasing its Galaxy XR glasses this year, which will likely feature many of the same Gemini-based tools given the Android XR platform collaboration [2]. Apple is also rumored to be developing smart glasses, suggesting the market may see significant competition in the coming months.

For consumers and professionals who rely on photography to document reality, this development signals a critical juncture. The ability to seamlessly create convincing fake images through wearable technology could have implications for journalism, legal evidence, social media authenticity, and personal documentation. As Google refines this technology ahead of launch, the industry will be watching closely to see how the company addresses concerns about misuse while delivering on the promised convenience of integrated AI manipulation.
Summarized by Navi