5 Sources
[1]
Google's Putting It All on Glasses Next Year: My Demos With Project Aura and More
What if I told you that, just a couple of months after Google and Samsung released the AI-infused, immersive Galaxy XR headset, I got to wear a pair of display glasses that can do almost the same thing? Xreal's Project Aura, a glasses-sized alternative to bulky headsets, is ready to slide into your jacket pocket next year.

Google announced its Android XR intentions a year ago, promising a return to AR and VR fueled by a lot of Gemini AI in a range of product forms, from VR headsets to glasses. The Samsung Galaxy XR, a mixed reality headset similar to Apple Vision Pro, was the first Android XR product released, and these Xreal glasses could be the next.

Project Aura wasn't the only thing Google showed me. I also got to try on the latest iteration of Google's upcoming competitors to Meta Ray-Ban smart glasses. They're coming next year from eyewear partners Warby Parker and Gentle Monster, and they'll work with Google's watches. And Samsung's Galaxy XR headset added more features: a Windows PC connection app, and photo-real avatars in beta called Likenesses that are similar to Apple's Vision Pro Personas.

Google's trumpeting that it's serious again about glasses, just as Meta is ramping up its efforts and Apple could be around the corner with glasses of its own. While you may not even know if you want to wear smart glasses yet, Google's taking a multi-product approach that makes a lot of sense now that I've done the latest demos. It's the way these new glasses work with phones, apps and even watches that makes me the most interested.

Sitting on a sofa with Project Aura on my face, the prototype glasses immediately felt like VR shrunken to a far smaller form. I launched a computer window wirelessly, streamed from a nearby PC, and controlled it with my hands. I laid out multiple apps. I circled a lamp in the room with a gesture, which triggered a Google search. And I launched and played Demeo, a VR game, using my hands in the air.

The most astonishing part to me? All this was possible with just a pair of glasses, even if they were tethered to a phone-sized processor puck. They also had a 70-degree field of view. Yes, that's smaller than what I see with VR headsets, but honestly more than enough to experience immersion. It felt like my VR and AR worlds were colliding.

Project Aura uses an adapted form of Xreal's existing display glasses. The puck contains the same Qualcomm Snapdragon XR2+ Gen 2 chipset used in the Galaxy XR. Using Aura, wandering around the room, was the closest I've seen to full AR glasses outside of Meta's Orion demo a year ago, or Snap's bulkier Spectacles. But unlike Orion, Aura's being released as a real product next year, at a price that should be lower than Vision Pro or Galaxy XR. (Snap's next version of Spectacles is coming next year, too.)

Also unlike Orion, Project Aura doesn't have fully transparent lenses. Instead, it bounces its displays down from the top of the glasses, giving an AR effect but with some extra prism-like lens chunks in between (much like Xreal's One Pro glasses). Xreal already has a whole lineup of tethered display glasses that work like headphones for your eyes. Glasses in this category have existed for years, acting as plug-in monitors for phones, laptops and game handhelds. Aura's difference is that it also adds three cameras that can do full-room tracking and hand tracking, plus take photos and videos that can be recognized by Gemini's AI.
Inside, there's a larger and higher-resolution micro OLED display that's better than what I saw on Xreal's existing glasses line. Project Aura really does look like the rest of Xreal's glasses, and it can also work like them, too, plugging into laptops and phones. But the processing puck gives it extra power. Google's team told me the Qualcomm chip (Snapdragon XR2+ Gen 2) can run all of the apps that the Galaxy XR headset can, and I tried a few demos that backed up those claims.

First, I ran through the same setup demo I tried on Galaxy XR, where I pinched floating 3D cubes that hung in the air, using my fingers. Aura's hand tracking overlaid onto my own hands, and everything felt like augmented reality floating in the room in front of me.

I also used a new PC Connect app to wirelessly hook into a nearby laptop, casting the Windows monitor in front of me. Xreal's glasses, and others, can do this already when tethered with a USB-C cable. But the PC Connect mode also lets me use hand tracking to point and click on apps and control the Windows screen, something Apple still can't do with Macs via Vision Pro. I was able to launch Android XR apps side by side, too, like a YouTube video window.

I even played a bit of Demeo, a Dungeons and Dragons-like tabletop game made for VR. It ran on the glasses, projecting the board in the room in front of me, and I used my hands to zoom in and control it. Game cards even sprouted from my hand when I looked at them, just like in the AR versions of Demeo on Vision Pro and Galaxy XR. It was damn impressive.

My demo wasn't perfect: sometimes the room tracking slipped a bit, drifting off before re-centering. But this early demo showed me what could be done in hardware so small. Xreal's glasses lean on their own custom chip that can process three camera feeds at once and help with hand tracking, doing all the things VR headsets usually do. Aura doesn't have eye tracking, but the hand tracking was enough for me to point and click in apps and do everything I needed to on the fly.

I even walked across the room, pinched my fingers to invoke Google's Circle to Search, and drew a line around a floor lamp in the room. I saw instant Google results pop up in front of me on where to buy it. Much like the Galaxy XR headset, these glasses can circle-search my world, too.

In a lot of ways, Aura reminded me of the promises of glasses-connected computing that I saw with Spacetop earlier this year, but Aura can go further. It does flat displays, and it also does 3D. It's really, truly an AR device.

I used prescription lens inserts to wear Aura, much like I do with Xreal's existing glasses line. They worked well, and everything looked great. But these aren't all-day glasses. They're meant to be used on the go, like a work device, and then folded away. But considering all they could be capable of, they seem like a far better proposition than lugging a bigger mixed reality VR headset.

According to Google and Xreal, this will be an actual product going on sale next year, not a development kit. Much like Samsung Galaxy XR was previously called "Project Moohan," Project Aura should get a proper name next year, and a price.

Chi Xu, Xreal's CEO and founder, tells me that these are very much a stepping stone as well as a doorway to wireless glasses in the future. They'll also be the first Google-partnered glasses-form devices that will run full Android apps. "We're not trying to solve the all-day wearable capability," says Xu. "But you're going to find that you can totally use this for hours."
Xu says Google's working on standalone all-day glasses that will eventually aim for what Aura can already do, but in the meantime, this is building the pieces of how it'll work with phones. "We really want to polish this experience first," he said. "And once we really have a great experience with the [processing] puck, with glasses, the next question will be, wow, when can we replace the puck with our phone?"

I also tried on Google's Ray-Ban-like smart glasses again, which are nearing release next year too. The Samsung, Qualcomm and Google co-developed hardware will be arriving via Warby Parker and Gentle Monster, in versions with and without displays. According to Google, the release of the glasses will be a bit staggered in 2026, possibly with the non-display models coming first.

It's also a lot clearer how they'll work. I tried a few app demos that show how the glasses will connect with phones. These glasses won't run apps, necessarily, but will show rich notifications pulled from Android phones, along with hook-ins that feel like apps on demand. For instance, I asked for Google Maps directions and saw turn-by-turn instructions float in front of me. When I tilted my head down, I saw a map on the floor that turned as I turned, showing my location. An Uber demo showed how heads-up info could appear on the glasses, and I could look down to see walking directions to my pickup spot.

Google's glasses will support a wide range of prescriptions, I was promised, unlike Meta's limited prescription support for its Display glasses this fall. "Suffice it to say, we take it pretty seriously that these are glasses first and foremost," says Juston Payne, Google's director of product management for XR.

The glasses will also work with iPhones, but via the Google app. On Android phones, however, Gemini will hook into the full OS to do a lot more, feeling like an extension in a similar way to how Pixel earbuds and Android watches already do.

The glasses will also work with watches, Payne confirmed, supporting a limited set of gestures and taps. They'll work as optional control accessories, almost like Meta's neural band for its glasses, but Google's glasses could also send things onto watch screens. "If you have the glasses with no display, and you take a picture, you can look at your wrist and see what that picture looks like," says Payne. "You're talking to Gemini, Gemini has some visual response, look at your wrist." But the display-enabled glasses will let you see more, on a bigger screen, and won't need you to look at your watch at all.

Google's display-enabled glasses can work with head gestures to make different things appear when you look up or down, like my Maps and Uber demos showed me. They'll also let you watch YouTube videos: I saw a quick clip as a demo. The Micro LED color display, made by Raxium, was viewable, but in a smallish virtual window, one that's not as vibrant as a regular phone screen (and semi-transparent). But it's good enough for catching a quick social clip, or maybe heads-up instructions. The display area's large enough, too, to make a video call via Google Meet, which showed me a Google rep on-screen while I shared my video view of a bookshelf in front of me, showing that thumbnail in a smaller sub-screen next to it.

The glasses also look smaller than Meta's Ray-Ban Displays, more similar to the display-free Meta Ray-Bans... and lightweight, too. Although the glasses I tried are just developer hardware prototypes, I'd still wear them around.
I also got a peek at a dual-screen pair that, while not arriving in 2026, could come in 2027. I watched 3D video clips on it, also via a color Micro LED display. The glasses weren't much bigger in size, even with the dual displays onboard.

Where does this all leave Google and Samsung's just-released VR/AR Galaxy XR headset? Good question. While Google and Samsung are clearly glasses-focused, the messaging is that Galaxy XR still represents the full feature set the companies are aiming for. It'll remain the testbed for apps and deeper AI, even if it might not be the thing most people end up wearing.

I got to look at a few new Galaxy XR features, too, which are arriving now. One's a PC-connecting app called Connect that bridges a wireless link to Windows PCs and supports hand tracking to control apps and windows. A new, more photoreal set of avatars, called Likenesses, is rolling out in beta too. They resemble Apple's sometimes-uncanny Persona avatars, although Google and Samsung still haven't unleashed theirs to move outside of chat windows. I also saw peeks at new 3D autoconverting tools using AI: in Maps, photos of places can now look nearly 3D-scanned using Gaussian splats, a trick that'll be moving into photos soon (it looks a lot more immersive than the 3D conversions that already exist). Google's also auto-converting YouTube videos to 3D, but at lower frame rates.

I'm more interested in what the glasses bring. And I'm sure others will be, too. But right now, Samsung Galaxy XR is Google's only product that's out. At least, until sometime in 2026, when the glasses floodgates open in a whole bunch of ways.
[2]
Here's how Google is laying the foundation for our mixed reality future
Today, during the XR edition of The Android Show, Google showed off a bunch of updates and new features headed to its mixed reality OS. And while most of the news was aimed at developers, I got a chance to demo some of the platform's expanded capabilities on a range of hardware including Samsung's Galaxy XR headset, two different reference designs and an early version of Xreal's Project Aura smart glasses and I came away rather impressed. So here's a rundown of what I saw and how it will impact the rapidly growing ecosystem of head-mounted displays. First up was one of Google's reference design smart glasses with a single waveguide RGB display built into its right lens. I've included a picture of it here, but try not to read too deeply into its design or aesthetics, as this device is meant to be a testbed for Android XR features and not an early look at upcoming models. After putting them on, I was able to ask Gemini to play some tunes on YouTube Music before answering a call simply by tapping on the touchpad built into the right side of the frames. And because the reference model also had onboard world-facing cameras, I could easily share my view with the person on the other end of the line. Naturally, I was curious about how glasses had the bandwidth to do all this, because in normal use, they rely on a Bluetooth or Bluetooth LE connection. When asked, Max Spear, Group Product Manager for XR, shared that depending on the situation, the device can seamlessly switch between both Bluetooth and Wi-Fi, which was rather impressive because I couldn't even detect when that transition happened. Spear also noted that one of Google's focuses for Android XR is making it easier for developers to port over the apps people already know and love. This means for devices like the reference design I wore that feature a built-in display (or displays), the OS actually uses the same code meant for standard Android notifications (like quick replies) to create a minimalist UI instead of forcing app makers to update each piece of software to be compliant with an ever-increasing number of devices. Alternatively, for models that are super lightweight and rely strictly on speakers (like Bose Frames), Google has also designed Android XR so that you only need mics and voice controls to access a wide variety of apps without the need for visual menus. Meanwhile, if you're hoping to take photos with your smart glasses, there's a surprising amount of capability there, too. Not only was I able to ask Gemini to take a photo, the glasses were also able to send a higher-res version to a connected smartwatch, which is super handy in case you want to review the image before moving on to the next shot. And when you want to inject some creativity, you can ask Gemini to transform pictures into practically anything you can imagine via Nano Banana. In my case, I asked the AI to change a shot of a pantry into a sci-fi kitchen and Gemini delivered with aplomb, including converting the room into a metal-clad setting complete with lots of light strips and a few bursts of steam. However, one of the most impressive demos was when I asked Google's reference glasses to look at some of that same pantry environment and then use the ingredients to create a recipe based on my specifications (no tomatoes please, my wife isn't a fan). Gemini went down an Italian route by picking pasta, jarred banana peppers, bell peppers (which I thought was a somewhat unusual combination) and more, before launching into the first steps of the recipe. 
Sadly, I didn't have time to actually cook it, but as part of the demo, I learned that Gemini has been trained to understand human-centric gestures like pointing and picking things up. This allows it to better understand context without the need to be super specific, which is one of those little but very impactful tricks that allows AI to feel way less robotic. Then I had a chance to see how Uber and Google Maps ran on the reference glasses, this time using models with both single and dual RGB displays. Surprisingly, even on the monocular version, Maps was able to generate a detailed map with the ability to zoom in and out. But when I switched over to the binocular model, I noticed a significant jump in sharpness and clarity along with a higher-fidelity map with stereoscopic 3D images of buildings. Now, it may be a bit early to call this, and the perception of sharpness varies greatly between people based on their head shape and other factors, but after seeing that, I'm even more convinced that the smart glasses with dual RGB displays are what the industry will settle on in the long term. The second type of device I used was the Samsung Galaxy XR, which I originally tried out when it was announced back in October. However, in the short time since, Google has cooked up a few new features that really help expand the headset's capabilities. By using the goggle's exterior-facing cameras, I was able to play a game of I Spy with Gemini. Admittedly, this might sound like a small addition, but I think it's going to play a big part in how we use devices running Android XR, because it allows the headset (or glasses) to understand better what you're looking at in order to provide more helpful contextual responses. However, the biggest surprise was when I joined a virtual call with someone using one of Google's new avatars, called Likeness. Instead of the low-polygon cartoony characters we've seen before in places like Meta Horizon, Google's virtual representations of people's faces are almost scary good. So good I had to double-check that they weren't real and from what I've seen they're even a step up from Apple's Personas. Google says that headsets like the Galaxy XR rely on interior sensors to track and respond to facial movements, while users will be able to create and edit their avatars using a standalone app due out sometime next year. Next, I got a chance to test out the Android XR's PC connectivity by playing Stray on the Galaxy XR while it was tethered wirelessly to a nearby laptop. Not only did it run almost flawlessly with low latency, I was also able to use a paired controller instead of relying on hand-tracking or the laptop's mouse and keyboard. This is something I've been eagerly waiting to try because it feels like Google has put a lot of work into making Android XR devices play nicely with other devices and OSes. Initially, you'll only be able to connect Windows PCs to the Galaxy XR, but Google says it's looking to support macOS systems as well. Finally, I got to try out Xreal's Project Aura glasses to see how Android XR works on a device primarily designed to give you big virtual displays in a portable form factor. Unfortunately, because this was a pre-production unit, I wasn't able to take photos. That said, as far as the glasses go, I was really impressed with their resolution and sharpness and the inclusion of electrochromic glass is a really nice touch, as it allows users to change how heavily the lenses are tinted with a single touch. 
Alternatively, the glasses can also adjust the tint automatically based on whatever app you are using to give you a more or less isolated atmosphere, depending on the situation. I also appreciate the Aura's increased 70-degree FOV, but if I'm nitpicking, I wish it were a bit higher, as I occasionally found myself wanting a bit more vertical display area. As a device that sits somewhere between lightweight smart glasses and a full VR headset, the Aura relies on a wired battery pack that also doubles as a touchpad and a hub for plugging in external devices like your phone, laptop or even game consoles.

While using the Aura, I was able to connect to a different PC and multitask in style, as the glasses were able to support multiple virtual displays while running several different apps at the same time. This allowed me to be on a virtual call with someone using a Likeness while I had two other virtual windows open on either side. I also played an AR game (Demeo) while I moved around in virtual space and used my hands to reposition the battlefield or pick up objects.

Now I will fully admit this is a lot, and it took me a bit to process everything. But upon reflection, I have a few takeaways from my time with the various Android XR devices and prototypes. More than any other headset or smart glasses platform out now, it feels like Google is doing a ton to embrace a growing ecosystem of devices. That's really important because we're still so early in the lifecycle of wearable gadgets with displays that no one has really figured out a truly polished design like we have for smartphones and laptops. And until we get there, a highly adaptable OS will go a long way toward supporting OEMs like Samsung, Xreal and others.

But that's not all. It's clear Google is focused on making Android XR devices easy to build for. That's because the company knows that without useful software that can highlight the components and features coming on next-gen spectacles, there's a chance that interest will remain rather niche -- similar to what we've seen with the adoption of VR headsets. So in a way, Google is waging a battle on two fronts, which makes navigating uncharted waters that much more difficult.

Google is putting a major emphasis on Android XR's ability to serve as a framework for future gadgets and to support and address developer needs. This mirrors the approach the company takes with regular Android and is the opposite of Apple's typical MO, because unlike the Vision Pro and visionOS, it appears Google is going to rely heavily on its partners like Xreal, Warby Parker, Gentle Monster and others to create engaging hardware. Furthermore, Google says it plans to support smart glasses that can be tethered to Android and iOS phones, as well as smartwatches from both ecosystems, though there will be some limitations for people using Apple devices due to inherent OS restrictions. That's not to say that there won't be Pixel glasses sometime down the road, but at least for now, I think that's a smart approach and possibly a lesson Google learned after releasing Google Glass over a decade ago.

Meanwhile, hi-res and incredibly realistic avatars like Likenesses could be a turning point for virtual collaboration, because, in a first for me, talking to a digital representation of someone else felt kind of natural.
After my demos, I had a chance to talk to Senior Director of Product Management for XR Juston Payne, who highlighted the difference between smart glasses and typical gadgets by saying "Smart glasses have to be great glasses first. They need to have a good form factor, good lenses with prescription support, they need to look good and they have to be easy to buy." That's no simple task and there's no guarantee that next-gen smart glasses and headsets will be a grand slam. But from what I've seen, Google is building a very compelling foundation with Android XR.
[3]
Hands-on: Google Glass is now real with 'monocular' Android XR glasses coming in 2026
After demoing Android XR glasses for the first time in December 2024, my takeaway was that rumors about how Google was behind in augmented reality were greatly exaggerated, if not outright wrong. A year later, Google is on the verge of releasing "AI glasses" with a display in 2026, and they fully realize the vision of Google Glass.

We will see "AI glasses" -- as Google is branding this form factor -- next year with and without displays. The latter, with just cameras, microphones, and speakers, is straightforward enough. However, Google is integrating Android XR with Wear OS in cool ways that finally make Android's cross-platform "Better Together" work interesting. When you take a picture on your display-less glasses, a notification lets you preview the capture in full on your watch. Gestures will also be available to control Android XR.

I thought screen-less glasses would be the extent of what Google and its partners (Samsung with Warby Parker and Gentle Monster) will do next year. What's genuinely surprising is that Google says Android XR devices with a single display (at the right), or "monocular" glasses, are launching in 2026, and the screen is very good.

The Google demo I had last week using monocular prototypes started with asking Gemini to play a song. A now playing screen appeared as a compact rectangle that maybe had two or three colors and tiny album artwork. Expanding it with a tap of the side touchpad revealed the wavy Material seek bar that's on Android 16 today, in a nice example of consistency.

Next, I answered a video call -- much like how "One day..." ends -- and was shocked to see a full rectangular feed of their face. It was truly a floating display in my line of sight, made possible by the microLED tech that Google has been actively developing since its acquisition of Raxium in 2022. The resolution at this distance looks sharp, while the colors are vibrant and phone-like. To top that off, Google had me share my point-of-view camera on the call, and I saw two side-by-side video feeds: the caller's and my own. That screen literally expanding vertically blew my mind.

I was then asked to take a picture and add whatever I wanted to the scene with Nano Banana Pro. Besides the side touchpad, the top of the stem is home to a camera button, while the underside further back has a button to turn on/off the display, specifically Gemini's response transcript. That generated image appeared right there in my line of sight after a few seconds.

The next unrealized Google advantage is the Android app ecosystem. On day one, mobile applications from your phone are projected to Android XR glasses. This approach gives glasses rich media controls and notifications without developers having to do any work, though optimization is possible. The latest Android XR SDK (Developer Preview 3) released today lets them start doing that, with an emulator also available. Google is also detailing its Glimmer design language/guidance that incorporates Material Design. In essence, the most complex UI you will have on Android XR is something akin to a homescreen widget, and that's the right call.

The killer UX interaction and capability that Google has involves AR navigation, or Google Maps Live View from your phone but on glasses. When you're looking straight ahead, you just see a pill of directions, but tilting your head down reveals a map that's basically the corner guide in a video game. The transition animation as you move your head back and forth is absolutely delightful and fluid thanks to the display.
Third-party apps, like Uber, can take advantage of this, and I got a demo of using their Android XR experience to navigate to a pickup spot in an airport with step-by-step directions and images. The screen-less version will come first, but the monocular display version is coming in 2026 and this is such a surprise. In fact, Google is giving these monocular dev kits to developers today and will expand access over the coming months. Until then, Android Studio offers an emulator for optical passthrough experience. Google also demoed binocular glasses to me where each lens has a waveguide display, and this allows you to watch a YouTube video in native 3D with depth. Meanwhile, the same Google Maps experience gives you a richer map that you can zoom in and out on. These are coming later, and will start to unlock productivity use cases that one day could replace the phone at some tasks. When monocular Android XR glasses launch next year, I think people will be incredibly surprised that Google has frames that fit what they've been imagining augmented reality to be. It's unfortunate how hard this form factor is to visually capture. Google's videos accurately capture what you can do, but not the novelty of it happening in your field of view. Google's modern approach to AR glasses is clearly framed -- sorry -- by Google Glass where the company tried to develop in public. That did not work, but I don't think it should blight how Google fundamentally had the right vision as captured by the concept video that showed how augmented reality could fit in day-to-day. Since Glass, Google kept all their very real progress on hardware and software internal until they had something helpful. Compared to sunglasses, the company very much wants to offer something that you'll want to wear throughout the day. It wasn't a very exciting decade for fans of AR and Google, but I can't fault them now that I've seen it work. I thought we needed a few more hardware display breakthroughs before we got real AR glasses. Those advances are here -- in full -- and coming next year as something you can buy.
[4]
I just saw the future with Google's Android XR smart glasses -- and Meta and Apple are in trouble
So I'm chatting with Google Gemini while wearing a pair of Android XR smart glasses, and I tell the assistant to brighten up an image before I've even taken the pic. Gemini happily obliges. I also ask for directions to a nearby restaurant, and Google Maps shows me turn-by-turn directions right in my field of view. And when I look down briefly, I can see the whole map to reorient myself.

This is just scratching the surface of what these Android XR glasses can do. The ones I tested are a prototype from Google, but the glasses are coming out for real in 2026 through partners like Samsung, Warby Parker and Gentle Monster. I also tried out Xreal's amazing Project Aura glasses, which squeeze a lot of what the Samsung Galaxy XR headset can do down into a pair of sleek specs, as well as a killer upgrade for the Galaxy XR itself. And I think Meta (and Apple) could be in trouble.

First, let's focus on the prototype Android XR display glasses. I tried everything from music playback and Google Maps to live translation, and these glasses delivered a pretty smooth experience -- without the need for a neural wristband like the Meta Ray-Ban Display glasses. More important, a ton of Android apps will "just work" at launch without developers having to lift a finger. The smart glasses are smart enough to simulate an experience very similar to what you might get from an app in the Quick Settings menu on your phone.

The display on the monocular glasses was fairly bright and crisp, and I like how the screen is centered and slightly below your usual sightline. That way you don't have to look like a fool constantly moving your eyes. By default the display shows the time and temperature, but things get a lot more interesting once you start trying out apps. For example, once we fired up YouTube Music on a nearby phone, I could easily pause playback by tapping the right arm of the glasses or skip to the next track just by swiping forward. It was easy to make out album art, too.

From there I used the Android XR glasses to ask Gemini what meal I could make while looking at ingredients on a shelf. And I also used Google Maps to get directions to the Standard Grill. I saw the next turn in the glasses, and as I looked downward I could see the larger map view automatically. Google Maps has a different voice than Gemini, which was a bit jarring at first, but Google says they left the voices distinct on purpose so you don't try asking Google Maps for things only Gemini can answer.

Even though others can't see your face during video calls, it was pretty cool to join a Google Meet call and enable others to see my field of view as I panned around. But the even cooler communication trick is live translation. I could see the words being transcribed from Chinese to English as a woman spoke to me. For her to see my side of the conversation translated, I would have to show her my phone. (Yeah, it's going to take a while for everyone to have smart glasses.)

My only main complaint so far is that the display looked washed out when I looked out the window. But Google promises that the final version will have a brighter display, and that models with transition lenses that turn darker in direct sunlight will help mitigate this issue.

I also tried on a prototype "binocular" pair of smart glasses in order to get a sense of what it's like to have two displays going at once, one for each eye. And there are a couple of benefits right now.
First, the glasses can instantly turn 2D videos into 3D, and I got a taste of that while watching Tom Holland's Spider-Man sling webs while walking up a tall building. This is actually tailor-made for YouTube Shorts. You also get a larger view for Google Maps, giving you more info at a glance.

It's worth noting that it's easy to turn off the displays on both pairs of smart glasses at any time by pressing the button on the underside of the right arm. There's a separate button on the top right of all of these glasses for capturing photos and videos. This is an easy way to save power, and you can always use your voice to get stuff done, whether it's opening apps or asking Gemini to get something done for you. Plus, Google promises glasses are coming without a display at all, similar to the Ray-Ban Meta (gen 2). So if you want longer battery life and a cheaper price, that will be the way to go.

My favorite Android XR demo of what's coming is Project Aura from Xreal. It squeezes the Galaxy XR headset experience down into a fairly sleek pair of glasses. There's no need for video passthrough. You just see the real world in front of you. The micro OLED displays inside the Aura are astonishingly sharp and colorful, which I experienced while playing the Demeo game. I could pick up the individual game pieces and make out very fine detail, even in a room with a lot of ambient light.

The 70-degree field of view is definitely narrower than the Galaxy XR (100 degrees), but it's the widest ever seen in AR glasses, and a trade-off I think many will be willing to make to wear something that's lighter. Plus, you get about double the battery life of Samsung's headset at about 4 hours. The battery itself is placed in a pack that also houses the Snapdragon XR2+ Gen 2 chip. Unfortunately, the glasses need to be tethered to the pack at all times via a cable, but at least this pack comes with a built-in clip for attaching to your pants. Bonus: you can use the top of this pack as a wireless mouse, which worked fairly well when I navigated between a Windows 11 desktop within Android XR and a YouTube video playing in a separate window. The front-facing cameras did a solid job tracking my finger movements, and the various Android XR gestures (pinch, scrolling, etc.) worked well. But I had to lift my hands a bit more versus using the Galaxy XR.

Last but not least, Samsung let me try out some new Android XR upgrades that are coming to the Galaxy XR headset. This includes PC Connect, which lets you connect immediately to your laptop or desktop using a dedicated Android XR app. After firing up the app and selecting the PC I wanted to connect to, I could instantly see the Windows desktop in front of my eyes. I picked up a gaming controller and started playing "Stray" with no lag at all. Previously, you needed to have a Galaxy Book to connect to a PC through the Galaxy XR, so this app really opens things up. Just keep in mind that you'll see the best results if your PC has a dedicated graphics card.

I also got a chance to try out Likeness, which is Google's version of the Apple Vision Pro's Persona. But instead of using the headset to create your avatar, you use a phone. It's a similar process to setting up Face ID on your iPhone, scanning your face and expressions. I didn't get to try my own Likeness, but I could see someone else's during a Google Meet call in my headset. And her Likeness looked pretty realistic, including the blinks and smile. (Doing teeth is hard in mixed reality.)
For more, see our guide to all of the new Galaxy XR upgrades. Google is clearly behind Meta right now by not having a pair of smart glasses on the market yet. But based on my demos, I think it could easily overtake the Meta Ray-Ban Display. You'll instantly get access to a ton more apps, and Gemini is further ahead as an AI assistant versus Meta. Google also has an opening with Apple, which is rumored to only offer a display-less pair of smart glasses to start -- and the new Siri is still delayed until 2026.

The big question is what all of these smart glasses are going to cost. Clearly, the screen-free AI glasses will be the most affordable, and then you'll pay more for the monocular display glasses and even more for the binocular glasses. Wired XR glasses like Project Aura from Xreal have a lot of potential, especially for business travelers, gamers, or anyone who wants an immersive mixed reality experience at home and on the go.
[5]
I tried the next-gen Android XR prototype smart glasses, and these frames are ready for your close-up
It's becoming a familiar feeling, that moment of delight when a bit of information, video, photos, navigation, or even live translation floats before my eyes. I've seen it now with Meta Ray-Ban Displays and Meta Orion. This is the new state of the art for smart glasses, and I was excited to see Android XR finally joining the party. This week marks a critical turning point for the Google Android XR journey, one that began somewhat inauspiciously with the Samsung Galaxy XR but is now set to soon deliver on a promise of wearable, take-anywhere, Gemini AI-backed smart glasses. At Monday's Android Show: XR Edition, Google is unveiling two smart glasses prototype developer editions: one a monocular experience, and the other a dual-display experience. It's also joining with partner Xreal to give a first look at the Xreal Project Aura, the first near-production-grade smart glasses to feature Android XR. I had a rare chance to try all three new smart eyewear (and even an upgrade to the Samsung Galaxy XR) and came away not only impressed but anxious to start wearing a pair full-time. We started with the monocular Android XR prototype frames, which were notable for being only slightly chunkier than traditional glasses. They'd been prefitted with prescription lenses to adjust for my eyesight, which sat right behind what looked like your typical clear eyeglass lenses. Google is using waveguide technology for both the mono and dual display versions. Basically, this uses tiny displays embedded in the edges of the frames. Imagery is then projected through the lens and delivered via the waveguides to the wearer's eye or eyes. It creates the illusion of a floating screen or images. One of the neat tricks here is that while sharp images of, say, the time and temp can remain floating in front of your eyes, they never occlude your vision. Instead, you're simply changing focus from near (to view the AR imagery) to far to see what's really in front of you. This, by the way, stands in contrast to high-resolution micro displays used by most mixed reality platforms like Vision Pro and Galaxy XR. With those, you're never actually looking through a lens; instead, the whole world, both real and augmented, is presented to you on the stereo micro displays. Google kept both prototypes thin, light, and comfortable by handing most processing duties to a paired Google Pixel 10 (the plan is to make them work with iPhones, as well). This seems like the preferred strategy for these types of wear-all-the-time smart glasses, and, to be honest, it makes sense. Why try to recreate the processing power of a phone in your glasses when you will almost certainly have your phone with you? It's a marriage that, in my brief experience, works. Google calls the frames "AI Glasses", leaning into the always-ready assistance Google Gemini can provide. There will be display-free models that listen for your voice and deliver answers through the built-in speakers. Throughout my demos, I saw that, even with the displays turned off, Gemini at your beck and call could still be useful via audio interactions. Still, there's something about the in-lens displays that is just so compelling. While the monocular display shows for only one eye, your brain quickly makes the adjustment, and you interpret the video bits as being shown to both eyes, even if some of it is slightly left of center. My initial experience was of a small floating time and temperature; I could focus in to view it or look past it to ignore it. 
You can also, obviously, turn off the display. In some ways, the experience that followed was very much like the one I had just a few months ago with Meta Ray-Ban Display. The frames are fitted with cameras that you can use to either show Gemini your world or to share it with others.

Summoning Gemini with a long press on the stem, I asked it to find me some music that fit the mood of the room. I also asked it to play a Christmas song by David Bowie. Floating in front of my eye was a small YouTube playback widget connected to the Bowie/Bing Crosby version of "Little Drummer Boy," which I could hear through the glasses' built-in speakers. Google execs told me they didn't need to write any special code for it to appear in this format.

At another point, I looked at a shelf full of groceries and asked Gemini to suggest a meal based on the available ingredients. There were some cans of tomatoes, so naturally, I got tomato sauce. I took every opportunity to interrupt Gemini and redirect or interrogate it. It handled all of this with ease and politeness, like someone used to dealing with rude customers.

Taking a picture is easy, but the frames also have access to Gemini's models and can use Nano Banana Pro to add AI enhancements. I looked at a nearby window shelf and asked Gemini to fill the space with stuffed bears. Like most other requests, this went from the glasses to the phone to Google's cloud, where Nano Banana Pro quickly did its work. Within seconds, I was looking at a photorealistic image of stuffed bears adorably situated on the windowsill. The imagery is always relatively sharp and clear, but without ever fully blocking my vision.

Someone on the Google team called the glasses using Google Meet; I answered and saw a video of them. Then I showed them my view. One of the more startling demonstrations was when a Chinese speaker entered the room, and the glasses automatically detected her language and translated on the fly. I heard her words translated into English in my ears, but could also read them in front of my eyes. The speed and apparent accuracy were astonishing.

Naturally, the glasses could be an excellent heads-up navigation system. I asked Gemini to find a nearby museum, and once we settled on the Museum of Illusions (visual, not magic), I had it provide turn-by-turn directions. When I looked up, I could see where I needed to turn next, and when I looked down, I could see my position on the map and which direction I was facing. Google partnered with Uber to carry this experience indoors. They showed me how the system could help me navigate inside an airport based on Uber's navigational data.

I next donned the dual-display prototypes. They appeared to be no bigger or heavier than the monocular versions, but delivered a starkly different visual experience. First, you get a wider field of view, and because it's two displays (one for each eye), you get instant stereovision. In Maps, this gives you 3D overviews of cities that change based on how you view them. The frames can make any image or video 3D, though some of this looked a little weird to my eyes; I will always prefer viewing spatial content that was actually shot in 3D. Still, it's useful in Google Maps where, if you go inside an establishment, the bi-displays turn every interior image into a 3D image.

Google's Android XR team gave me a brief early hands-on demo of Project Aura.
Xreal has been a leader in display glasses that essentially produce virtual 200-inch displays from any USB-C-connected device (phones, laptops, gaming systems), but Project Aura is a different beast. It, like Samsung Galaxy XR, is a self-contained Android XR system, a computer on your face in a lightweight eyeglasses form. At least that's the promise.

Like Xreal One, the eyeglasses use Sony Micro LED displays that project images through thick prisms to your eyes. These glasses were also prefitted with my prescription, so I could see the experience clearly. Unlike the Android XR prototypes, Xreal Project Aura's glasses connect through a port on the tail end of one stem to a smartphone-sized compute puck that, interestingly, includes an embedded trackpad for mouse control, though I could not quite get it to work for me. They offer a clear and relatively spacious 70-degree FoV.

Like the Samsung Galaxy XR, the Aura uses gesture control. There are no cameras tracking the eyes, so I had to intentionally point at and pinch on-screen elements. Since it's an Android XR system, the control metaphors, menus, and interface elements are, for better or worse, identical to those I found with the Samsung Galaxy XR. We used them to quickly connect to a nearby PC for a big-screen productivity experience.

My favorite part was a giant game board demo. This was a sort of Dungeons and Dragons card game in which I could use one or both hands to move and resize the 3D and rather detailed game board. When I turned my hand over, a half dozen cards fanned out before me. I could grab each virtual card and examine it. On the playing field were little game character pieces that I could also pick up and examine. All the while, the processor puck was hanging off my belt.

Unlike the AI Glasses prototypes, Project Aura is still considered an "episodic" device, one you might use on occasion, and usually at home, in the office, or maybe on a flight home.

The original Android XR device, Galaxy XR, is also getting a minor update in this cycle. I tried, for the first time, Galaxy XR's Likenesses, which is Google's version of Personas on Vision Pro. Unlike those, though, you capture your Likeness with your phone. From this, it generates an animated replica of your face and shoulders. I conducted a Google Meet call with a Google rep and could see her eerily realistic-looking Likeness, which appeared to track all her facial movements. It even included her hands. We were able to share a Canva screen and work together. Google told me that even if someone is not wearing a headset like the Galaxy XR, which has cameras tracking your face, the system may eventually be able to use only audio cues to drive a Likeness and create a realistic animated avatar on the other end of the call.

While all of these demos were exciting, we're still at least months away from the commercial availability of Project Aura, and at least that long, if not more, from the Android XR AI display glasses. Google didn't share much about battery life or how close they are with frame partners Warby Parker and Gentle Monster to workable (and relatively affordable) AI Display frames. However, based on what I saw, we're closer than I previously thought. Plus, the ecosystem for the frames and connected devices like the all-important phones, Pixel watches, computers, and app stores appears to be coming together. I think we're just about done with our dalliances with too-expensive episodic-use immersion devices.
The time for AI-powered AR glasses is now here and I, for one, am ready to begin.
Google showcased prototype Android XR smart glasses featuring monocular and binocular displays, powered by Gemini AI. Partners Samsung, Warby Parker, and Gentle Monster will release consumer versions in 2026. The glasses include Google Maps Live View, hand tracking, and seamless Android app integration, positioning Google to compete directly with Meta Ray-Ban and Apple's anticipated AR devices.
Google has unveiled significant progress in its Android XR smart glasses development, demonstrating prototype devices that bring augmented reality wearable technology closer to mainstream adoption. The company showcased both monocular and binocular display versions during The Android Show: XR Edition, with partners Samsung, Warby Parker, and Gentle Monster set to release consumer models in 2026 [1][3]. The AI glasses leverage Gemini AI integration to deliver hands-free assistance, navigation, and real-time translation through lightweight frames that resemble traditional eyewear.
Source: 9to5Google
The monocular display version uses microLED technology developed since Google's 2022 acquisition of Raxium, projecting sharp, vibrant images into the wearer's right eye through waveguide displays embedded in the lens [3]. During demonstrations, the display expanded vertically to show multiple video feeds simultaneously during calls, a capability that impressed hands-on reviewers. Google is distributing these monocular dev kits to developers now, with broader access expanding in coming months [3].

Xreal's Project Aura represents a breakthrough in compact mixed reality hardware, squeezing capabilities from the Samsung Galaxy XR headset into glasses-sized frames. The device features a 70-degree field of view, the widest ever seen in AR glasses, and uses micro OLED displays that deliver exceptional sharpness and color accuracy [1][4]. Connected to a phone-sized processing puck containing Qualcomm's Snapdragon XR2+ Gen 2 chipset, Project Aura supports full hand tracking through three onboard cameras, enabling users to control virtual windows, launch apps, and even play VR games like Demeo using air gestures [1].
Source: CNET
Unlike Meta's Orion prototype, Project Aura will ship as a commercial product next year at a price point expected to undercut Apple Vision Pro and Galaxy XR headsets [1]. The glasses can also function as standard display glasses when plugged directly into laptops and phones via USB-C, maintaining compatibility with Xreal's existing product ecosystem. A new PC Connect app enables wireless connection to Windows computers with hand tracking controls for pointing and clicking, functionality that Apple Vision Pro still lacks with Mac computers [1].

Google's strategic advantage lies in its developer ecosystem approach, which allows existing Android applications to work on Android XR smart glasses without requiring developers to modify code [4]. The platform automatically adapts mobile app interfaces to display as compact widgets suitable for the glasses' limited screen real estate, using the same notification code already present in standard Android apps [2]. This gives Google an immediate library of compatible applications at launch, a significant edge over competitors building ecosystems from scratch.
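None of the coverage shows what that "same notification code" looks like, so here is a minimal Kotlin sketch of the kind of ordinary quick-reply notification Google says Android XR can surface on glasses as a minimalist UI. The channel ID, reply key, and broadcast action are hypothetical placeholders for illustration; the point is that nothing in it is XR-specific.

```kotlin
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.RemoteInput

// Illustrative helper: posts a plain messaging notification with a quick-reply action.
// Per Google's description, Android XR can render this same notification as a compact
// glasses UI without any additional work from the developer.
fun postQuickReplyNotification(context: Context, sender: String, message: String) {
    val remoteInput = RemoteInput.Builder("key_text_reply")   // key name is an assumption
        .setLabel("Reply")
        .build()

    val replyIntent = PendingIntent.getBroadcast(
        context, 0,
        Intent("com.example.ACTION_REPLY"),                    // illustrative action string
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )

    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyIntent
    ).addRemoteInput(remoteInput).build()

    val notification = NotificationCompat.Builder(context, "messages") // channel assumed to exist
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle(sender)
        .setContentText(message)
        .addAction(replyAction)
        .build()

    // Requires the POST_NOTIFICATIONS permission on Android 13 and later.
    NotificationManagerCompat.from(context).notify(1001, notification)
}
```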
Google Maps Live View emerges as a standout feature, providing turn-by-turn directions that appear as a pill-shaped indicator when looking straight ahead, then expanding into a full corner map when the wearer tilts their head downward [3]. The transition animation between these views proved fluid and intuitive during demonstrations. Third-party apps like Uber can leverage this navigation framework, with demos showing airport wayfinding with step-by-step directions and images [3].
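Google hasn't said how the head-tilt switch is implemented. Purely as a sketch of the interaction pattern, the snippet below uses Android's standard rotation-vector sensor to toggle between a compact "pill" state and an expanded map state based on head pitch; the threshold, sign convention, and callback are assumptions, not Google's actual Live View code.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Illustrative only: switches between a compact directions "pill" and an expanded map
// when the wearer pitches their head down past an assumed threshold.
class HeadTiltNavSwitcher(
    context: Context,
    private val onStateChange: (expanded: Boolean) -> Unit
) : SensorEventListener {

    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)
    private var expanded = false

    fun start() {
        sensorManager.registerListener(this, rotationSensor, SensorManager.SENSOR_DELAY_UI)
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val rotation = FloatArray(9)
        val orientation = FloatArray(3)
        SensorManager.getRotationMatrixFromVector(rotation, event.values)
        SensorManager.getOrientation(rotation, orientation)

        val pitchDegrees = Math.toDegrees(orientation[1].toDouble())
        // Assumed threshold and sign: looking down ~25 degrees expands the map view.
        val shouldExpand = pitchDegrees < -25.0
        if (shouldExpand != expanded) {
            expanded = shouldExpand
            onStateChange(expanded)
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```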
The glasses rely heavily on voice commands and Gemini AI to minimize the need for physical controls. Users can ask Gemini to analyze ingredients on a shelf and suggest recipes, with the AI trained to understand human gestures like pointing and picking up objects for better contextual awareness [2]. The system handles interruptions and redirections smoothly, allowing natural conversational interactions. For photography, users can request image enhancements before capturing shots, with Nano Banana Pro generating AI-modified versions that appear directly in the field of view within seconds [5].

Live translation capabilities enable real-time transcription from languages like Chinese to English, displaying translated text in the wearer's view during conversations [4]. The glasses can also share point-of-view video during Google Meet calls, though the current limitation requires showing translations to conversation partners via a phone screen. Google intentionally uses distinct voices for Gemini versus Google Maps to prevent user confusion about which service is responding.
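Google hasn't said what powers the translation pipeline on the glasses. As a hedged sketch of how the text side could be done on a paired Android phone today, here is the on-device ML Kit Translation API converting recognized Chinese text to English; ML Kit is an assumption for illustration, not a confirmed part of Android XR, and the speech-recognition step is omitted.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Illustrative sketch: on-device Chinese-to-English text translation with ML Kit.
// Whether the glasses actually use ML Kit is not confirmed.
val translator = Translation.getClient(
    TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.CHINESE)
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
)

fun translateUtterance(recognizedText: String, showCaption: (String) -> Unit) {
    // Downloads the offline model on first use, then translates each recognized utterance.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(recognizedText)
                .addOnSuccessListener { english -> showCaption(english) }
                .addOnFailureListener { e -> showCaption("Translation failed: ${e.message}") }
        }
        .addOnFailureListener { e -> showCaption("Model download failed: ${e.message}") }
}
```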
The prototype monocular glasses remain only slightly chunkier than traditional eyewear, with displays centered slightly below the wearer's normal sightline to avoid constant eye movement [4]. Waveguide technology projects images that never fully occlude vision, allowing wearers to shift focus between near AR content and distant real-world objects [5]. Physical controls include a touchpad on the right stem for taps and swipes, a camera button on top, and a display toggle button underneath for power management [3].
Source: TechRadar
Binocular prototypes with dual displays offer enhanced capabilities, including native 3D video playback for content like YouTube Shorts and richer Google Maps views with zoom functionality [3][4]. These dual-display models will arrive later than monocular versions, targeting productivity use cases that could eventually replace phones for certain tasks. Google also plans display-free versions with only cameras, microphones, and speakers for users prioritizing battery life and lower costs, similar to Meta Ray-Ban glasses [4].
Google has integrated Android XR with Wear OS to enable novel interactions between smart glasses and smartwatches. When users capture photos with display-free glasses, a notification appears on connected watches allowing a full preview of the higher-resolution image before moving to the next shot [2]. Gesture controls from watches can also operate Android XR functions, creating a cross-device ecosystem that demonstrates Google's "Better Together" strategy in action [3].
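As with the glasses UI above, the watch preview appears to lean on standard notification plumbing: notifications posted on an Android phone are bridged to a paired Wear OS watch by default. The sketch below is an assumed illustration of posting a photo-capture notification with a big-picture preview; the channel, notification ID, and bitmap source are placeholders, and Google hasn't detailed the actual glasses-to-watch path.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Illustrative sketch: an ordinary notification carrying the captured photo. Wear OS
// bridging would let the wearer glance at the shot on their wrist without extra code.
fun notifyPhotoCaptured(context: Context, preview: Bitmap) {
    val notification = NotificationCompat.Builder(context, "captures")  // channel assumed to exist
        .setSmallIcon(android.R.drawable.ic_menu_camera)
        .setContentTitle("Photo captured")
        .setContentText("Tap to review the full-resolution shot")
        .setLargeIcon(preview)
        .setStyle(
            NotificationCompat.BigPictureStyle()
                .bigPicture(preview)
                .bigLargeIcon(null as Bitmap?)   // hide the thumbnail when expanded
        )
        .build()

    NotificationManagerCompat.from(context).notify(2001, notification)
}
```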
The glasses seamlessly switch between Bluetooth and Wi-Fi connections depending on bandwidth requirements, with transitions happening imperceptibly during normal use [2]. This technical achievement, explained by Max Spear, Group Product Manager for XR, enables capabilities like video calling and high-quality media streaming without requiring users to manually manage connection types. Google is releasing Android XR SDK Developer Preview 3 today with emulator support, alongside design guidance through its Glimmer language that incorporates Material Design principles.

Google's multi-product approach positions Android XR to compete across different price points and use cases against Meta Ray-Ban smart glasses and Apple's anticipated AR devices. While Meta currently leads with its Ray-Ban partnership and neural wristband technology for display glasses, Google's advantage lies in requiring no additional accessories beyond the glasses themselves for hand tracking and gesture control [4]. The ability to run thousands of Android apps immediately at launch could prove decisive in attracting both developers and consumers.

Reviewers noted concerns about display washout in bright sunlight, though Google promises final versions will feature brighter screens and transition lenses that darken automatically in direct light [4]. The company's measured approach contrasts sharply with the Google Glass era, when public development backfired. By working with established eyewear brands like Warby Parker and Gentle Monster, Google aims to deliver fashion-forward designs that consumers actually want to wear daily, learning from past missteps while leveraging two decades of AR research.