5 Sources
[1]
Stanford's holographic AI glasses are coming for your clunky VR headset
Over the past couple of years, with the introduction of the Apple Vision Pro and the Meta Quest 3, I've become a believer in the potential of mixed reality. First, and this was a big concern for me, it's possible to use VR headsets without barfing. Second, some of the applications are truly amazing, especially the entertainment. While the ability to watch a movie on a giant screen is awesome, the fully immersive 3D experiences on the Vision Pro are really quite compelling.

In this article, I'm going to show you a technology that has the potential to render VR devices like the Vision Pro and Quest 3 obsolete. But first, I want to recount an experience I had with the Vision Pro that had a bit of a reality-altering effect. Later, when we discuss the Stanford research, you'll see how they might expand on something like what I experienced and take it far beyond the next level.

There's a Vision Pro experience called Wild Life. I watched the rhino episode from early 2024, which told the story of a wildlife refuge in Africa. While watching, I really felt like I could reach out and touch the animals; they were that close. But here's where it gets interesting. Whenever something on TV shows a place I've actually been to in real life, an internal dialog box pops up in my brain that says, "I've been there." Some time after I watched the Vision Pro episode on the rhino refuge, we saw a news story about the place. And wouldn't you know it? My brain said, "I've been there," even though I've never been to Africa.

Something about the VR immersion indexed that episode in my brain as an actual lived experience, not just something I watched. To be clear, I knew at the time it wasn't a real experience. I know now that it wasn't a real-life lived experience. Yet some little bit of internal brain parameterization still indexes it in the lived-experiences table rather than the viewed-experiences table.

But there are a few widely known problems with the Vision Pro. It's way too expensive, but it's not just that. I own one. I purchased it to be able to write about it for you. Even though I have one right here, and movies are insanely awesome on it, I only use it when I have to for work. Why? Because it's also quite uncomfortable. It's like strapping a brick to your face. It's heavy, hot, and so intrusive you can't even take a sip of coffee while using it.

All that brings us to some Stanford research that I first covered last year. A team of scientists led by Gordon Wetzstein, a professor of electrical engineering and director of the Stanford Computational Imaging Lab, has been working on solving both the immersion problem and the comfort problem using holography instead of conventional display technology. Using optical nanostructures called waveguides, augmented by AI, the team managed to construct a prototype device. By controlling the intensity and phase of light, they can manipulate it at the nano level. The challenge is making real-time adjustments to all the nano-light sequences based on the environment.
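My description so far stays qualitative, so it may help to see what "computing a hologram" even involves. Below is a minimal sketch of the classic Gerchberg-Saxton algorithm, a decades-old baseline for finding the phase pattern a spatial light modulator should display to form a target image. To be clear, this is not the Stanford team's method (their pipeline uses learned, camera-calibrated models), and everything here -- the NumPy implementation, the far-field propagation assumption, the toy target -- is my own illustration.

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50, seed=0):
    """Find a phase-only SLM pattern whose far-field diffraction
    approximates a target intensity image (classic Gerchberg-Saxton)."""
    rng = np.random.default_rng(seed)
    target_amplitude = np.sqrt(target_intensity)
    # Start from a random phase guess at the SLM plane.
    phase = rng.uniform(0.0, 2.0 * np.pi, target_intensity.shape)
    for _ in range(iterations):
        # Propagate the SLM field to the image plane (Fraunhofer model).
        image_field = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase, but impose the target amplitude.
        constrained = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and keep only the phase (phase-only SLM constraint).
        phase = np.angle(np.fft.ifft2(constrained))
    return phase

# Toy example: steer the laser light into a bright square.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
slm_phase = gerchberg_saxton(target)
```

Real-time holography means redoing an optimization like this (or a far smarter learned version of it) for every frame and every eye, which is why the AI workload described next is so heavy.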
Getting there took a ton of AI: improving image formation, optimizing wavefront manipulation, handling wildly complex calculations, performing pattern recognition, dealing with the thousands of variables involved in light propagation (phase shifts, interference patterns, diffraction effects, and more), and then correcting for changes dynamically. Add to that real-time processing and optimization done at the super-micro level: managing light for each eye, running machine learning that constantly refines the holographic images, handling the non-linear, high-dimensional data that comes from changing surface dimensionality, and then making it all work with optical data, spatial data, and environmental information. It was a lot. But it was not enough.

The reason I mentioned the rhinos earlier is that the Stanford team has just issued a new research report, published in Nature Photonics, showing how they are trying to exceed the perception of reality possible with screen-based display technology. Back in 1950, computing pioneer Alan Turing proposed what has become known as the Turing Test: if a human can't tell whether a machine at the other end of a conversation is a machine or a human, that machine is said to pass. The Stanford folks are proposing a visual Turing Test, where a mixed reality device passes if you can't tell whether what you're looking at is real or computer-generated.

Putting aside all the nightmares of uber-deep fakes, and my little story here, the Stanford team contends that no matter how high-resolution stereoscopic LED technology gets, it's still flat. The human brain, they say, will always be able to distinguish 3D represented on a flat display from true reality. As real as it might look, there's still an uncanny valley that lets the brain sense distortions. But holography bends light the same way physical objects do. The Stanford scientists contend that they can build holographic displays that produce 3D objects every bit as dimensional as real objects. By doing so, they'll pass their visual Turing Test.

"A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface," says Suyeon Choi, a postdoctoral scholar in Wetzstein's lab and first author of the paper.

I'm not sure about this. Yes, I support the idea that they will be capable of producing eyewear that bends light to replicate reality. But I wear glasses. There's always a periphery outside the edge of my glasses that I can see and sense. Unless they create headsets that block that peripheral vision, they won't be able to truly emulate reality. It's probably doable. The Meta Quest 3 and the Vision Pro both wrap around the eyes. But if Stanford's goal is to make holographic glasses that feel like normal glasses, then peripheral vision could complicate matters.

In any case, let's talk about how far they've come in a year. Let's start by defining the technical term "étendue." According to Dictionnaires Le Robert, and translated into English by The Goog, étendue is the "Property of bodies to be located in space and to occupy part of it."
Optical scientists use it to combine two characteristics of a visual experience: the field of view (how wide an image appears) and the eyebox (the area within which the pupil can move and still see the entire image). A large étendue provides both a wide field of view and enough room for the eye to move naturally while still seeing the generated image.

Since we reported on the project in 2024, the Stanford team has increased the field of view (FOV) from 11 degrees to 34.2 degrees horizontally and 20.2 degrees vertically. This year, the team developed a custom-designed, angle-encoded holographic waveguide. Instead of the surface relief gratings (SRGs) used in 2024, the new prototype's couplers are constructed of volume Bragg gratings (VBGs). VBGs prevent the "world-side light leakage" and visual noise that degraded contrast in previous designs, and they also suppress stray light and ghost images.

Both SRGs and VBGs are used to control how light bends or splits. SRGs work via a tiny pattern etched into the surface of a material; light bounces off that surface. VBGs instead encode periodic variations inside the material and reflect or filter light based on how that internal structure interacts with light waves. In essence, VBGs provide more control over light movement.

Another key element of the newest prototype is the MEMS (micro-electromechanical system) mirror. This mirror is integrated into the illumination module along with a collimated, fiber-coupled laser and the holographic waveguide we discussed above. It is another tool for steering light, in this case controlling the illumination angles incident on the spatial light modulator (SLM). This, in turn, creates what the team calls a "synthetic aperture," which has the benefit of enlarging the eyebox. Recall that the bigger the eyebox, the more a user's eye can move while using the mixed-reality system.

AI continues to play a key role in the dynamic functionality of the display, compensating heavily for real-world conditions and helping to create a seamless blend of real reality and constructed reality. AI optimizes the image quality and three-dimensionality of the holographic images.

Last year, the team did not specify the size of the prototype eyewear, except to say it was smaller than typical VR displays. This year, the team says it has achieved a "total optical stack thickness of less than 3 mm (panel to lens)." By contrast, the lenses on my everyday eyeglasses are about 2 mm thick. "We want this to be compact and lightweight for all-day use, basically. That's problem number one, the biggest problem," Wetzstein said.

The Stanford team describes these reports on its progress as a trilogy. Last year's report was Volume One. This year, we're learning about Volume Two. It's not clear how far away Volume Three, which the team describes as real-world deployment, might be. But given the improvements they've been making, I'm guessing we'll see more progress (and possibly Volumes Four and Five) sooner rather than later.

I'm not entirely sure that blending reality with holographic images to the point where you can't tell the difference is healthy. On the other hand, real reality can be pretty disturbing, so constructing our own bubble of holographic reality might offer a respite (or a new pathology). It's all just so very weird and ever so slightly creepy.
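To put those FOV numbers in rough perspective, étendue can be approximated as the eyebox area multiplied by the solid angle of the field of view. Here's a back-of-envelope sketch using the reported 34.2-by-20.2-degree FOV; note that the eyebox dimensions below are placeholder assumptions of mine, not figures from the paper.

```python
import math

def rect_fov_solid_angle(fov_h_deg, fov_v_deg):
    """Solid angle (steradians) of a rectangular-pyramid field of view."""
    a = math.radians(fov_h_deg) / 2.0
    b = math.radians(fov_v_deg) / 2.0
    return 4.0 * math.asin(math.sin(a) * math.sin(b))

def etendue_mm2_sr(fov_h_deg, fov_v_deg, eyebox_w_mm, eyebox_h_mm):
    """Rough étendue proxy: eyebox area (mm^2) times FOV solid angle (sr)."""
    return eyebox_w_mm * eyebox_h_mm * rect_fov_solid_angle(fov_h_deg, fov_v_deg)

# Reported FOV of 34.2 x 20.2 degrees; the 8 mm x 8 mm eyebox is a guess.
print(etendue_mm2_sr(34.2, 20.2, 8.0, 8.0))  # ~13 mm^2*sr
# Last year's ~11-degree FOV (treated as 11 x 11 here for illustration),
# with the same assumed eyebox, for comparison.
print(etendue_mm2_sr(11.0, 11.0, 8.0, 8.0))  # ~2.4 mm^2*sr
```

Whatever the true eyebox dimensions, the point stands: roughly tripling the FOV in each direction multiplies the étendue several times over, which is exactly the quantity these displays have historically struggled to grow.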
But this is the world we live in. What do you think about the idea of a "visual Turing Test"? Do you believe holographic displays could truly fool the brain into thinking digital imagery is real? Have you tried any of the current-gen mixed reality headsets like the Vision Pro or Quest 3? How immersive did they feel to you? Do you think Stanford's waveguide-based holographic approach could overcome the comfort and realism barriers holding back mainstream XR adoption? Let us know in the comments below.
[2]
A leap toward lighter, sleeker mixed reality displays
Using 3D holograms polished by artificial intelligence, researchers introduce a lean, eyeglass-like 3D headset that they say is a significant step toward passing the "Visual Turing Test."

"In the future, most virtual reality displays will be holographic," said Gordon Wetzstein, a professor of electrical engineering at Stanford University, holding his lab's latest project: a virtual reality display that is not much larger than a pair of regular eyeglasses. "Holography offers capabilities that we can't get with any other type of display in a package that is much smaller than anything on the market today."

Holography is a Nobel Prize-winning 3D display technique that uses both the intensity of light reflecting from an object, as with a traditional photograph, and the phase of the light (the way the waves synchronize) to produce a hologram, a highly realistic three-dimensional image of the original object. Wetzstein's latest holographic display, detailed in a new paper in Nature Photonics, moves the field toward a new age of lightweight, immersive, and perceptually realistic mixed reality glasses - glasses that project lifelike three-dimensional moving images onto the wearer's real-world view. From lens to screen, the display is just 3 millimeters thick. Such a tool could transform education, entertainment, virtual travel, communication, and other fields, the researchers said.

Holograms, Wetzstein said, provide a more visually satisfying, more realistic 3D experience than current approaches based on stereoscopic LED technology. And they come in a form that looks nothing like the bulky VR headsets of today. But, he acknowledges, it's not easy to achieve. Wetzstein and others in the field refer to it as "mixed reality" to convey the full impact of the display's seamless melding of holographic imagery and views of the real world. One day, Wetzstein predicts, digital images and real-world scenes will be indistinguishable. In the meantime, this prototype is a "significant step" in that direction.

"Researchers in the field sometimes describe our goal as to pass the 'Visual Turing Test,'" said Suyeon Choi, a postdoctoral scholar in Wetzstein's lab and first author of the paper, in reference to the AI standard named for the famed British polymath and computer scientist Alan Turing. In AI, the Turing Test holds that machines can only be declared truly "intelligent" when one cannot distinguish whether one is chatting with a machine or a human being. "A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface," Choi said.

The group's latest headset design achieves breakthroughs in image realism and usability by integrating a custom waveguide that steers the image to the viewer's eye. The holographic image is enhanced by a new AI-calibration method that optimizes image quality and three-dimensionality. The result is a display with both a large field of view and a large "eyebox," defined as the area in which the pupil can move and still see the entire image. This combination of large field of view and large eyebox - known in Wetzstein's world as the "étendue" - is highly coveted. The effect is a crisp 3D image that fills the user's field of view for a more satisfying and immersive 3D experience. The leanness of the packaging cannot be overstated, Wetzstein said.
The eyewear could be worn for hours at a time without the neck and eye fatigue that are a challenge with today's wearable displays. "We want this to be compact and lightweight for all-day use, basically. That's problem number one - the biggest problem," Wetzstein said.

The other two challenges are realism and immersiveness. AI helps solve the first by improving the image resolution and three-dimensional qualities of the hologram. The third challenge, immersiveness, is achieved by the device's impressive eyebox and large field of view. The experience is like having a bigger, more realistic screen in your home theater, Wetzstein said. "The eye can move all about the image without losing focus or image quality," he added, noting that this is "key to the realism and immersion of the system."

This latest research is the second installment in a scientific trilogy. Last year, in volume one, Wetzstein's lab introduced the holographic waveguide that enables the high image quality in the lean form factor. Now, in volume two, they have built a working prototype to bring the finer details of the engineering to life. Volume three could still be years off, Wetzstein admits, but that ultimate piece will come in the form of a commercial product that transforms how the world thinks of virtual reality - or extended reality, as the case may be.

"The world has never seen a display like this with a large field of view, a large eyebox, and such image quality in a holographic display," Wetzstein said. "It's the best 3D display created so far and a great step forward - but there are lots of open challenges yet to solve."
[3]
New 3D headset uses holograms and AI to create lifelike mixed reality visuals
Using 3D holograms polished by artificial intelligence, researchers introduce a lean, eyeglass-like 3D headset that they say is a significant step toward passing the "Visual Turing Test."

"In the future, most virtual reality displays will be holographic," said Gordon Wetzstein, a professor of electrical engineering at Stanford University, holding his lab's latest project: a virtual reality display that is not much larger than a pair of regular eyeglasses. "Holography offers capabilities that we can't get with any other type of display in a package that is much smaller than anything on the market today."

Holography is a Nobel Prize-winning 3D display technique that uses both the intensity of light reflecting from an object, as with a traditional photograph, and the phase of the light (the way the waves synchronize) to produce a hologram, a highly realistic three-dimensional image of the original object. Wetzstein's latest holographic display, detailed in a new paper published in Nature Photonics, moves the field toward a new age of lightweight, immersive, and perceptually realistic mixed reality glasses -- glasses that project lifelike three-dimensional moving images onto the wearer's real-world view. From lens to screen, the display is just 3 millimeters thick. Such a tool could transform education, entertainment, virtual travel, communication, and other fields, the researchers said.

Extending reality

Holograms, Wetzstein said, provide a more visually satisfying, more realistic 3D experience than current approaches based on stereoscopic LED technology. And they come in a form that looks nothing like the bulky VR headsets of today. But, he acknowledges, it's not easy to achieve. Wetzstein and others in the field refer to it as "mixed reality" to convey the full impact of the display's seamless melding of holographic imagery and views of the real world. One day, Wetzstein predicts, digital images and real-world scenes will be indistinguishable. In the meantime, this prototype is a "significant step" in that direction.

"Researchers in the field sometimes describe our goal as to pass the 'Visual Turing Test,'" said Suyeon Choi, a postdoctoral scholar in Wetzstein's lab and first author of the paper, in reference to the AI standard named for the famed British polymath and computer scientist Alan Turing. In AI, the Turing Test holds that machines can only be declared truly "intelligent" when one cannot distinguish whether one is chatting with a machine or a human being. "A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface," Choi said.

Thinking outside the eyebox

The group's latest headset design achieves breakthroughs in image realism and usability by integrating a custom waveguide that steers the image to the viewer's eye. The holographic image is enhanced by a new AI-calibration method that optimizes image quality and three-dimensionality. The result is a display with both a large field of view and a large "eyebox," defined as the area in which the pupil can move and still see the entire image. This combination of large field of view and large eyebox -- known in Wetzstein's world as the "étendue" -- is highly coveted. The effect is a crisp 3D image that fills the user's field of view for a more satisfying and immersive 3D experience. The leanness of the packaging cannot be overstated, Wetzstein said.
The eyewear could be worn for hours at a time without the neck and eye fatigue that are a challenge with today's wearable displays. "We want this to be compact and lightweight for all-day use, basically. That's problem number one -- the biggest problem," Wetzstein said.

The other two challenges are realism and immersiveness. AI helps solve the first by improving the image resolution and three-dimensional qualities of the hologram. The third challenge, immersiveness, is achieved by the device's impressive eyebox and large field of view. The experience is like having a bigger, more realistic screen in your home theater, Wetzstein said. "The eye can move all about the image without losing focus or image quality," he added, noting that this is "key to the realism and immersion of the system."

This latest research is the second installment in a scientific trilogy. Last year, in volume one, Wetzstein's lab introduced the holographic waveguide that enables high image quality in the lean form factor. Now, in volume two, they have built a working prototype to bring the finer details of the engineering to life. Volume three could still be years off, Wetzstein admits, but that ultimate piece will come in the form of a commercial product that transforms how the world thinks of virtual reality -- or extended reality, as the case may be.

"The world has never seen a display like this with a large field of view, a large eyebox, and such image quality in a holographic display," Wetzstein said. "It's the best 3D display created so far and a great step forward -- but there are lots of open challenges yet to solve."
[4]
Meta just revealed a new XR display prototype -- and it may be the biggest leap in smart glasses since Google Glass
Smart glasses are evolving at a rapid pace -- incredible display tech coming to AR specs, and AI breakthroughs making the likes of the Oakley Meta HSTNs incredibly intelligent. But we all know the end goal is a pair of glasses that brings these two worlds together. And researchers at Meta Reality Labs and Stanford University may have just given us the clearest glimpse yet of this future.

Yes, I know the researcher pictured above kind of looks like a Cyberpunk pirate, but that eyepatch is a prototype holographic XR display -- creating full 3D holograms on a screen thin enough to be used in a standard-sized pair of glasses. This is the holy grail that companies have been chasing, and Meta just took one giant step closer to it.

So let's get into the details of this mixed reality tech. Basically, it's a combination of custom glass and silicon along with AI-driven algorithms to render "perceptually realistic 3D images," as the researchers say in their paper. To make this happen, there's a custom ultra-thin waveguide display driving the visuals -- using a laser to project onto a uniquely textured part of the lens glass for picture clarity. After this, the light passes through a polarizer so we can see it, and a custom-designed spatial light modulator that will (you guessed it) modulate light -- the special thing being that it does so on an individual pixel-by-pixel basis to render a "full-resolution holographic" image we can see.

That's right. Holograms. Unlike standard VR headsets and AR glasses that use eye-confusion techniques to simulate depth, this system can produce true holograms by reconstructing the image entirely through light. There's both a wide field of view and a large eyebox to accommodate all possible pupil positions for accessibility.

"The world has never seen a display like this with a large field of view, a large eyebox, and such image quality in a holographic display," Gordon Wetzstein, Stanford electrical engineering professor, told the Stanford Report. "It's the best 3D display created so far and a great step forward - but there are lots of open challenges yet to solve."

And even better? All of this is crammed into a panel just 3mm thin! That is significant for the future of stuffing displays into glasses without needing birdbath optics.

One challenge to note, though, is that this is mixed reality, not augmented reality. The wording is critical here: the mixed reality label means the optics are not transparent. That said, with this being the second of three stages on the road to a commercial product, there's no doubt that what we're looking at is a breakthrough.

Don't expect this tech to come to glasses you can buy for another few years, but if this mixed reality display can become augmented, that's the dream I've been having over the past four years of testing and writing about smart glasses! My apologies to the phones team, but the clock has just started ticking on a new frontier of personal devices.
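That distinction between simulated and reconstructed depth is worth making concrete. A phase-only spatial light modulator can, for instance, display a quadratic "lens" phase profile that places a virtual image at a genuine optical focal distance -- something a flat stereoscopic panel cannot do. Here is a minimal sketch of that idea; the pixel pitch, wavelength, and focal distance are hypothetical values chosen for illustration, not specs from the paper.

```python
import numpy as np

def lens_phase(shape, pitch_um, wavelength_nm, focus_m):
    """Phase-only SLM pattern acting as a thin lens: places a virtual
    image at a real optical focal distance, not a stereoscopic one."""
    ny, nx = shape
    pitch = pitch_um * 1e-6      # pixel pitch in meters
    lam = wavelength_nm * 1e-9   # wavelength in meters
    y = (np.arange(ny) - ny / 2) * pitch
    x = (np.arange(nx) - nx / 2) * pitch
    xx, yy = np.meshgrid(x, y)
    # Quadratic thin-lens phase profile, wrapped into [0, 2*pi).
    phase = -np.pi * (xx**2 + yy**2) / (lam * focus_m)
    return np.mod(phase, 2 * np.pi)

# Hypothetical parameters: 8 um pixel pitch, 520 nm laser, virtual
# object focused 0.5 m away. None of these come from the paper.
pattern = lens_phase((1080, 1920), 8.0, 520.0, 0.5)
```

Because the focus is optically real, the eye's accommodation agrees with its vergence -- exactly the depth-cue conflict that stereoscopic displays cannot resolve.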
[5]
Meta's mixed reality glasses make my Meta Quest 3 look like a boulder
Meta's newly published research gives us a glimpse at its future XR plans, and seemingly supports rumors that its headsets may turn into slim goggles.

That's because Meta's Reality Labs, alongside Stanford University, published a paper in Nature Photonics showcasing a prototype that uses holography and AI to create a super-slim mixed reality headset design. The optical stack is just 3mm thick, and unlike other mixed reality headsets we're used to - like the Meta Quest 3 - this design doesn't layer stereoscopic images to create a sense of depth. Instead, it produces holograms that should look more realistic and be more natural to view. That means it's not only thin but high-quality too - an important combination.

Now, there's still more work to be done. The prototype shown in the image above doesn't look close to being a consumer-grade product that's ready to hit store shelves. What's more, it doesn't yet seem to pass what's called the Visual Turing Test - the point at which it's impossible to distinguish between a hologram and a real-world object - though that goal looks to be what Reality Labs and Stanford hope to eventually achieve.

Even with this technology still likely years (perhaps even a decade) from making it into a gadget you or I could go out and buy, the prototype's design does showcase Meta's desire to produce ultra-thin MR tech. It lends credence to rumors that Meta's next VR headset could be a pair of lightweight goggles about a fifth as heavy as the 515g Meta Quest 3. Given these rumored goggles are believed to be coming in the next few years, they'll likely avoid the experimental holography tech found in Meta and Stanford's report. But if Meta were looking to trim weight and slim down the design further in future iterations, the research it's conducting now would be a vital first step.

I, for one, am increasingly excited to see what XR tech Meta is cooking up. Its Ray-Ban, and now Oakley, glasses have showcased the wild popularity XR wearables can achieve if they find the sweet spot of comfort, utility, and price, with that first factor looking to be the most vital. Meta's other recent research into VR on the software side also highlights that a lighter headset would remove friction in keeping people immersed for hours on end. That could lead to more meaningful productivity applications, but also more immersive and expansive gaming experiences, and other use cases I'm excited to see and try when the time is right.

For now, I'm content with my Meta Quest 3, but I can't deny it now looks a little like a boulder next to this 3mm-thick prototype design.
Stanford researchers develop a breakthrough in mixed reality technology with holographic AI glasses, potentially revolutionizing the future of VR and AR devices.
Researchers at Stanford University have made a significant leap in mixed reality technology, developing holographic AI glasses that could revolutionize the virtual and augmented reality landscape. Led by Gordon Wetzstein, a professor of electrical engineering and director of the Stanford Computational Imaging Lab, the team has created a prototype that addresses major challenges in current VR/AR devices [1][2].
The new device utilizes holography, a Nobel Prize-winning 3D display technique, to create highly realistic three-dimensional images. Unlike current VR headsets that use stereoscopic LED technology, holography manipulates both the intensity and phase of light to produce more visually satisfying and realistic 3D experiences [2][3].
One of the most striking features of the prototype is its incredibly slim profile. The display is just 3 millimeters thick from lens to screen, a dramatic reduction compared to bulky VR headsets like the Apple Vision Pro or Meta Quest 3 [1][2]. This lean design could potentially allow for all-day wear without the discomfort associated with current devices.
The holographic image is further enhanced by a new AI-calibration method that optimizes image quality and three-dimensionality. This combination of advanced optics and artificial intelligence pushes the boundaries of what's possible in mixed reality displays [2][4].
The research team's ultimate goal is to pass what they call the "Visual Turing Test." This benchmark would be achieved when users cannot distinguish between a physical, real object seen through the glasses and a digitally created image projected on the display surface [2][3]. While the current prototype is a significant step forward, achieving this level of realism remains a challenge for future development.
The headset design incorporates several key innovations:

- A custom angle-encoded holographic waveguide whose couplers use volume Bragg gratings to suppress stray light, ghost images, and world-side light leakage [1]
- A MEMS mirror that steers the laser illumination angles incident on the spatial light modulator, creating a "synthetic aperture" that enlarges the eyebox [1]
- An AI-calibration method that optimizes image quality and three-dimensionality [2][3]
- A total optical stack thickness of less than 3 mm, from panel to lens [1][2]
If successfully developed into a commercial product, this technology could transform various fields, including education, entertainment, virtual travel, and communication [2]. However, the researchers acknowledge that there are still many challenges to overcome before this technology can be brought to market [3][5].
Meta Reality Labs' collaboration with Stanford on this project suggests that future Meta VR products might incorporate similar ultra-thin designs. This aligns with rumors of Meta developing lightweight XR goggles that could be significantly lighter than current headsets [5].
While the road to commercially viable holographic mixed reality glasses may be long, this research represents a significant milestone in the quest for more immersive, comfortable, and realistic virtual experiences.