On Fri, 15 Nov, 8:03 AM UTC
4 Sources
[1]
AI headphones put listeners in a 'sound bubble'
A new headphone prototype allows listeners to create a "sound bubble."

Imagine this: You're at an office job, wearing noise-canceling headphones to dampen the ambient chatter. A coworker arrives at your desk and asks a question, but rather than needing to remove the headphones and say, "What?", you hear the question clearly. Meanwhile the water-cooler chat across the room remains muted. Or imagine being in a busy restaurant and hearing everyone at your table, but reducing the other speakers and noise in the restaurant.

The new artificial intelligence algorithms combined with a headphone prototype allow the wearer to hear people speaking within a bubble with a programmable radius of 3 to 6 feet. Voices and sounds outside the bubble are quieted by an average of 49 decibels (approximately the difference between a vacuum and rustling leaves), even if the distant sounds are louder than those inside the bubble.

The code for the proof-of-concept device is available for others to build on. The researchers are creating a startup to commercialize this technology.

"Humans aren't great at perceiving distances through sound, particularly when there are multiple sound sources around them," says senior author Shyam Gollakota, a University of Washington professor in the Paul G. Allen School of Computer Science & Engineering. "Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far. Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself."

The researchers created the prototype with commercially available noise-canceling headphones. They affixed six small microphones across the headband. The team's neural network -- running on a small onboard embedded computer attached to the headphones -- tracks when different sounds reach each microphone. The system then suppresses the sounds coming from outside the bubble, while playing back and slightly amplifying the sounds inside the bubble (because noise-canceling headphones physically let some sound through).

"We'd worked on a previous smart-speaker system where we spread the microphones across a table because we thought we needed significant distances between microphones to extract distance information about sounds," Gollakota says. "But then we started questioning our assumption. Do we need a big separation to create this 'sound bubble'? What we showed here is that we don't. We were able to do it with just the microphones on the headphones, and in real-time, which was quite surprising."

To train the system to create sound bubbles in different environments, researchers needed a distance-based sound dataset collected in the real world, which was not available. To gather such a dataset, they put the headphones on a mannequin head. A robotic platform rotated the head while a moving speaker played noises coming from different distances. The team collected data with the mannequin system as well as with human users in 22 different indoor environments, including offices and living spaces.

The researchers have determined that the system works for a couple of reasons. First, the wearer's head reflects sounds, which helps the neural net distinguish sounds from various distances. Second, sounds (like human speech) have multiple frequencies, each of which goes through different phases as it travels from its source. The team's AI algorithm, the researchers believe, compares the phases of each of these frequencies to determine the distance of any sound source (a person talking, for instance).

Headphones like Apple's AirPods Pro 2 can amplify the voice of the person in front of the wearer while reducing some background noise. But these features work by tracking head position and amplifying the sound coming from a specific direction, rather than gauging distance. This means the headphones can't amplify multiple speakers at once, lose functionality if the wearer turns their head away from the target speaker, and aren't as effective at reducing loud sounds from the speaker's direction.

The system has been trained to work only indoors, because getting clean training audio is more difficult outdoors. Next, the team is working to make the technology function on hearing aids and noise-canceling earbuds, which requires a new strategy for positioning the microphones.

The research appears in Nature Electronics. Additional coauthors are from the University of Washington, Microsoft, and AssemblyAI. Funding for the research came from a Moore Inventor Fellow award, a UW CoMotion Innovation Gap Fund, and the National Science Foundation.
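To make the real-time constraint concrete: an 8 millisecond budget means every short frame of six-channel microphone audio has to be captured, passed through the neural network, and handed back to the headphone drivers before the next frame arrives. The sketch below illustrates that kind of frame-by-frame loop; the sound_bubble_net function, frame size, and simulated capture are placeholders for illustration, not the researchers' released code.

```python
import time
import numpy as np

SAMPLE_RATE = 16_000                         # assumed sample rate
FRAME_MS = 8                                 # the stated real-time budget
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000   # 128 samples per 8 ms frame
NUM_MICS = 6                                 # microphones on the headband

def sound_bubble_net(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the neural network: takes a (NUM_MICS, FRAME_LEN) frame
    and returns a single channel containing only the in-bubble sound."""
    # Stand-in: average the channels. The real model separates sounds by
    # learned distance cues rather than simply mixing them.
    return frame.mean(axis=0)

def capture_frame() -> np.ndarray:
    """Stand-in for reading one frame from the six headband microphones."""
    return np.random.randn(NUM_MICS, FRAME_LEN).astype(np.float32)

for _ in range(5):                           # a few iterations for demonstration
    start = time.perf_counter()
    mics = capture_frame()
    in_bubble = sound_bubble_net(mics)       # must finish within the 8 ms budget
    # ...play `in_bubble` back through the headphone drivers here...
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"frame processed in {elapsed_ms:.3f} ms (budget {FRAME_MS} ms)")
```

On real hardware the capture and playback steps would come from the device's audio driver; the point of the loop is simply that all per-frame work has to fit inside the 8 ms window for the output to keep up with the input.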
[2]
AI headphones create a 'sound bubble,' quieting all sounds more than a few feet away
Imagine this: You're at an office job, wearing noise-canceling headphones to dampen the ambient chatter. A co-worker arrives at your desk and asks a question, but rather than needing to remove the headphones and say, "What?," you hear the question clearly. Meanwhile the water-cooler chat across the room remains muted. Or imagine being in a busy restaurant and hearing everyone at your table, but reducing the other speakers and noise in the restaurant.

A team led by researchers at the University of Washington has created a headphone prototype that allows listeners to create just such a "sound bubble." The team's artificial intelligence algorithms combined with a headphone prototype allow the wearer to hear people speaking within a bubble with a programmable radius of 3 to 6 feet. Voices and sounds outside the bubble are quieted by an average of 49 decibels (approximately the difference between a vacuum and rustling leaves), even if the distant sounds are louder than those inside the bubble.

The team published its findings Nov. 14 in Nature Electronics. The code for the proof-of-concept device is available for others to build on. The researchers are creating a startup to commercialize this technology.

"Humans aren't great at perceiving distances through sound, particularly when there are multiple sound sources around them," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far. Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself."

Researchers created the prototype with commercially available noise-canceling headphones. They affixed six small microphones across the headband. The team's neural network -- running on a small onboard embedded computer attached to the headphones -- tracks when different sounds reach each microphone. The system then suppresses the sounds coming from outside the bubble, while playing back and slightly amplifying the sounds inside the bubble (because noise-canceling headphones physically let some sound through).

"We'd worked on a previous smart-speaker system where we spread the microphones across a table because we thought we needed significant distances between microphones to extract distance information about sounds," Gollakota said. "But then we started questioning our assumption. Do we need a big separation to create this 'sound bubble'? What we showed here is that we don't. We were able to do it with just the microphones on the headphones, and in real-time, which was quite surprising."

To train the system to create sound bubbles in different environments, researchers needed a distance-based sound dataset collected in the real world, which was not available. To gather such a dataset, they put the headphones on a mannequin head. A robotic platform rotated the head while a moving speaker played noises coming from different distances. The team collected data with the mannequin system as well as with human users in 22 different indoor environments, including offices and living spaces.

Researchers have determined that the system works for a couple of reasons. First, the wearer's head reflects sounds, which helps the neural net distinguish sounds from various distances. Second, sounds (like human speech) have multiple frequencies, each of which goes through different phases as it travels from its source. The team's AI algorithm, the researchers believe, compares the phases of each of these frequencies to determine the distance of any sound source (a person talking, for instance).

Headphones like Apple's AirPods Pro 2 can amplify the voice of the person in front of the wearer while reducing some background noise. But these features work by tracking head position and amplifying the sound coming from a specific direction, rather than gauging distance. This means the headphones can't amplify multiple speakers at once, lose functionality if the wearer turns their head away from the target speaker, and aren't as effective at reducing loud sounds from the speaker's direction.

The system has been trained to work only indoors, because getting clean training audio is more difficult outdoors. Next, the team is working to make the technology function on hearing aids and noise-canceling earbuds, which requires a new strategy for positioning the microphones.

Additional co-authors are Malek Itani and Tuochao Chen, UW doctoral students in the Allen School; Sefik Emre Eskimez, a senior researcher at Microsoft; and Takuya Yoshioka, director of research at AssemblyAI. This research was funded by a Moore Inventor Fellow award, a UW CoMotion Innovation Gap Fund and the National Science Foundation.
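The phase cue described above can be illustrated with a toy calculation: when a sound reaches two microphones at slightly different times, each frequency component shows a different phase offset between the two channels, and those offsets depend on the path the sound traveled. The snippet below simulates two microphone signals with a small delay and reads off the per-frequency phase differences; it is a simplified illustration of the kind of cue a learned model could draw on, not the team's algorithm.

```python
import numpy as np

fs = 16_000                     # sample rate (assumed for the toy example)
t = np.arange(fs) / fs          # one second of audio
delay_s = 0.0003                # 0.3 ms longer travel time to the second mic (~10 cm)

# A crude stand-in for speech: a handful of tones at different frequencies.
freqs = [200, 400, 800, 1600]
mic1 = sum(np.sin(2 * np.pi * f * t) for f in freqs)
mic2 = sum(np.sin(2 * np.pi * f * (t - delay_s)) for f in freqs)

# Compare the phase of each frequency bin across the two microphones.
spec1, spec2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
bin_hz = np.fft.rfftfreq(len(t), 1 / fs)

for f in freqs:
    k = np.argmin(np.abs(bin_hz - f))
    # Wrapped phase difference; roughly -2*pi*f*delay for each tone.
    phase_diff = np.angle(spec2[k] * np.conj(spec1[k]))
    print(f"{f:5d} Hz: inter-mic phase difference = {phase_diff:+.3f} rad")
```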
[3]
AI headphones create a 'sound bubble,' quieting all sounds more than a few feet away
Imagine this: You're at an office job, wearing noise-canceling headphones to dampen the ambient chatter. A co-worker arrives at your desk and asks a question, but rather than needing to remove the headphones and say, "What?", you hear the question clearly. Meanwhile, the water-cooler chat across the room remains muted. Or imagine being in a busy restaurant and hearing everyone at your table, but reducing the other speakers and noise in the restaurant.

A team led by researchers at the University of Washington has created a headphone prototype that allows listeners to create just such a "sound bubble." The team's artificial intelligence algorithms combined with a headphone prototype allow the wearer to hear people speaking within a bubble with a programmable radius of 3 to 6 feet. Voices and sounds outside the bubble are quieted by an average of 49 decibels (approximately the difference between a vacuum and rustling leaves), even if the distant sounds are louder than those inside the bubble.

The team published its findings in Nature Electronics. The code for the proof-of-concept device is available for others to build on. The researchers are creating a startup to commercialize this technology.

"Humans aren't great at perceiving distances through sound, particularly when there are multiple sound sources around them," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far. Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself."

Researchers created the prototype with commercially available noise-canceling headphones. They affixed six small microphones across the headband. The team's neural network -- running on a small onboard embedded computer attached to the headphones -- tracks when different sounds reach each microphone. The system then suppresses the sounds coming from outside the bubble, while playing back and slightly amplifying the sounds inside the bubble (because noise-canceling headphones physically let some sound through).

"We'd worked on a previous smart-speaker system where we spread the microphones across a table because we thought we needed significant distances between microphones to extract distance information about sounds," Gollakota said. "But then we started questioning our assumption. Do we need a big separation to create this 'sound bubble'? What we showed here is that we don't. We were able to do it with just the microphones on the headphones, and in real-time, which was quite surprising."

To train the system to create sound bubbles in different environments, researchers needed a distance-based sound dataset collected in the real world, which was not available. To gather such a dataset, they put the headphones on a mannequin head. A robotic platform rotated the head while a moving speaker played noises coming from different distances. The team collected data with the mannequin system as well as with human users in 22 different indoor environments, including offices and living spaces.

Researchers have determined that the system works for a couple of reasons. First, the wearer's head reflects sounds, which helps the neural net distinguish sounds from various distances. Second, sounds (like human speech) have multiple frequencies, each of which goes through different phases as it travels from its source. The team's AI algorithm, the researchers believe, compares the phases of each of these frequencies to determine the distance of any sound source (a person talking, for instance).

Headphones like Apple's AirPods Pro 2 can amplify the voice of the person in front of the wearer while reducing some background noise. But these features work by tracking head position and amplifying the sound coming from a specific direction, rather than gauging distance. This means the headphones can't amplify multiple speakers at once, lose functionality if the wearer turns their head away from the target speaker, and aren't as effective at reducing loud sounds from the speaker's direction.

The system has been trained to work only indoors, because getting clean training audio is more difficult outdoors. Next, the team is working to make the technology function on hearing aids and noise-canceling earbuds, which requires a new strategy for positioning the microphones.
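For a sense of scale, the 49 decibel figure quoted across these reports corresponds to a very large reduction: decibels are logarithmic, so 49 dB of suppression leaves out-of-bubble sound at well under one percent of its original amplitude. The arithmetic is a standard conversion, shown below.

```python
suppression_db = 49.0   # average reduction reported for sounds outside the bubble

# Amplitude: dB = 20 * log10(ratio)  ->  ratio = 10 ** (-dB / 20)
amplitude_ratio = 10 ** (-suppression_db / 20)
# Power:     dB = 10 * log10(ratio)  ->  ratio = 10 ** (-dB / 10)
power_ratio = 10 ** (-suppression_db / 10)

print(f"49 dB of suppression leaves {amplitude_ratio:.4f} of the original amplitude "
      f"(about 1/{1 / amplitude_ratio:.0f}) and {power_ratio:.1e} of the original power")
```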
[4]
'Sound bubble' headphones tune out noise more than a few feet away
In a restaurant or at a party, background noise can make it hard to hear people talking, even up close. But soon we could be wearing headphones that use AI to filter out noise that's more than a few feet away, creating a "sound bubble" that lets you focus on your own conversation.

Developed by engineers at the University of Washington, the device is essentially a pair of noise-canceling headphones, equipped with six extra microphones along the headband. A small onboard computer runs a neural network trained to analyze the distance of different sound sources, filtering out noise coming from farther away and amplifying sounds closer to the user.

The end result is a kind of sound bubble, as the team describes it, which can be customized with a radius of 1 to 2 m (3.3 to 6.6 ft). The idea is you can clearly hear people talking from within that bubble, while noises outside of it are suppressed by an average of 49 decibels. If someone else enters the bubble, they can join the conversation too.

"Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far," said Shyam Gollakota, senior author of the study. "Our AI system can actually learn the distance for each sound source in a room, and process this in real time, within 8 milliseconds, on the hearing device itself."

The neural network was originally trained on data gathered in 22 different indoor environments, such as offices and living spaces. The headphones were placed on a mannequin head and rotated while noises were played from different distances. The algorithm seems to be comparing the different phases of each frequency of sound to determine how far away the source is, and block sounds accordingly.

This is just the latest version of the tech, which the team has been developing for a while now. An iteration from last year used a swarm of small robots that would move around a room on their own, taking measurements to create separate audio streams for different sources, allowing the user to mute certain areas on demand. Just a few months ago, the researchers demonstrated a version that could single out the voice of one person just by looking at them.

This sound bubble version could end up being the most practical iteration of the tech, allowing you to have a clear conversation in a bar with people at your table. It could work even better if it can be integrated into smaller equipment like hearing aids or earbuds, and thankfully the team is already working on that, as well as founding a startup to commercialize the tech.

The research was published in the journal Nature Electronics.
Researchers at the University of Washington have developed AI-powered headphones that create a customizable 'sound bubble', allowing users to hear nearby conversations clearly while significantly reducing background noise.
Researchers at the University of Washington have developed a groundbreaking AI-powered headphone prototype that creates a customizable 'sound bubble' around the wearer. This innovative technology allows users to hear nearby conversations clearly while significantly reducing background noise [1].
The prototype uses commercially available noise-canceling headphones fitted with six small microphones across the headband. A neural network, running on a small onboard embedded computer, processes the audio input in real time within 8 milliseconds [2]. The system:
- tracks when different sounds reach each of the six microphones
- suppresses voices and noise coming from outside the bubble
- plays back, and slightly amplifies, the sounds from inside the bubble, since noise-canceling headphones physically let some sound through (see the sketch below)
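In the sketch that follows, sources beyond the bubble radius are attenuated by roughly the reported 49 dB while sources inside it are passed through with a slight boost. The data structure, gain values, and 1.5 m radius are illustrative assumptions, not the published system.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SourceEstimate:
    """One separated sound source and the distance assigned to it (illustrative)."""
    audio: np.ndarray      # mono signal attributed to this source
    distance_m: float      # estimated distance from the wearer

def apply_bubble(sources, radius_m=1.5, boost=1.2, suppression_db=49.0):
    """Mix the sources, slightly boosting those inside the bubble and
    strongly attenuating those outside it."""
    attenuation = 10 ** (-suppression_db / 20)    # ~49 dB quieter
    out = np.zeros_like(sources[0].audio)
    for s in sources:
        gain = boost if s.distance_m <= radius_m else attenuation
        out += gain * s.audio
    return out

# Example: one voice inside a 1.5 m bubble, one voice well outside it.
t = np.linspace(0, 1, 16_000, endpoint=False)
near = SourceEstimate(np.sin(2 * np.pi * 220 * t), distance_m=0.8)
far = SourceEstimate(np.sin(2 * np.pi * 330 * t), distance_m=4.0)
mixed = apply_bubble([near, far], radius_m=1.5)
```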
The AI algorithm can determine the distance of sound sources by comparing the phases of different frequencies in the audio [3].
To train the AI system, researchers created a unique dataset using a mannequin head equipped with the prototype headphones. A robotic platform rotated the head while a moving speaker played sounds from various distances. Data was collected in 22 different indoor environments, including offices and living spaces [2].
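Each recording in such a dataset pairs the multi-microphone audio with the known distance of the speaker that produced it, plus context like the head's orientation and the room. The layout below is purely hypothetical, meant only to convey what a distance-labeled training example might contain; the team's released data may be organized differently.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingExample:
    """Hypothetical layout for one distance-labeled recording."""
    mic_audio: np.ndarray       # shape (6, num_samples): the six headband microphones
    source_distance_m: float    # known distance from the moving loudspeaker
    head_rotation_deg: float    # orientation set by the robotic platform
    environment: str            # one of the 22 indoor rooms, e.g. "office_03"

# A training target could then be whether the source falls inside a chosen
# bubble radius, or the clean audio of only the in-bubble sources.
example = TrainingExample(
    mic_audio=np.zeros((6, 16_000), dtype=np.float32),
    source_distance_m=2.4,
    head_rotation_deg=45.0,
    environment="office_03",
)
print(example.source_distance_m <= 1.5)   # e.g. a label for a 1.5 m bubble
```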
This technology could revolutionize how we manage noise in various settings:
- open-plan offices, where a question from a colleague at your desk comes through clearly while chatter across the room stays muted
- busy restaurants and bars, where everyone at your table remains audible while other speakers and noise in the room are suppressed
Currently, the system is trained to work only indoors due to the challenges of obtaining clean training audio outdoors. The research team is now working on adapting the technology for hearing aids and noise-canceling earbuds, which will require new strategies for microphone placement [4].
The researchers have made the code for their proof-of-concept device publicly available for further development. Additionally, they are in the process of creating a startup to commercialize this technology, potentially bringing it to market in the near future [1].