3 Sources
[1]
AI glasses could be an improvement for those with hearing loss
Artificial intelligence-powered glasses developed by a University of Stirling researcher could dramatically improve how people with hearing loss experience sound. The technology aims to help by filtering out background noise in real time, even in loud environments, through the use of AI-powered smart glasses.

The device uses a small camera built into the glasses to track the speaker's lip movements, while a smartphone app uses 5G to send both audio and visual data to a powerful cloud server. There, artificial intelligence isolates the speaker's voice from surrounding noise and sends the cleaned-up sound back to the listener's hearing aid or headphones almost instantly.

This approach, known as audio-visual speech enhancement, takes advantage of the close link between lip movements and speech. While some noise-canceling technologies already exist, they struggle with overlapping voices or complex background sounds - something this system aims to overcome.

The project, which builds on a 2015 Stirling-led study, has been led by Heriot-Watt University and involves Dr. Ahsan Adeel from the University of Stirling's Faculty of Natural Sciences, working alongside researchers from the University of Edinburgh and Edinburgh Napier University.

Dr. Adeel, Associate Professor in Artificial Intelligence at the University of Stirling's Computing Science and Mathematics Division, who first proposed the idea of 5G-IoT-enabled, multi-modal hearing aids in 2018, said, "It is highly gratifying to see that the next-generation hearing aid vision is now taking practical shape.

"We are grateful to our 5G Internet of Things colleagues at Heriot-Watt, Napier, and the University of Edinburgh for believing in this vision and helping make it a reality."

Breakthrough

Dr. Adeel continued, "Looking ahead, to further overcome persistent challenges of delay, privacy, and cost, we are moving beyond current AI - built on the oversimplified, 20th-century conception of neurons - towards harnessing the extraordinary capabilities of pyramidal cells in the mammalian neocortex, the part of the brain in mammals that handles reasoning and decision making, regarded as a hallmark of conscious processing.

"This breakthrough approach shifts from abstract, human-level cognitive audio-visual models to true cellular-level multisensory processing, enabling the world's first personalized, standalone, data center, cloud-independent, biologically plausible hearing aids - a feat beyond current AI and neuromorphic systems.

"These devices will match human-level performance while consuming less power than a dim light bulb, delivering minimal latency, and ensuring complete privacy.

"This work is deepening our understanding of the neurobiological foundations of multisensory audio-visual speech processing and accelerating the creation of next-generation, biologically inspired models and hearing aids. Ultimately enhancing hearing aid uptake and enabling better participation in challenging social settings."

A new approach

More than 1.2 million adults in the UK have hearing loss severe enough to make ordinary conversation difficult, according to the Royal National Institute for Deaf People. Hearing aids can help, but most are limited by size and processing power and often struggle in noisy places like cafés, transport hubs or workplaces.

By shifting the heavy processing work to cloud servers, the researchers can apply powerful deep-learning algorithms without overloading the small, wearable device.
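To make the data flow described above concrete, here is a minimal sketch of the client-side loop: microphone and lip-camera frames captured on the glasses, handed to a phone app, pushed to a cloud enhancer, and the cleaned audio streamed back to a hearing aid. The frame sizes, function names, and pass-through "enhancer" are illustrative assumptions, not the project's actual software.

```python
# Hypothetical sketch of the client-side streaming loop:
# glasses camera + microphone -> phone app -> cloud enhancement -> hearing aid.
# All names and parameters here are illustrative assumptions.

import time
import numpy as np

AUDIO_RATE = 16_000          # 16 kHz mono audio, a common rate for speech
FRAME_MS = 40                # stream audio/video in 40 ms chunks
SAMPLES_PER_FRAME = AUDIO_RATE * FRAME_MS // 1000


def capture_audio_frame() -> np.ndarray:
    """Stand-in for the glasses/phone microphone; returns one 40 ms chunk."""
    return np.zeros(SAMPLES_PER_FRAME, dtype=np.float32)


def capture_lip_frame() -> np.ndarray:
    """Stand-in for a cropped lip-region image from the glasses camera."""
    return np.zeros((96, 96), dtype=np.uint8)


def enhance_in_cloud(audio: np.ndarray, lips: np.ndarray) -> np.ndarray:
    """Stand-in for the 5G round trip to the cloud model.

    A real deployment would serialize both streams, send them to a GPU
    server, run an audio-visual separation model, and return the isolated
    target voice. Here the audio is simply passed through.
    """
    return audio


def play_to_hearing_aid(audio: np.ndarray) -> None:
    """Stand-in for streaming the cleaned audio to a hearing aid or headphones."""
    pass


def streaming_loop(duration_s: float = 1.0) -> None:
    frames = int(duration_s * 1000 / FRAME_MS)
    for _ in range(frames):
        t0 = time.perf_counter()
        audio = capture_audio_frame()
        lips = capture_lip_frame()
        clean = enhance_in_cloud(audio, lips)
        play_to_hearing_aid(clean)
        # Per-frame budget: capture + network + inference must stay well under
        # ~100 ms for the result to "feel instant", as the researchers describe.
        print(f"frame latency: {(time.perf_counter() - t0) * 1000:.2f} ms")


if __name__ == "__main__":
    streaming_loop()
```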
The group is working on multiple fronts, from cloud AI to edge-device AI, to achieve optimal results for sustainability.

From lab to life

Though the technology is still at the prototype stage, the team has already tested it with people who use hearing aids. Early results are promising, and the team is speaking to hearing aid manufacturers about future partnerships while hoping to reduce costs to make the devices more widely available.

The team has already hosted workshops for hearing aid users and continues to collect noise samples, from washing machines to traffic, to improve the system. They believe the cloud-based model could one day be made public, allowing anyone with a compatible device to connect and benefit.

Professor Mathini Sellathurai of Heriot-Watt University, who leads the project, said, "We're not trying to reinvent hearing aids. We're trying to give them superpowers. You simply point the camera or look at the person you want to hear.

"Even if two people are talking at once, the AI uses visual cues to extract the voice of the person you're looking at. There's a slight delay, since the sound travels to Sweden and back, but with 5G, it's fast enough to feel instant.

"One of the most exciting parts is how general the technology could be. Yes, it's aimed to support people who use hearing aids and who have severe visual impairments, but it could help anyone working in noisy places, from oil rigs to hospital wards.

"There are only a few big companies that make hearing aids, and they have limited support in noisy environments. We want to break that barrier and help more people, especially children and older adults, access affordable, AI-driven hearing support."
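Professor Sellathurai's point about the round trip to Sweden being "fast enough to feel instant" can be read as a latency budget. The figures below are illustrative assumptions chosen only to show the arithmetic; they are not measurements from the project.

```python
# Rough, illustrative latency budget for the glasses-to-cloud round trip.
# Every figure is an assumption made for the sake of the arithmetic.

budget_ms = {
    "audio/video frame buffering (40 ms chunks)": 40,
    "5G uplink and radio access network": 20,
    "internet transit UK <-> Sweden (round trip)": 35,
    "GPU inference on the cloud server": 15,
    "downlink and hearing aid/headphone output": 20,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<48} {ms:>4} ms")
print(f"{'estimated end-to-end delay':<48} {total:>4} ms")

# Delays much beyond roughly 100-150 ms start to feel out of sync with the
# speaker's lips, which is why the approach leans on 5G's low transport
# latency to keep the loop within that window.
```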
[2]
Here's how smartglasses with AI could give hearing aids 'superpowers'
Research is being carried out into how smartglasses equipped with a camera, AI, and a data connection could help deaf people better follow real-time conversations. The technology won't replace hearing aids, but enhance them instead - you can think of it a bit like equipping them with smart noise cancellation.

The concept is being studied at Heriot-Watt University in Scotland, and project lead Professor Mathini Sellathurai explained how it works: "You simply point the camera or look at the person you want to hear. Even if two people are talking at once, the AI uses visual cues to extract the voice of the person you're looking at. It's aimed to support people who use hearing aids and who have severe visual impairments, but it could help anyone working in noisy places, from oil rigs to hospital wards."

Sellathurai said the technology will give hearing aids "superpowers," and went into further detail about how they will work, and where AI fits in. The camera on the smartglasses tracks a speaker's lip movements, and through lip-reading technology and AI cleaning up background noise and other surrounding conversations, a "clean" version of the speaker's voice is sent to hearing aids or headphones.

Cloud processing

The processing is not performed on-device, though. The smartglasses send data to a connected phone, which then sends it to cloud servers. There, AI smarts are applied and the final version is returned to the wearer. According to Sellathurai, there's only a slight delay when the data is sent over a 5G connection. The team is currently using cloud servers in Sweden, and says this approach is necessary to avoid putting too much strain on a wearable device.

However, there may be privacy concerns over not only a device wearer recording a conversation in real time, but also the integrity of that data when it's transferred and analyzed in the cloud.

Using smartglasses with a camera, hearing aids, and AI technology to enhance hearing may sound like an overly complex way of addressing the problem, but it's an established approach known as audio-visual speech enhancement. It is already used to enhance the voice of a person speaking in a noisy environment, in video production, and in hearing aids as a type of noise cancellation.

Competition and prototypes

Similar technology has been demonstrated in the past. Speech-to-text technology from TranscribeGlass was incorporated into the Vuzix Z100 smart glasses, where conversations were transcribed onto the Z100's screen for the wearer to read, like subtitles for the world around them. The Nuance Audio Glasses combine corrective lenses and hearing aid technology into smart eyewear. The Ray-Ban Meta, one of the best current smartglasses you can buy, includes accessibility features for those with hearing and vision impairments. TranscribeGlass's eyewear starts at $377 and requires a $20 per month subscription, while Nuance Audio Glasses cost from $1,200.

Professor Sellathurai said the intention is to increase the number of options available for hearing-impaired people, and to help "children and older adults access affordable, AI-driven hearing support." The researchers are in talks with hearing aid manufacturers about partnerships and how to reduce costs, and hope to have a working prototype pair of smartglasses in 2026.
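As a rough illustration of what the cloud server has to do, the sketch below applies a simple spectral gate to short audio frames: each frame is transformed to the frequency domain, bins dominated by an estimated noise spectrum are suppressed, and the result is resynthesised. The real system uses learned audio-visual models that also take the lip video into account; this stand-in only shows the general frame-mask-resynthesis shape of such a pipeline and is not the Heriot-Watt software.

```python
# Toy, server-side stand-in for the cloud enhancement step.
# Real systems predict the mask with a learned audio-visual model;
# here it comes from a crude noise-energy threshold. Illustrative only.

import numpy as np


def enhance(noisy: np.ndarray, noise_profile: np.ndarray,
            frame: int = 512, hop: int = 256) -> np.ndarray:
    """Suppress frequency bins dominated by the estimated noise spectrum."""
    window = np.hanning(frame)
    noise_mag = np.abs(np.fft.rfft(noise_profile[:frame] * window))
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame, hop):
        chunk = noisy[start:start + frame] * window
        spec = np.fft.rfft(chunk)
        # Keep bins whose energy clearly exceeds the noise estimate; a learned
        # model would predict this mask jointly from audio and lip video.
        mask = (np.abs(spec) > 2.0 * noise_mag).astype(float)
        out[start:start + frame] += np.fft.irfft(spec * mask) * window
        norm[start:start + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)


# Example: one second of a crude "speech" tone buried in white noise at 16 kHz.
rng = np.random.default_rng(0)
t = np.arange(16_000) / 16_000
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
noise = 0.3 * rng.standard_normal(16_000)
cleaned = enhance(speech + noise, noise)
```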
[3]
Scientists working on 'superpower' glasses that help people hear more clearly
The smart glasses are fitted with a camera that records dialogue and uses visual cues to detect the main speaker in a conversation.

New "hearing glasses" are being created that use artificial intelligence to help people hear conversations more clearly in real time. Scientists in Scotland are developing a prototype set of glasses that combine lip-reading technology, artificial intelligence and cloud computing to clean up conversations in people's hearing aids.

The smart glasses are fitted with a camera that records dialogue and uses visual cues to detect the main speaker. The wearer's phone then sends the recording to a cloud server, where the speaker's voice is isolated and background noise removed. The cleaned-up audio is then sent back to the listener's hearing aid almost instantly, despite travelling to servers in Sweden and back.

"We're not trying to reinvent hearing aids. We're trying to give them superpowers," said project leader Professor Mathini Sellathurai, of Heriot-Watt University. "You simply point the camera or look at the person you want to hear.

"Even if two people are talking at once, the AI uses visual cues to extract the voice of the person you're looking at."

Over 1.2 million UK adults struggle with ordinary conversation because of hearing loss, according to the Royal National Institute for Deaf People. Although noise-cancelling technology does exist for hearing aids, it often struggles with voices overlapping in conversation or when there are lots of different background noises.

The researchers say that by using cloud servers to do the heavy lifting on cleaning up audio, the glasses can take advantage of powerful artificial intelligence while still being wearable. They hope to have a working version of the glasses by 2026, and are already speaking to hearing aid manufacturers about ways to reduce costs and make the devices more widely available.

Scientists from Heriot-Watt University led the project and worked with researchers from the University of Edinburgh, Napier University and the University of Stirling.
Researchers in Scotland are developing AI-powered smart glasses that could significantly improve hearing aid functionality, offering real-time noise cancellation and speech enhancement for those with hearing impairments.
Researchers in Scotland are developing innovative AI-powered smart glasses that could dramatically improve the quality of life for people with hearing impairments. This groundbreaking technology aims to enhance the functionality of traditional hearing aids by providing real-time noise cancellation and speech enhancement, even in challenging acoustic environments 1.
The smart glasses system employs a multi-faceted approach to isolate and enhance speech:
- A small camera built into the glasses tracks the lip movements of the speaker the wearer is looking at.
- A smartphone app sends the audio and visual streams over 5G to a cloud server.
- AI on the server isolates the target speaker's voice from background noise and other voices.
- The cleaned-up audio is returned to the listener's hearing aid or headphones almost instantly.
This approach, known as audio-visual speech enhancement, takes advantage of the close link between lip movements and speech. It aims to overcome the limitations of current noise-canceling technologies, which often struggle with overlapping voices or complex background sounds 2.
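One way to picture how visual cues disambiguate overlapping voices: once a separation model has produced candidate voices, the candidate whose loudness envelope best tracks the observed lip movement can be treated as the target speaker. The sketch below uses a simple correlation heuristic for that selection; real audio-visual models learn the correspondence end to end, so the function names and heuristic should be read as assumptions.

```python
# Illustrative target-speaker selection by audio-visual correspondence.
# Assumes 16 kHz audio and roughly 25 fps lip-activity values.

import numpy as np


def envelope(audio: np.ndarray, hop: int = 640) -> np.ndarray:
    """Frame-level loudness (RMS), one value per 40 ms hop at 16 kHz."""
    n = len(audio) // hop
    return np.array([np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
                     for i in range(n)])


def pick_target(candidates: list[np.ndarray],
                lip_activity: np.ndarray) -> int:
    """Return the index of the candidate voice best correlated with lip motion.

    `lip_activity` holds one value per video frame (e.g. how much the lip
    region changed between frames), aligned with the audio frame rate.
    """
    scores = []
    for voice in candidates:
        env = envelope(voice)[:len(lip_activity)]
        lips = lip_activity[:len(env)]
        scores.append(np.corrcoef(env, lips)[0, 1])
    return int(np.argmax(scores))
```

The design choice this illustrates is that the glasses never need to "understand" the words: a coarse measure of mouth movement is enough to say which of several separated voices belongs to the person the wearer is looking at.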
With more than 1.2 million adults in the UK experiencing hearing loss severe enough to make ordinary conversation difficult, this technology could have a significant impact 3. While primarily designed for those with hearing impairments, the researchers suggest that the technology could benefit anyone working in noisy environments, from oil rigs to hospital wards 1.
Dr. Ahsan Adeel from the University of Stirling's Computing Science and Mathematics Division envisions even more advanced developments:
"We are moving towards harnessing the extraordinary capabilities of pyramidal cells in the mammalian neocortex... This breakthrough approach shifts from abstract, human-level cognitive audio-visual models to true cellular-level multisensory processing, enabling the world's first personalized, standalone, data center, cloud-independent, biologically plausible hearing aids" 1.
The technology is currently in the prototype stage, with researchers having already conducted tests involving hearing aid users. The team is in talks with hearing aid manufacturers about potential partnerships and is working to reduce costs to make the devices more widely accessible 2.
Professor Mathini Sellathurai of Heriot-Watt University, who leads the project, aims to have a working prototype by 2026 3.
While the technology shows great promise, there are potential privacy concerns regarding real-time recording of conversations and the integrity of data transferred and analyzed in the cloud 2. The researchers are addressing these issues and working on multiple fronts, from cloud AI to edge device AI, to achieve optimal results for sustainability and user privacy 1.