Curated by THEOUTPOST
On Fri, 30 Aug, 4:06 PM UTC
6 Sources
[1]
Google is training AI to 'hear' when you're sick. Here's how it works.
Google's AI arm is reportedly tapping into "bioacoustics," a field that blends biology and acoustics to help researchers understand, among other things, how the presence of pathogens changes the sounds people make. As it turns out, our sounds carry tell-tale information about our well-being. According to a Bloomberg report, the search giant has built an AI model that uses sound signals to "predict early signs of disease." In places where quality healthcare is hard to access, the technology could step in as an alternative that needs nothing more than a smartphone's microphone.

Google's bioacoustics-based AI model is called HeAR (Health Acoustic Representations). It was trained on 300 million two-second audio samples that include coughs, sniffles, sneezes, and breathing patterns, pulled from non-copyrighted, publicly available content on platforms like YouTube. One example is a video recording the sounds of patients at a hospital in Zambia who came in for tuberculosis screenings. In fact, HeAR's training data includes 100 million cough sounds that help it detect tuberculosis. According to Bloomberg, bioacoustics can offer "near-imperceptible clues" that reveal subtle signs of illness and help health professionals diagnose patients. The model can also detect minute differences in a patient's cough patterns, allowing it to spot early signs that a condition is improving or getting worse.

Google is partnering with Salcit Technologies, an AI healthcare startup based in India. Salcit has its own AI model, Swaasa (which means "breath" in Sanskrit), and is combining it with HeAR to improve accuracy for tuberculosis and lung health screening. Swaasa offers a mobile app that lets users submit a 10-second cough sample. According to Salcit's co-founder, Manmohan Jain, the app can identify whether an individual has a disease with an accuracy rate of 94 percent. The audio-based test costs $2.40, far cheaper than a spirometry test, which runs about $35 at a clinic in India.

HeAR doesn't come without challenges, though. Google and Salcit are still working through problems with users submitting audio samples that contain too much background noise. The model is nowhere near the "ready-for-market" stage, but the concept of combining AI and sound in medicine is innovative and promising.
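To make the workflow described above concrete, here is a minimal, hypothetical sketch in Python: a short cough clip is reduced to a spectrogram and scored by a simple classifier. The sample rate, feature choice, function names, and the linear "classifier" are all illustrative assumptions, not Google's actual HeAR pipeline, which has not been published in this form.

```python
# Hypothetical screening-flow sketch: microphone audio in, screening score out.
# Nothing here is the real HeAR API; names and parameters are illustrative.
import numpy as np
from scipy import signal

SAMPLE_RATE = 16_000   # 16 kHz mono audio, a common rate for speech/health audio
CLIP_SECONDS = 2       # HeAR is reported to train on two-second samples

def spectrogram_features(waveform: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrogram used as a stand-in for learned acoustic features."""
    _, _, spec = signal.spectrogram(waveform, fs=SAMPLE_RATE, nperseg=400, noverlap=240)
    return np.log(spec + 1e-8).flatten()

def screen_cough(waveform: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Score a clip with a simple linear model; higher means more disease-like."""
    x = spectrogram_features(waveform)
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))  # sigmoid -> pseudo-probability

# Example with synthetic audio; in practice this would come from the phone's microphone.
clip = np.random.randn(SAMPLE_RATE * CLIP_SECONDS).astype(np.float32)
weights = np.zeros_like(spectrogram_features(clip))    # untrained placeholder weights
print(f"screening score: {screen_cough(clip, weights, 0.0):.2f}")
```

A production system would replace the hand-rolled features and placeholder weights with a trained model, but the shape of the flow stays the same: record a short clip, extract acoustic features, return a screening score.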
[2]
Google's latest AI can 'HeAR' if you're sick
While Google often touts the creativity and productivity merits of its generative AI, the technology isn't limited to summarizing long articles or creating images. The Mountain View-based firm has also been working on ways to use generative AI for healthcare. In particular, the company has now rolled out a bioacoustic foundation model to help detect early signs of disease.

Last week, Google announced that its HeAR bioacoustic foundation model is available to researchers. Short for Health Acoustic Representations, HeAR is a tool researchers can use to build AI models that "listen to human sounds and flag early signs of disease." The tech giant says HeAR was trained on 300 million pieces of audio data, roughly 100 million of which are cough sounds, and has learned to discern patterns in health-related sounds. Google claims HeAR outperforms other models on a wide range of tasks while needing less training data.

What makes this interesting is that the AI can fit into an app on a phone, putting health screenings within reach of anyone with a smartphone. That could open up screening to people in remote areas and reduce costs, since only a device's microphone is needed instead of expensive X-ray machines and other diagnostic hardware.

Google also announced a partnership with Salcit Technologies, an India-based respiratory healthcare company. Salcit has its own bioacoustic AI model, called Swaasa, which it uses to analyze cough sounds and assess lung health. The company is reportedly using HeAR to improve Swaasa's early detection of tuberculosis based on cough sounds.

Of course, any new technology comes with hurdles, and one of HeAR's will be convincing health professionals to adopt it. Google is already gaining ground there, as organizations such as the United Nations-backed Stop TB Partnership have begun supporting HeAR.
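Since HeAR is described as a foundation model that researchers build on rather than a finished diagnostic, the typical pattern would be to embed labeled cough clips with a pretrained encoder and fit a small downstream classifier on top. The sketch below assumes that pattern; DummyEncoder, its embed() method, and the synthetic data are stand-ins, not the real HeAR interface.

```python
# Hedged sketch of the "foundation model + small downstream head" workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

class DummyEncoder:
    """Stand-in for a pretrained bioacoustic encoder; purely illustrative."""
    def embed(self, clip: np.ndarray) -> np.ndarray:
        # A real foundation model would return learned features; here we use
        # coarse per-chunk energy statistics so the example runs end to end.
        chunks = np.array_split(clip, 32)
        return np.array([chunk.std() for chunk in chunks])

def train_screening_head(encoder, clips, labels) -> LogisticRegression:
    """Fit a small classifier on frozen embeddings (label 1 = disease-positive cough)."""
    X = np.stack([encoder.embed(c) for c in clips])
    head = LogisticRegression(max_iter=1000)
    head.fit(X, np.asarray(labels))
    return head

# Toy usage with synthetic clips; real work would use labeled patient recordings.
rng = np.random.default_rng(0)
clips = [rng.normal(scale=s, size=32_000) for s in (0.5, 0.5, 1.5, 1.5)]
labels = [0, 0, 1, 1]
head = train_screening_head(DummyEncoder(), clips, labels)
print("predicted:", head.predict(np.stack([DummyEncoder().embed(c) for c in clips])))
```

The appeal of this design is that the expensive part (learning general acoustic representations from hundreds of millions of clips) is done once, while each research group only needs a modest labeled dataset to train its own screening head.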
[3]
Google is developing AI that can hear if you're sick
A new artificial intelligence model being developed by Google could make diagnosing tuberculosis (TB) and other respiratory ailments as easy as recording a voice note. Google is training one of its foundation AI models to listen for signs of disease in sound signals such as coughing, sneezing, and sniffling. The technology, which would work through people's smartphone microphones, could revolutionize diagnosis for communities where advanced diagnostic tools are hard to come by. The tech giant is collaborating with Salcit Technologies, an Indian respiratory healthcare AI startup.

The technology, introduced earlier this year as Health Acoustic Representations, or HeAR, is what's known as a bioacoustic foundation model. HeAR was trained on 300 million pieces of audio data, including 100 million cough sounds, to learn to pick out patterns in the sounds. Salcit is using the model in combination with its own product, Swaasa, which uses AI to analyze cough sounds and assess lung health, to research and improve early detection of TB based solely on cough sounds.

Between three and four million cases of TB go unreported, according to the United Nations-backed non-profit the Stop TB Partnership, and untreated TB has a mortality rate of more than 50%. "Every missed case of tuberculosis is a tragedy; every late diagnosis, a heartbreak," Sujay Kakarmath, a product manager at Google Research working on HeAR, said in a statement. "Acoustic biomarkers offer the potential to rewrite this narrative."

The advent of AI has opened new opportunities for early detection and diagnosis of a wide array of illnesses. From spotting signs of chronic illness, to pinpointing previously unrecognized types of endometrial cancer, to the early identification of Parkinson's disease, researchers around the world have already found classic AI highly useful, and the technology is still in its early stages. Most recently, UCLA said Monday that it is developing a new AI-enhanced test that could help speed up the diagnosis of Lyme disease.
[4]
Google is working on AI that can hear signs of sickness | TechCrunch
Given everything you've already heard about AI, you may not be surprised to learn that Google is among the outfits beginning to use sound signals to predict early signs of disease. How? According to Bloomberg, Google has trained its foundation AI model on 300 million pieces of audio, including coughs, sniffles, and labored breathing, to identify, for example, someone battling tuberculosis. Now it has teamed up with an Indian company, Salcit Technologies, a respiratory healthcare AI startup, to tuck that tech into smartphones, where it can potentially help high-risk populations in regions with poor access to healthcare. It's not Google's first foray into digitizing human senses: its venture arm has also backed at least one startup that's using AI to try to sniff out disease, literally.
[5]
Google trains AI for sound-based disease detection on smartphone
Google has teamed up with Salcit Technologies, an AI startup focused on respiratory healthcare in India, to incorporate this technology into smartphones, a move that could be transformative for high-risk communities in areas with restricted healthcare access. Google has made efforts to digitize human senses before: the company's investment arm has backed startups using AI to identify diseases by scent.

The push into bioacoustics, which combines biology and acoustics, demonstrates the growing use of AI to extract meaningful information from the sounds made by humans and animals. In healthcare, generative AI, the technology behind ChatGPT's adoption by more than 200 million users, is giving bioacoustics new capabilities. Google has developed an AI model called HeAR (Health Acoustic Representations) that uses sound signals to anticipate early signs of illness, providing an innovative tool for medical diagnosis. Easily deployable on smartphones, the technology can track and screen high-risk populations in regions with limited access to costly diagnostic devices such as X-ray machines.
[6]
Google has trained AI to identify sounds associated with respiratory sickness
Key Takeaways
- Google's AI now aids clinical diagnoses, thanks to training on variations in human sounds.
- Google's HeAR model is trained to detect signs of tuberculosis and other pulmonary illness by analyzing patients' coughing sounds.
- A partner firm is using Google's HeAR to improve lung assessments and TB diagnosis, claiming 94% accuracy.

Google's AI work has expanded dramatically in the past couple of years as consumer-facing AI applications have gained popularity. Today, Google's Gemini is more than a chatbot on the web: it is tightly integrated into various features and apps on the new Google Pixel 9 series, and the company continues to lean heavily on innovation. A recent report reveals Google is now working with international partners to aid clinical diagnoses using AI trained to recognize ailments from variations in the sound of symptoms like coughing and sneezing.

Besides the marvel of AI itself, Google has achieved a key milestone with Gemini, and more specifically Gemini Nano, a smaller, scaled-down version of the generative AI model that can run on-device on most modern flagship Android phones. That makes it independent of cellular network instability and the other variables that come with cloud processing of AI queries. Bloomberg reports the tech titan has joined hands with an Indian start-up, Salcit Technologies, which specializes in enhancing respiratory healthcare with AI, to create a similar solution that we hope eventually runs on-device. It's easy to see where this is going: Google's on-device AI models could help accelerate respiratory diagnosis in remote areas where primary healthcare and access to expensive medical equipment remain a concern. The partnership centers on what Google calls the HeAR model, short for Health Acoustic Representations.

Generative AI to the rescue

HeAR is essentially a foundation AI model from Google trained on 300 million audio clips of coughs, sniffles, sneezes, and breathing from around the world, drawn from publicly viewable content. Although the differences are imperceptible to the untrained ear, these clips sound different from a healthy person's respiration. Google's training data for HeAR also included 100 million cough sounds to help quickly screen people for diseases like tuberculosis.

Salcit Technologies is using Google's HeAR to improve the lung assessments and TB diagnoses delivered by its in-house AI, Swaasa. While AI-assisted diagnosis isn't a replacement for proper clinical assessment and treatment, Swaasa has been approved for use by the Indian medical device regulator. Running as an app on a mobile device, Swaasa needs a 10-second audio clip of the patient coughing and claims 94% accuracy in diagnosing ailments. The method isn't foolproof and has its own issues, such as the need for clear recordings, but it is already cheaper than the spirometry testing typically used to diagnose TB and other ailments. Most importantly, Swaasa still relies on cloud processing, and there is ample room for improvement before HeAR can be implemented on-device. Meanwhile, Google is betting on similar AI tech to train foundation models that detect autism from the sounds an infant makes. Exciting times.
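One practical issue raised above and in the Bloomberg report is recording quality: noisy clips degrade the cloud-side analysis. Below is a hedged sketch of the kind of client-side check an app might run before uploading a 10-second cough sample; the frame size and the -45 dBFS threshold are arbitrary examples, not values used by Swaasa or HeAR.

```python
# Illustrative pre-upload quality gate: reject clips whose noise floor is too high.
import numpy as np

SAMPLE_RATE = 16_000
CLIP_SECONDS = 10          # Swaasa reportedly asks for a 10-second cough sample

def noise_floor_db(waveform: np.ndarray, frame: int = 1600) -> float:
    """Estimate the noise floor as the quietest 0.1 s frame's RMS level, in dBFS."""
    usable = waveform[: len(waveform) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((usable ** 2).mean(axis=1)) + 1e-12
    return float(20 * np.log10(rms.min()))

def acceptable_recording(waveform: np.ndarray, max_noise_db: float = -45.0) -> bool:
    """Accept the clip for upload only if its estimated noise floor is low enough."""
    return noise_floor_db(waveform) < max_noise_db

# Placeholder clip (silence); a real client would pass the microphone buffer here.
clip = np.zeros(SAMPLE_RATE * CLIP_SECONDS, dtype=np.float32)
print("upload allowed:", acceptable_recording(clip))
```

A check like this costs almost nothing on-device and gives the user immediate feedback to re-record, rather than wasting an upload and a cloud inference on an unusable sample.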
Google is developing an AI model called "HeAR" that can detect diseases by analyzing audio cues. This innovative technology aims to revolutionize early disease detection and improve healthcare accessibility worldwide.
Google is making significant strides in artificial intelligence with its latest project, an AI model called "HeAR." This innovative technology is designed to detect diseases by analyzing audio cues, potentially revolutionizing the way we approach early disease detection and diagnosis [1].
The HeAR model is being trained to identify a wide range of health conditions from various audio inputs, including respiratory sounds like coughing and breathing, as well as voice patterns and even the sound of a person's heartbeat. The AI's ability to discern subtle audio differences could lead to earlier detection of diseases such as tuberculosis, pneumonia, and asthma [2].
Google is not working alone on this ambitious project. The tech giant has partnered with healthcare professionals and researchers to gather diverse audio samples from patients with various conditions. This collaborative approach ensures that the AI model is trained on a comprehensive dataset, improving its accuracy and reliability [3].
The implications of this technology are far-reaching, especially for regions with limited access to healthcare resources. Using smartphones or other audio recording devices, the HeAR model could provide a low-cost, accessible method for preliminary disease screening, which could be particularly beneficial in remote areas or developing countries where medical facilities are scarce [4].
While the potential benefits are significant, the development of such technology also raises important questions about privacy, data security, and the role of AI in healthcare. Google will need to address these concerns as it continues to refine the HeAR model and prepare it for real-world applications [5].
As the project progresses, Google is exploring additional applications for the HeAR model, including the potential to detect mental health conditions through voice analysis and even to identify cognitive decline in its early stages. The company emphasizes that the technology is still in its research phase and will require extensive testing and validation before it can be deployed in clinical settings [1].
Google's HeAR project represents a significant step forward in the integration of AI and healthcare. As the technology continues to evolve, it has the potential to transform disease detection and management on a global scale. However, it will be crucial to balance innovation with ethical considerations and rigorous scientific validation to ensure that this powerful tool is used responsibly and effectively in the service of human health [3].