New Encryption Method Enhances Privacy for AI-Powered Medical Data Analysis


A University at Buffalo-led study introduces a novel encryption technique for AI-powered medical data, proving highly effective in detecting sleep apnea while safeguarding patient privacy.


Innovative Encryption Technique Safeguards AI-Powered Medical Data

Researchers at the University at Buffalo have developed a groundbreaking method to enhance privacy in AI-powered medical data analysis. The study, funded by a $200,000 IBM/State University of New York grant, demonstrates how patient data can remain securely encrypted as it moves between third-party cloud service providers and healthcare professionals, and even while it is being analyzed [1].

Fully Homomorphic Encryption: A Game-Changer for Medical Data Security

The new technique relies on fully homomorphic encryption (FHE), which allows computations to be performed directly on encrypted data without ever decrypting it, and has shown remarkable effectiveness in detecting sleep apnea. In tests on a deidentified electrocardiogram (ECG) dataset, the method identified the condition with 99.56% accuracy [2].

Lead research investigator Nalini Ratha, Ph.D., SUNY Empire Innovation Professor at UB, emphasized the significance of this development: "This work highlights how secure, encrypted data-processing can protect patient privacy while still enabling advanced, AI-based diagnostic tools" [1].

Addressing Privacy Concerns in AI-Powered Healthcare

The adoption of AI in healthcare has been hindered by concerns over data privacy. The new encryption method aims to alleviate these fears by preventing unauthorized access to sensitive medical information. Ratha explained the potential risks of unencrypted data:

"If a cloud service provider like Google or Amazon runs an analytic on my data, they can potentially figure out what my sleep apnea status is and then start sending me ads to buy this or that" 2.

Optimizing FHE for Efficient Data Processing

FHE-based analytics are typically slower and more computationally demanding than traditional unencrypted methods. To overcome these hurdles, the research team developed new techniques that optimize key deep learning operations, making the FHE system faster and more cost-effective to run [1].

These optimizations cover the main stages of a deep neural network, each of which must be expressed using only the additions and multiplications that FHE supports (a simplified sketch follows the list):

  1. Convolution for pattern detection
  2. Activation functions for decision-making
  3. Pooling for data size reduction
  4. Fully connected layers for linking every node into the final prediction
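
To give a concrete sense of what these FHE-friendly building blocks look like, the sketch below implements toy versions of all four stages with the open-source TenSEAL library (a CKKS-based toolkit). This is an illustration only: the paper does not say which FHE implementation the UB team used, and the signal, kernel, and weights here are invented.

```python
# Illustrative sketch only: FHE-friendly stand-ins for the four layer types,
# built with the open-source TenSEAL (CKKS) library. The signal, kernel, and
# weights are invented; this is not the UB team's actual pipeline.
import tenseal as ts

# Encryption context held by the data owner (it keeps the secret key).
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=16384,
                 coeff_mod_bit_sizes=[60, 40, 40, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # needed for the rotations behind dot products

signal = [0.5, -0.2, 0.8, 0.1]            # toy stand-in for an ECG window
enc = ts.ckks_vector(ctx, signal)         # encrypted input

# 1. Convolution-style pattern detection: a dot product with a plaintext kernel.
enc_conv = enc.dot([0.25, 0.5, 0.25, 0.0])

# 2. Activation: FHE only offers additions and multiplications, so activations
#    are replaced by low-degree polynomials (here 0.5*x + 0.25*x^2).
enc_act = enc_conv.polyval([0.0, 0.5, 0.25])

# 3. Pooling: averaging is just a sum followed by a plaintext scaling.
enc_pool = enc.dot([0.25, 0.25, 0.25, 0.25])

# 4. Fully connected layer: a weighted sum over every input plus a bias.
enc_fc = enc.dot([0.1, -0.3, 0.7, 0.2]) + [0.05]

# Every intermediate value stays encrypted; only the key holder can read it.
print(enc_act.decrypt(), enc_pool.decrypt(), enc_fc.decrypt())
```

The polynomial stand-in for the activation function is exactly the kind of approximation that makes FHE inference slower than plaintext inference, and it is this overhead that the UB optimizations target.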

The "Gold in a Box" Analogy

To illustrate how their encryption system works, Ratha used a gold analogy:

"If you want to build an ornament out of gold, but you don't want to give it directly to the jeweler because you don't know what the jeweler will mix with it, you put it in a box. The jeweler can touch the gold, but he cannot ever take it out of the box" 2.

In this analogy, the box represents the encryption, the gold symbolizes the data, and the jeweler represents the FHE-based algorithm that can interact with the data without extracting it.
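
Translated into code, the "box" is the ciphertext: anyone can compute on it, but only the holder of the secret key can open it. Below is a minimal sketch of that idea, again using TenSEAL purely as a stand-in for whatever FHE implementation the team actually used, with made-up numbers.

```python
# "Gold in a box" in code: the cloud computes on the ciphertext but can never
# open it. TenSEAL (CKKS) is used here purely for illustration.
import tenseal as ts

# The patient (data owner) creates the keys and puts the gold in the box.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

heart_rates = [72.0, 68.0, 75.0, 80.0]            # made-up readings
boxed = ts.ckks_vector(ctx, heart_rates)          # encrypted: the sealed box

# The "jeweler" (cloud service) shapes the gold without opening the box,
# e.g. by computing a weighted score entirely on encrypted values.
boxed_score = boxed.dot([0.1, 0.2, 0.3, 0.4])

# Only the patient, who holds the secret key, can open the box.
print(boxed_score.decrypt())   # roughly [75.3]; the cloud never sees this value
```

In a real deployment the cloud would receive only a public copy of the encryption context (TenSEAL's make_context_public() strips the secret key), so decryption can only ever happen on the patient's side.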

Broader Applications in Healthcare

While the study focused on sleep apnea detection, the researchers believe their findings have wide-ranging applications in healthcare. The encryption method could be applied to various medical procedures and imaging techniques, including X-rays, MRIs, and CT scans [1].

As AI continues to revolutionize healthcare, this innovative approach to data security could pave the way for more widespread adoption of AI-powered diagnostic tools while ensuring patient privacy remains protected.
