Mount Sinai Study Reveals Cost-Effective AI Strategy for Healthcare Systems


Researchers at Mount Sinai have identified strategies for using large language models in healthcare settings, potentially reducing costs by up to 17-fold while maintaining performance.


Researchers at the Icahn School of Medicine at Mount Sinai have made a significant breakthrough in the application of artificial intelligence (AI) in healthcare settings. Their study, published in npj Digital Medicine, outlines strategies for using large language models (LLMs) in health systems while maintaining cost efficiency and performance.

Study Methodology and Findings

The research team, led by Dr. Girish N. Nadkarni and Dr. Eyal Klang, conducted an extensive study involving:

  • Testing of 10 different LLMs
  • Use of real patient data
  • Over 300,000 experiments
  • Incremental increase in task loads to evaluate model performance

The study revealed that by grouping up to 50 clinical tasks together, LLMs could handle them simultaneously without a significant drop in accuracy. This approach could potentially reduce application programming interface (API) costs for LLMs by up to 17-fold.
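The task-grouping idea can be illustrated with a short sketch. This is not the study's actual code: the function names, the 50-task batch size, and the prompt format below are illustrative assumptions. The point is that the shared context (system instructions, patient record) is sent once per batch instead of once per task, so per-call overhead shrinks as tasks are bundled.

```python
# Illustrative sketch of task grouping (not the study's implementation).
# Bundling up to 50 tasks into one prompt means the shared context is
# transmitted once per batch rather than once per task.

def batch_tasks(tasks, batch_size=50):
    """Split a list of task descriptions into batches of at most batch_size."""
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]

def build_prompt(shared_context, task_batch):
    """Combine one shared context with a numbered list of tasks."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(task_batch))
    return f"{shared_context}\n\nAnswer each task separately:\n{numbered}"

# Rough comparison: 100 tasks sent one at a time need 100 API calls;
# batched in groups of 50, they need only 2.
tasks = [f"review medication safety for case {n}" for n in range(100)]
batches = batch_tasks(tasks, batch_size=50)
calls_individual = len(tasks)   # 100 calls
calls_batched = len(batches)    # 2 calls
prompt = build_prompt("You are a clinical assistant.", batches[0])
```

The exact savings depend on pricing and on how much of each request is shared context versus task-specific text, which is why the reported figure is "up to" 17-fold rather than a constant.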

Implications for Healthcare Systems

The findings have significant implications for the integration of AI in healthcare:

  1. Cost Reduction: The task-grouping approach could lead to substantial savings, potentially amounting to millions of dollars per year for larger health systems.

  2. Efficiency: The strategy allows for the automation of various tasks such as matching patients for clinical trials, structuring research cohorts, and reviewing medication safety.

  3. Performance Stability: The study provides insights into maintaining stable AI performance under heavy workloads.

Unexpected Findings and Future Research

An unexpected discovery was that even advanced models like GPT-4 showed signs of strain when pushed to their cognitive limits. Rather than making only minor errors, the models' performance would periodically drop unpredictably under pressure.

Dr. David L. Reich, a co-author of the study, emphasized the importance of recognizing these cognitive limits to maximize AI utility while mitigating risks in critical healthcare settings.

Next Steps

The research team plans to:

  1. Explore how these models perform in real-time clinical environments
  2. Test emerging models to see if cognitive thresholds shift as technology advances
  3. Work towards a reliable framework for healthcare AI integration

This study marks a significant step towards equipping healthcare systems with AI tools that balance efficiency, accuracy, and cost-effectiveness, potentially enhancing patient care without introducing new risks.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited