© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Tue, 19 Nov, 12:02 AM UTC
4 Sources
[1]
Study identifies strategy for AI cost-efficiency in health care settings
A study by researchers at the Icahn School of Medicine at Mount Sinai has identified strategies for using large language models (LLMs), a type of artificial intelligence (AI), in health systems while maintaining cost efficiency and performance. The findings, published in the November 18 online issue of npj Digital Medicine, provide insights into how health systems can leverage advanced AI tools to automate tasks efficiently, saving time and reducing operational costs while ensuring these models remain reliable even under high task loads. The paper is titled "A Strategy for Cost-effective Large Language Model Use at Health System-scale."

"Our findings provide a road map for health care systems to integrate advanced AI tools to automate tasks efficiently, potentially cutting costs for application programming interface (API) calls for LLMs up to 17-fold and ensuring stable performance under heavy workloads," says co-senior author Girish N. Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and Chief of the Division of Data-Driven and Digital Medicine (D3M) at the Mount Sinai Health System.

Hospitals and health systems generate massive volumes of data every day. LLMs, such as OpenAI's GPT-4, offer encouraging ways to automate and streamline workflows by assisting with various tasks. However, continuously running these AI models is costly, creating a financial barrier to widespread use, say the investigators.

"Our study was motivated by the need to find practical ways to reduce costs while maintaining performance so health systems can confidently use LLMs at scale. We set out to 'stress test' these models, assessing how well they handle multiple tasks simultaneously, and to pinpoint strategies that keep both performance high and costs manageable," says first author Eyal Klang, MD, Director of the Generative AI Research Program in the D3M at Icahn Mount Sinai.

The study tested 10 LLMs with real patient data, examining how each model responded to various types of clinical questions. The team ran more than 300,000 experiments, incrementally increasing task loads to evaluate how the models managed rising demands. Along with measuring accuracy, the team evaluated the models' adherence to clinical instructions. An economic analysis followed, revealing that grouping tasks could help hospitals cut AI-related costs while keeping model performance intact.

The study showed that by grouping up to 50 clinical tasks together -- such as matching patients for clinical trials, structuring research cohorts, extracting data for epidemiological studies, reviewing medication safety, and identifying patients eligible for preventive health screenings -- LLMs can handle them simultaneously without a significant drop in accuracy. This task-grouping approach suggests that hospitals could optimize workflows and reduce API costs by as much as 17-fold, savings that could amount to millions of dollars per year for larger health systems, making advanced AI tools more financially viable.

"Recognizing the point at which these models begin to struggle under heavy cognitive loads is essential for maintaining reliability and operational stability. Our findings highlight a practical path for integrating generative AI in hospitals and open the door for further investigation of LLMs' capabilities within real-world limitations," says Dr. Nadkarni.

One unexpected finding, say the investigators, was that even advanced models like GPT-4 showed signs of strain when pushed to their cognitive limits: instead of making minor errors, their performance would periodically drop unpredictably under pressure.

"This research has significant implications for how AI can be integrated into health care systems. Grouping tasks for LLMs not only reduces costs but also conserves resources that can be better directed toward patient care," says co-author David L. Reich, MD, Chief Clinical Officer of the Mount Sinai Health System; President of The Mount Sinai Hospital and Mount Sinai Queens; Horace W. Goldsmith Professor of Anesthesiology; and Professor of Artificial Intelligence and Human Health, and Pathology, Molecular and Cell-Based Medicine, at Icahn Mount Sinai. "And by recognizing the cognitive limits of these models, health care providers can maximize AI utility while mitigating risks, ensuring that these tools remain a reliable support in critical health care settings."

Next, the research team plans to explore how these models perform in real-time clinical environments, managing real patient workloads and interacting directly with health care teams. Additionally, the team aims to test emerging models to see whether cognitive thresholds shift as technology advances, working toward a reliable framework for health care AI integration. Ultimately, they say, their goal is to equip health care systems with tools that balance efficiency, accuracy, and cost-effectiveness, enhancing patient care without introducing new risks.
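The grouping strategy the article describes can be sketched in a few lines: instead of one API call per clinical question, batch many questions about the same record into a single numbered prompt. The prompt wording, function names, and 50-task group size handling below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of task grouping: combine up to GROUP_SIZE clinical
# questions about one patient note into a single prompt, so the note and
# instructions are sent (and billed) once instead of once per question.

GROUP_SIZE = 50  # the study grouped up to 50 clinical tasks per call

def build_grouped_prompt(patient_note: str, tasks: list[str]) -> str:
    """Combine several clinical questions about one note into a single prompt."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(tasks))
    return (
        "Answer each numbered question about the patient note below.\n"
        f"Patient note:\n{patient_note}\n\n"
        f"Questions:\n{numbered}\n\n"
        "Reply with one numbered answer per question."
    )

def num_api_calls(n_tasks: int, group_size: int = GROUP_SIZE) -> int:
    """API calls needed when tasks are batched group_size at a time."""
    return -(-n_tasks // group_size)  # ceiling division

# 500 tasks need 500 calls individually, but only 10 when grouped 50 at a time.
print(num_api_calls(500, 1), "->", num_api_calls(500))  # → 500 -> 10
```

Any real deployment would also need to parse the numbered answers back out and verify that each one was actually addressed, since (as the article notes) accuracy can drop when the load gets too high.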
[2]
Study identifies strategy for AI cost-efficiency in health care settings
[3]
Study identifies cost-effective strategies for using AI in health systems
Mount Sinai Health System, Nov 18, 2024
Journal reference: Klang, E., et al. (2024). A strategy for cost-effective large language model use at health system-scale. npj Digital Medicine. doi.org/10.1038/s41746-024-01315-1
[4]
Study Identifies Strategy for AI Cost-Efficiency in Health Care Settings | Newswise
Newswise -- New York, NY [November 18, 2024]
The remaining authors, all with Icahn Mount Sinai, are Donald Apakama, MD, MS; Ethan E. Abbott, DO; Akhil Vaid, MD; Joshua Lampert, MD; Ankit Sakhuja, MBBS, MS; Robert Freeman, DNP, RN, MSN, NE-BC; Alexander W. Charney, MD, PhD; Monica Kraft, MD; and Benjamin S. Glicksberg, PhD. For details on competing interests, see DOI: 10.1038/s41746-024-01315-1.
This study was supported by the National Heart, Lung, and Blood Institute, National Institutes of Health (R01HL155915), and the National Center for Advancing Translational Sciences, National Institutes of Health (CTSA grant UL1TR004419).
Researchers at Mount Sinai have identified strategies for using large language models in healthcare settings, potentially reducing costs by up to 17-fold while maintaining performance.
Researchers at the Icahn School of Medicine at Mount Sinai have made a significant breakthrough in the application of artificial intelligence (AI) in healthcare settings. Their study, published in npj Digital Medicine, outlines strategies for using large language models (LLMs) in health systems while maintaining cost efficiency and performance [1][2][3][4].
The research team, led by Dr. Girish N. Nadkarni and Dr. Eyal Klang, conducted an extensive study, testing 10 LLMs on real patient data across more than 300,000 experiments with incrementally increasing task loads.
The study revealed that by grouping up to 50 clinical tasks together, LLMs could handle them simultaneously without a significant drop in accuracy. This approach could potentially reduce application programming interface (API) costs for LLMs by up to 17-fold [1][2][3][4].
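A rough cost model shows where a saving of this size can come from: every API call pays for the shared context (for example, a patient note plus standing instructions) whether it answers one task or fifty, so grouping amortizes that fixed cost. The token counts and per-token price below are illustrative assumptions, not figures from the study:

```python
# Back-of-envelope model of why grouping cuts per-task API cost.
# All numbers (token counts, price) are made up for illustration.

def cost_per_task(context_tokens: int, tokens_per_task: int,
                  price_per_1k: float, group_size: int) -> float:
    """Average input cost per task when group_size tasks share one call."""
    call_tokens = context_tokens + group_size * tokens_per_task
    return (call_tokens / 1000) * price_per_1k / group_size

# One 2,000-token note, 80 tokens per question, a notional $0.01 per 1k tokens:
single = cost_per_task(2000, 80, 0.01, 1)    # each task re-sends the whole note
grouped = cost_per_task(2000, 80, 0.01, 50)  # 50 tasks share the note once
print(f"{single / grouped:.1f}x cheaper per task")  # → 17.3x cheaper per task
```

Under these invented numbers the ratio lands near the 17-fold figure reported in the press release; the real saving depends on how large the shared context is relative to each task.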
The findings have significant implications for the integration of AI in healthcare:
Cost Reduction: The task-grouping approach could lead to substantial savings, potentially amounting to millions of dollars per year for larger health systems [1][2][3].
Efficiency: The strategy allows for the automation of various tasks such as matching patients for clinical trials, structuring research cohorts, and reviewing medication safety [1][2][3][4].
Performance Stability: The study provides insights into maintaining stable AI performance under heavy workloads [1][2][3][4].
An unexpected discovery was that even advanced models like GPT-4 showed signs of strain when pushed to their cognitive limits. Instead of minor errors, the models' performance would periodically drop unpredictably under pressure [1][2][3][4].
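This "cognitive limit" can be framed as a load threshold: escalate the number of tasks per call, measure accuracy at each load, and keep the largest load that stays above an acceptable floor. The accuracy figures and threshold below are invented purely for illustration of the idea:

```python
# Sketch of the stress-test framing: given measured accuracy at several
# task-group sizes, pick the largest load whose accuracy stays at or above
# a chosen floor. The measurements here are fabricated for illustration.

def max_safe_load(accuracy_by_load: dict[int, float], floor: float = 0.95) -> int:
    """Largest task-group size whose measured accuracy is >= floor (0 if none)."""
    safe = [load for load, acc in sorted(accuracy_by_load.items()) if acc >= floor]
    return max(safe) if safe else 0

# Hypothetical results from escalating the load on one model:
measured = {1: 0.99, 10: 0.98, 25: 0.97, 50: 0.96, 75: 0.89, 100: 0.71}
print(max_safe_load(measured))  # → 50
```

Because the article notes that failures appear as unpredictable drops rather than gradual errors, a real deployment would want repeated runs per load level and a margin below the chosen operating point, not a single measurement.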
Dr. David L. Reich, a co-author of the study, emphasized the importance of recognizing these cognitive limits to maximize AI utility while mitigating risks in critical healthcare settings [1][2][3][4].
The research team plans to explore how these models perform in real-time clinical environments and to test emerging models to see whether cognitive thresholds shift as technology advances.
This study marks a significant step towards equipping healthcare systems with AI tools that balance efficiency, accuracy, and cost-effectiveness, potentially enhancing patient care without introducing new risks.
Reference
[1] Medical Xpress - Medical and Health News: Study identifies strategy for AI cost-efficiency in health care settings