4 Sources
[1]
AI tool grounded in evidence-based medicine outperformed other AI tools -- and most doctors -- on USMLE exams
Achieving higher scores on the USMLE than most physicians and all other AI tools so far, Semantic Clinical Artificial Intelligence (SCAI, pronounced "Sky") has the potential to become a critical partner for physicians, says lead author Peter L. Elkin, MD, chair of the Department of Biomedical Informatics in the Jacobs School of Medicine and Biomedical Sciences at UB and a physician with UBMD Internal Medicine.

Elkin says SCAI is the most accurate clinical AI tool available to date, with the most advanced version scoring 95.2% on Step 3 of the USMLE, while a GPT-4 Omni tool scored 90.5% on the same test. "As physicians, we are used to using computers as tools," he explains, "but SCAI is different; it can add to your decision-making and thinking based on its own reasoning." The tool can respond to medical questions posed by clinicians or the public at https://halsted.compbio.buffalo.edu/chat/.

The researchers tested the model on the USMLE, which is required for licensing physicians nationwide and assesses a physician's ability to apply knowledge, concepts and principles, and to demonstrate fundamental patient-centered skills. Any questions with a visual component were excluded.

Elkin explains that most AI tools function by using statistics to find associations in online data that allow them to answer a question. "We call these tools generative artificial intelligence," he says. "Some have postulated that they are just plagiarizing what's on the internet because the answers they give you are what others have written." However, these AI models are now becoming partners in care rather than simple tools for clinicians to use in their practice, he says. "But SCAI answers more complex questions and performs more complex semantic reasoning," he says. "We have created knowledge sources that can reason more the way people learn to reason while doing their training in medical school."

The team started with natural language processing software they had previously developed. They added vast amounts of authoritative clinical information gleaned from widely disparate sources, ranging from recent medical literature and clinical guidelines to genomic data, drug information, discharge recommendations, patient safety data and more. Any data that might be biased, such as clinical notes, were not included.

13 million medical facts

SCAI contains 13 million medical facts, as well as all the possible interactions between those facts. The team used basic clinical facts known as semantic triples (subject-relation-object, such as "Penicillin treats pneumococcal pneumonia") to build semantic networks. The tool can then represent these semantic networks in a form that supports drawing logical inferences from them. "We have taught large language models how to use semantic reasoning," says Elkin.

Other techniques that contributed to SCAI include knowledge graphs, designed to find new links and previously "hidden" patterns in medical data, and retrieval-augmented generation, which allows the large language model to access and incorporate information from external knowledge databases before responding to a prompt. This reduces "confabulation," the tendency of AI tools to respond to a prompt even when they don't have enough information to go on. Elkin adds that using formal semantics to inform the large language model provides important context that SCAI needs in order to understand and respond more accurately to a particular question.
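The paper does not publish SCAI's implementation, but the triple-based reasoning described above can be illustrated with a short sketch. The Python snippet below is a toy example only: the facts, relation names and helper functions are invented for illustration and are not SCAI's actual knowledge base or inference engine. It shows how subject-relation-object triples form a small semantic network from which simple logical inferences, here transitive "is_a" reasoning combined with a "treats" fact, can be drawn.

```python
# Toy illustration of semantic triples as a small in-memory semantic
# network, with simple logical inference over it. The facts and the
# inference rule are invented examples, not SCAI's knowledge base.

from collections import defaultdict

# A handful of subject-relation-object triples, including the
# example fact quoted in the article.
triples = [
    ("penicillin", "treats", "pneumococcal pneumonia"),
    ("pneumococcal pneumonia", "is_a", "bacterial pneumonia"),
    ("bacterial pneumonia", "is_a", "pneumonia"),
    ("pneumonia", "is_a", "lower respiratory tract infection"),
]

# Index the network by relation for quick lookup.
by_relation = defaultdict(list)
for subj, rel, obj in triples:
    by_relation[rel].append((subj, obj))

def ancestors(concept):
    """Follow 'is_a' links transitively to find all broader concepts."""
    found, frontier = set(), [concept]
    while frontier:
        current = frontier.pop()
        for subj, obj in by_relation["is_a"]:
            if subj == current and obj not in found:
                found.add(obj)
                frontier.append(obj)
    return found

def treats_some_form_of(drug, broad_condition):
    """Infer: drug treats X, and X is_a ... is_a broad_condition."""
    for subj, obj in by_relation["treats"]:
        if subj == drug and broad_condition in ancestors(obj):
            return True
    return False

print(ancestors("pneumococcal pneumonia"))
# e.g. {'bacterial pneumonia', 'pneumonia', 'lower respiratory tract infection'}
print(treats_some_form_of("penicillin", "pneumonia"))  # True
```

The retrieval-augmented generation step mentioned above can be sketched in the same hedged spirit. This is a hypothetical illustration, not SCAI's pipeline: the word-overlap retrieval and the instruction to answer "unknown" when the retrieved facts are insufficient stand in for whatever retrieval and prompting the authors actually use, and the generate argument is a placeholder for any large language model call.

```python
# Toy retrieval-augmented generation loop: retrieve relevant facts
# from an external store, then hand them to a language model as
# grounding context. Scoring and generation are placeholders.

def retrieve(question, facts, k=3):
    """Rank stored facts by naive word overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(facts, key=lambda f: -len(words & set(f.lower().split())))
    return scored[:k]

def answer(question, facts, generate):
    """Build a grounded prompt and let the model answer from it."""
    context = "\n".join(retrieve(question, facts))
    prompt = (
        "Answer using only the facts below; say 'unknown' if they "
        "are insufficient.\n" + context + "\nQuestion: " + question
    )
    return generate(prompt)

# Example: echo the grounded prompt instead of calling a real model.
print(answer("What treats pneumococcal pneumonia?",
             ["Penicillin treats pneumococcal pneumonia."],
             generate=lambda p: p))
```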
'It can have a conversation with you'

"SCAI is different from other large language models because it can have a conversation with you and, as a human-computer partnership, can add to your decision-making and thinking based on its own reasoning," Elkin says. He concludes: "By adding semantics to large language models, we are providing them with the ability to reason similarly to the way we do when practicing evidence-based medicine."

Because it can access such vast amounts of data, SCAI also has the potential to improve patient safety, improve access to care and "democratize specialty care," Elkin says, by making medical information on specialties and subspecialties accessible to primary care providers and even to patients.

While the power of SCAI is impressive, Elkin stresses that its role will be to augment, not replace, physicians. "Artificial intelligence isn't going to replace doctors," he says, "but a doctor who uses AI may replace a doctor who does not."

In addition to Elkin, UB co-authors from the Department of Biomedical Informatics are Guresh Mehta; Frank LeHouillier; Melissa Resnick, PhD; Crystal Tomlin, PhD; Skyler Resendez, PhD; and Jiaxing Liu. Sarah Mullin, PhD, of Roswell Park Comprehensive Cancer Center, and Jonathan R. Nebeker, MD, and Steven H. Brown, MD, both of the Department of Veterans Affairs, also are co-authors. The work was funded by grants from the National Institutes of Health and the Department of Veterans Affairs.
[2]
Clinical AI tool scores highest yet on the United States Medical Licensing Exam
University at Buffalo | Apr 22, 2025

A powerful clinical artificial intelligence tool developed by University at Buffalo biomedical informatics researchers has demonstrated remarkable accuracy on all three parts of the United States Medical Licensing Exam (Step exams), according to a paper published today (April 22) in JAMA Network Open.

Journal reference: Elkin, P. L., et al. (2025). Semantic Clinical Artificial Intelligence vs Native Large Language Model Performance on the USMLE. JAMA Network Open. doi.org/10.1001/jamanetworkopen.2025.6359.
[3]
An AI tool grounded in evidence-based medicine outperforms other AI tools -- and most doctors -- on USMLE exams
[4]
An AI tool grounded in evidence-based medicine outperformed other AI tools - and most doctors - on USMLE exams
A new AI tool called SCAI, developed by University at Buffalo researchers, has achieved unprecedented accuracy on the United States Medical Licensing Exam, outperforming most physicians and other AI models.
Researchers at the University at Buffalo have developed a groundbreaking clinical artificial intelligence tool called Semantic Clinical Artificial Intelligence (SCAI), which has demonstrated exceptional performance on the United States Medical Licensing Exam (USMLE). The study, published in JAMA Network Open, reveals that SCAI outperformed most physicians and all other AI tools tested so far [1].
SCAI achieved remarkable scores on all three parts of the USMLE, with its most advanced version scoring 95.2% on Step 3 of the exam. In comparison, a GPT-4 Omni tool scored 90.5% on the same test [2]. This performance demonstrates SCAI's potential to become a critical partner for physicians in clinical decision-making.
Unlike traditional AI tools that rely on statistical associations in online data, SCAI employs complex semantic reasoning. Dr. Peter L. Elkin, lead author and chair of the Department of Biomedical Informatics at the University at Buffalo, explains that SCAI can "add to your decision-making and thinking based on its own reasoning" [3].
SCAI's impressive capabilities stem from its vast knowledge base, which includes:
- 13 million medical facts, along with the possible interactions between those facts
- recent medical literature and clinical guidelines
- genomic data, drug information, discharge recommendations and patient safety data
- semantic networks built from subject-relation-object triples, such as "Penicillin treats pneumococcal pneumonia"
This comprehensive approach allows SCAI to reason similarly to how medical professionals think when practicing evidence-based medicine [4].
The researchers believe SCAI has the potential to:
- improve patient safety
- improve access to care
- "democratize specialty care" by making information on specialties and subspecialties accessible to primary care providers and even to patients
While SCAI's capabilities are impressive, Dr. Elkin emphasizes that its role will be to augment, not replace, physicians. He states, "Artificial intelligence isn't going to replace doctors, but a doctor who uses AI may replace a doctor who does not" [1].
The development of SCAI represents a significant advancement in clinical AI, potentially revolutionizing medical education, decision-making, and patient care. As AI tools like SCAI continue to evolve, they may become indispensable partners for healthcare professionals, enhancing the quality and efficiency of medical practice across various specialties.
Summarized by Navi