2 Sources
[1]
AI helps identify childhood cancer survivors needing extra support
St. Jude Children's Research Hospital | Mar 27, 2026

Artificial intelligence (AI) could help physicians determine whether survivors of childhood cancer need extra support, and the more information included in AI prompting, the better its performance. This finding, published today in Communications Medicine by scientists from St. Jude Children's Research Hospital, may guide future integration of AI into clinical workflows.

The scientists observed how well large language models, a type of AI, could analyze interviews with young survivors and their caregivers to detect multiple symptoms causing severe disruptions in their daily lives. By comparing different prompting approaches, the researchers found that more complex prompts, which provided additional information to the models, performed best. The results suggest that future efforts to leverage AI to improve survivors' care should favor these sophisticated prompting strategies over simpler ones.

"About 40%-60% of a clinical encounter is a patient talking to their physician about symptoms and related health experiences," said corresponding author I-Chan Huang, PhD, St. Jude Department of Epidemiology & Cancer Control. "We have provided a proof of concept that large language models could help analyze that underutilized conversational data to detect symptom severity and its functional impact and assist physician decision-making to provide better care to survivors."

Comparing prompting strategies for survivorship

Children who have cancer are treated during a critical time in their development, which can have a ripple effect later in life. Cancer- and treatment-related effects can occur long after the initial disease is cured. However, identifying which survivors have symptoms severe enough to need extra, targeted support is difficult for physicians. Much of the data that informs that identification exists in transcripts of conversations and answers to open-ended survey questions that cannot be reviewed quickly.
Newer language-based AI offers physicians an opportunity to analyze, understand and use that information to help survivors. The researchers interviewed 30 survivors between 8 and 17 years old and their caregivers. Two human experts analyzed the conversation transcripts for signs of excessive pain and fatigue, yielding more than 800 analyzable pieces of information, and categorized the symptoms by severity and by their physical, cognitive or social impact. After that gold-standard analysis was complete, the scientists gave the same transcripts to two large language models, ChatGPT and Llama, using four styles of prompts. Both models analyzed the data in a way broadly similar to the experts, though their performance depended on the prompt used.

Prompting is the art of instructing AI to perform a task. The researchers compared four common prompting strategies: two simple and two complex. The simple approaches were zero-shot and few-shot prompting, which provide no or minimal information beyond the basic instructions. These approaches produced unstable and inaccurate results.

"We found that simple prompts were not effective," Huang said. "However, our more sophisticated prompting strategies performed significantly better and had a higher concurrence with our human reviewers."

The two complex strategies were chain-of-thought and generated knowledge prompting. Chain-of-thought uses step-by-step logical instructions, while generated knowledge asks the model to produce background information before it receives the instructions. Both complex methods distinguished the physical and cognitive impact of symptoms on survivors well, while showing only a moderate ability to detect social impacts. Clinical use will require much more testing, but these early results suggest that chain-of-thought, generated knowledge or similar prompting methods should be favored in future work.
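To make the four strategies concrete, the sketch below builds one prompt in each style for a hypothetical transcript-labeling task. The study's actual prompts, label set and model calls are not reproduced in this article, so the wording, the severity labels and the example transcript here are all invented for illustration.

```python
# Illustrative versions of the four prompting styles compared in the study.
# All wording and labels are hypothetical, not the researchers' actual prompts.

def zero_shot(transcript: str) -> str:
    # Bare instruction only: no examples, no extra context.
    return ("Rate the severity of pain or fatigue described below as "
            "none, mild, moderate, or severe.\n\n" + transcript)

def few_shot(transcript: str) -> str:
    # A handful of labeled examples precede the task.
    examples = ('Text: "My legs ache after one flight of stairs." -> moderate\n'
                'Text: "I feel fine most days." -> none\n')
    return ("Rate severity (none/mild/moderate/severe).\n"
            + examples + "\nText: " + transcript)

def chain_of_thought(transcript: str) -> str:
    # Step-by-step reasoning instructions before the final answer.
    return ("Read the transcript. Step 1: list every mention of pain or "
            "fatigue. Step 2: note whether it limits physical, cognitive, or "
            "social activity. Step 3: give one overall severity rating "
            "(none/mild/moderate/severe).\n\n" + transcript)

def generated_knowledge(transcript: str) -> str:
    # Ask the model to produce background knowledge first, then do the task.
    return ("First, write a short summary of how pain and fatigue typically "
            "affect childhood cancer survivors' daily functioning. Then, "
            "using that background, rate the severity described below "
            "(none/mild/moderate/severe).\n\n" + transcript)

prompt = chain_of_thought("I'm too tired to finish my homework most nights.")
```

The practical difference is how much scaffolding the model receives: the two "simple" styles leave interpretation entirely to the model, while the two "complex" styles supply either a reasoning procedure or self-generated background knowledge before the rating is requested.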
The findings provide one of the first concrete examples of how AI may be able to improve survivorship care. "These AI-driven approaches provide us with a new way to unlock the complex symptom information hidden in the wealth of patient-physician conversations that currently go unused," Huang said. "By making this information easier to capture and analyze, we can help physicians better identify survivors who need additional support in real time and improve care for this growing population."

Authors and funding

The study's first author is Jin-ah Sim, formerly of St. Jude. The study's other authors are Madeline Horan, formerly of St. Jude, now Wake Forest University School of Medicine; Xiaolei Huang, University of Memphis; Minsu Kim, Hallym University; Kumar Srivastava, Kirsten Ness and Melissa Hudson, St. Jude; and Justin Baker, formerly of St. Jude, now Stanford University Medical School. The study was supported by grants from the National Cancer Institute (U01CA195547, R21CA202210, R01CA238368 and R01CA258193), a Cancer Center Support (CORE) grant (CA21765) and the American Lebanese Syrian Associated Charities (ALSAC), the fundraising and awareness organization of St. Jude.

St. Jude Children's Research Hospital

Journal reference: DOI: 10.1038/s43856-026-01499-5
[2]
Artificial Intelligence Learns to Make Sense of Childhood Cancer Survivors' Health Care Needs | Newswise
Newswise -- (MEMPHIS, Tenn. - March 26, 2026) Artificial intelligence (AI) could help physicians determine whether survivors of childhood cancer need extra support, and the more information included in AI prompting, the better its performance. This finding, published today in Communications Medicine by scientists from St. Jude Children's Research Hospital, may guide future integration of AI into clinical workflows.

The scientists observed how well large language models, a type of AI, could analyze interviews with young survivors and their caregivers to detect multiple symptoms causing severe disruptions in their daily lives. By comparing different prompting approaches, the researchers found that more complex prompts, which provided additional information to the models, performed best. The results suggest that future efforts to leverage AI to improve survivors' care should favor these sophisticated prompting strategies over simpler ones.

"About 40%-60% of a clinical encounter is a patient talking to their physician about symptoms and related health experiences," said corresponding author I-Chan Huang, PhD, St. Jude Department of Epidemiology & Cancer Control. "We have provided a proof of concept that large language models could help analyze that underutilized conversational data to detect symptom severity and its functional impact and assist physician decision-making to provide better care to survivors."

Children who have cancer are treated during a critical time in their development, which can have a ripple effect later in life. Cancer- and treatment-related effects can occur long after the initial disease is cured. However, identifying which survivors have symptoms severe enough to need extra, targeted support is difficult for physicians. Much of the data that informs that identification exists in transcripts of conversations and answers to open-ended survey questions that cannot be reviewed quickly.
Newer language-based AI offers physicians an opportunity to analyze, understand and use that information to help survivors. The researchers interviewed 30 survivors between 8 and 17 years old and their caregivers. Two human experts analyzed the conversation transcripts for signs of excessive pain and fatigue, yielding more than 800 analyzable pieces of information, and categorized the symptoms by severity and by their physical, cognitive or social impact. After that gold-standard analysis was complete, the scientists gave the same transcripts to two large language models, ChatGPT and Llama, using four styles of prompts. Both models analyzed the data in a way broadly similar to the experts, though their performance depended on the prompt used.

Prompting is the art of instructing AI to perform a task. The researchers compared four common prompting strategies: two simple and two complex. The simple approaches were zero-shot and few-shot prompting, which provide no or minimal information beyond the basic instructions. These approaches produced unstable and inaccurate results.

"We found that simple prompts were not effective," Huang said. "However, our more sophisticated prompting strategies performed significantly better and had a higher concurrence with our human reviewers."

The two complex strategies were chain-of-thought and generated knowledge prompting. Chain-of-thought uses step-by-step logical instructions, while generated knowledge asks the model to produce background information before it receives the instructions. Both complex methods distinguished the physical and cognitive impact of symptoms on survivors well, while showing only a moderate ability to detect social impacts. Clinical use will require much more testing, but these early results suggest that chain-of-thought, generated knowledge or similar prompting methods should be favored in future work.
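The "concurrence with our human reviewers" mentioned above is, in general terms, a question of inter-rater agreement between the model's labels and the experts' gold standard. The sketch below scores hypothetical severity labels with Cohen's kappa, a standard chance-corrected agreement statistic; the labels and values are invented for illustration, and the study's actual metrics and data are reported in the Communications Medicine paper.

```python
# Minimal sketch of scoring model labels against a human gold standard with
# Cohen's kappa. All labels below are invented; this is not the study's data.
from collections import Counter

def cohens_kappa(gold: list[str], pred: list[str]) -> float:
    assert len(gold) == len(pred) and gold
    n = len(gold)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(g == p for g, p in zip(gold, pred)) / n
    # Expected chance agreement, from each rater's label frequencies.
    gc, pc = Counter(gold), Counter(pred)
    p_e = sum(gc[label] * pc[label] for label in gc) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical expert labels vs. model labels for six transcript excerpts.
gold = ["severe", "mild", "none", "moderate", "mild", "none"]
pred = ["severe", "mild", "mild", "moderate", "mild", "none"]
kappa = cohens_kappa(gold, pred)
```

Kappa of 1.0 means perfect agreement and 0 means agreement no better than chance, which is why it is a more honest score than raw percent agreement when some severity labels are far more common than others.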
The findings provide one of the first concrete examples of how AI may be able to improve survivorship care. "These AI-driven approaches provide us with a new way to unlock the complex symptom information hidden in the wealth of patient-physician conversations that currently go unused," Huang said. "By making this information easier to capture and analyze, we can help physicians better identify survivors who need additional support in real time and improve care for this growing population."

Authors and funding

The study's first author is Jin-ah Sim, formerly of St. Jude. The study's other authors are Madeline Horan, formerly of St. Jude, now Wake Forest University School of Medicine; Xiaolei Huang, University of Memphis; Minsu Kim, Hallym University; Kumar Srivastava, Kirsten Ness and Melissa Hudson, St. Jude; and Justin Baker, formerly of St. Jude, now Stanford University Medical School. The study was supported by grants from the National Cancer Institute (U01CA195547, R21CA202210, R01CA238368 and R01CA258193), a Cancer Center Support (CORE) grant (CA21765) and the American Lebanese Syrian Associated Charities (ALSAC), the fundraising and awareness organization of St. Jude.

St. Jude Children's Research Hospital is leading the way the world understands, treats, and cures childhood catastrophic diseases. As the only National Cancer Institute-designated Comprehensive Cancer Center devoted solely to children, St. Jude advances groundbreaking research and shares its discoveries worldwide to accelerate progress in pediatric medicine. Treatments developed at St. Jude have helped push the overall childhood cancer survival rate from 20% to more than 80% since the hospital opened more than 60 years ago. Through collaboration and innovation, St. Jude is working to ensure that children everywhere have access to the best possible care. To learn more, visit stjude.org, read St. Jude Progress, a digital magazine, and follow St. Jude on social media at @stjuderesearch.
Artificial intelligence could help physicians determine which childhood cancer survivors need additional support by analyzing patient conversations. Researchers at St. Jude Children's Research Hospital found that large language models using complex prompting strategies can detect symptom severity in interviews with survivors and caregivers, potentially transforming how physicians use conversational data to improve survivorship care.
Artificial intelligence is learning to identify childhood cancer survivors who need extra support by analyzing patient conversations, according to research published in Communications Medicine by scientists from St. Jude Children's Research Hospital [1]. The study demonstrates how large language models can process interviews with young survivors and their caregivers to detect multiple symptoms causing severe disruptions in daily life, offering physicians a new tool to improve survivorship care.
Source: News-Medical
The research addresses a critical challenge in pediatric oncology: identifying which childhood cancer survivors have symptoms severe enough to warrant targeted intervention. "About 40%-60% of a clinical encounter is a patient talking to their physician about symptoms and related health experiences," said corresponding author I-Chan Huang, PhD, of the St. Jude Department of Epidemiology & Cancer Control [2]. "We have provided a proof of concept that large language models could help analyze that underutilized conversational data to detect symptom severity and its functional impact and assist physician decision-making to provide better care to survivors."

The researchers interviewed 30 survivors between 8 and 17 years old along with their caregivers, generating more than 800 analyzable pieces of conversational data [1]. Two human experts first analyzed the conversation transcripts for signs of excessive pain and fatigue, categorizing symptoms by severity and their physical, cognitive or social impact. The scientists then tested two large language models, ChatGPT and Llama, using four different AI prompting strategies to see how well they matched the human experts' assessments.

The study compared two simple approaches, zero-shot and few-shot prompting, which provide minimal information beyond basic instructions, against two complex prompting methods: chain-of-thought prompting and generated knowledge prompting. The simple approaches produced unstable and inaccurate results. "We found that simple prompts were not effective," Huang explained. "However, our more sophisticated prompting strategies performed significantly better and had a higher concurrence with our human reviewers" [2].

Chain-of-thought prompting uses step-by-step logical instructions, while generated knowledge prompting asks the model to generate background information before receiving its instructions. Both complex prompting methods successfully distinguished the physical and cognitive impact of symptoms on survivors, though they showed only moderate ability to detect social impacts [1].

Children treated for cancer face unique challenges because treatment occurs during critical developmental periods, creating a ripple effect that can manifest long after the initial disease is cured. Cancer- and treatment-related effects can emerge years later, but identifying which survivors need additional support is difficult for physicians. Much of the relevant data exists in transcripts of conversations and answers to open-ended survey questions that cannot be reviewed quickly in clinical settings [2].

"These AI-driven approaches provide us with a new way to unlock the complex symptom information hidden in the wealth of patient-physician conversations that currently go unused," Huang said. "By making this information easier to capture and analyze, we can help physicians better identify survivors who need additional support in real time and improve care for this growing population" [1].
While clinical use will require much more testing, these early results suggest that chain-of-thought, generated knowledge or similar prompting methods should guide future integration of AI into clinical workflows. The findings provide one of the first concrete examples of how artificial intelligence may improve long-term survivorship outcomes for childhood cancer survivors. The study, led by first author Jin-ah Sim and supported by grants from the National Cancer Institute, offers a data analysis framework that could transform how physicians use health experiences shared during clinical encounters to deliver better care [2].

For the growing population of childhood cancer survivors, this research represents a potential shift in how physicians process conversational data to identify symptom severity and support clinical decision-making. By leveraging artificial intelligence to analyze patient conversations that currently go underutilized, healthcare providers may soon have a more efficient way to ensure survivors receive the targeted support they need.
Summarized by Navi