2 Sources
[1]
AI system offers new hope for diagnosing PTSD in children
University of South Florida | Jun 30, 2025

Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they're feeling. Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence. Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time. The study, published in Pattern Recognition Letters, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy. Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression. "This really started when I noticed how intense some children's facial expressions became during trauma interviews," Salloum said. "Even when they weren't saying much, you could see what they were going through on their faces. That's when I talked to Shaun about whether AI could help detect that in a structured way." Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze and facial landmarks, such as the eyes and mouth. "That's what makes our approach unique," Canavan said. "We don't use raw video. 
We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician." The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan's AI models extracted a range of subtle facial muscle movements linked to emotional expression. The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations. This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or their cognitive abilities. "That's where the AI could offer a valuable supplement," Salloum said. "Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews." The team hopes to expand the study to further examine any potential bias from gender, culture and age, especially preschoolers, where verbal communication is limited and diagnosis relies almost entirely on parent observation. Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD or anxiety, mirroring real-world cases and offering promise for the system's accuracy. "Data like this is incredibly rare for AI systems, and we're proud to have conducted such an ethically sound study. That's crucial when you're working with vulnerable subjects," Canavan said. 
"Now we have promising potential from this software to give informed, objective insights to the clinician." If validated in larger trials, USF's approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.

Journal reference: Aathreya, S., et al. (2025). Multimodal, context-based dataset of children with Post Traumatic Stress Disorder. Pattern Recognition Letters. doi.org/10.1016/j.patrec.2025.05.003.
[2]
AI Facial Analysis Detects PTSD - Neuroscience News
Summary: Diagnosing PTSD in children is often hindered by limited communication and emotional awareness, but new research is using AI to bridge that gap. By analyzing facial movements during interviews, researchers created a privacy-preserving tool that can identify PTSD-related expression patterns. Their system does not use raw video but instead tracks non-identifying facial cues such as eye gaze and mouth movement. The study showed that children's facial expressions during clinician-led sessions were especially revealing. Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they're feeling. Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence. Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time. The study, published in Pattern Recognition Letters, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy. Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression. "This really started when I noticed how intense some children's facial expressions became during trauma interviews," Salloum said. "Even when they weren't saying much, you could see what they were going through on their faces. 
That's when I talked to Shaun about whether AI could help detect that in a structured way." Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze and facial landmarks, such as the eyes and mouth. "That's what makes our approach unique," Canavan said. "We don't use raw video. We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician." The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan's AI models extracted a range of subtle facial muscle movements linked to emotional expression. The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations. This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or their cognitive abilities. "That's where the AI could offer a valuable supplement," Salloum said. "Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews." The team hopes to expand the study to further examine any potential bias from gender, culture and age, especially preschoolers, where verbal communication is limited and diagnosis relies almost entirely on parent observation. 
Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD or anxiety, mirroring real-world cases and offering promise for the system's accuracy. "Data like this is incredibly rare for AI systems, and we're proud to have conducted such an ethically sound study. That's crucial when you're working with vulnerable subjects," Canavan said. "Now we have promising potential from this software to give informed, objective insights to the clinician." If validated in larger trials, USF's approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.

Multimodal, context-based dataset of children with Post Traumatic Stress Disorder

The conventional method of diagnosing Post Traumatic Stress Disorder by a clinician has been subjective in nature, taking specific events and context into consideration. Developing AI-based solutions in these sensitive areas calls for adopting similar methodologies. Considering this, we propose a de-identified dataset of child subjects who are clinically diagnosed with/without PTSD in multiple contexts. This dataset can help facilitate future research in this area. Each subject in the dataset undergoes several sessions with clinicians and/or a guardian that bring out various emotional responses from the participant. We collect videos of these sessions and, for each video, we extract several facial features that detach the identity information of the subjects. These include facial landmarks, head pose, action units (AU), and eye gaze. To evaluate this dataset, we propose a baseline approach to identifying PTSD using the encoded action unit (AU) intensities of the video frames as the features. 
We show that AU intensities intrinsically capture the expressiveness of the subject and can be leveraged in modeling PTSD solutions. The AU features are used to train a transformer for classification, where we propose encoding the low-dimensional AU intensity vectors using a learnable Fourier representation. We show that this encoding, combined with a standard Multilayer Perceptron (MLP) mapping of AU intensities, yields a superior result compared to either component alone. We apply the approach to various contexts of PTSD discussions (e.g., clinician-child discussion), and our experiments show that using context is essential in classifying videos of children.
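The encoding step the abstract describes — a learnable Fourier representation of the low-dimensional AU intensity vectors, combined with an MLP-style mapping of the raw intensities — can be sketched roughly as follows. This is an illustrative NumPy forward pass under assumed dimensions (17 AUs, 64 Fourier frequencies, 128-dimensional tokens) with a single linear layer standing in for the MLP; it is not the authors' implementation, and in the actual model these parameters would be trained jointly with the transformer classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AUS = 17     # assumed number of facial action units per frame
N_FREQS = 64   # assumed number of learnable Fourier frequencies
D_MODEL = 128  # assumed transformer token dimension

# "Learnable" parameters, randomly initialized here for illustration.
B = rng.normal(size=(N_AUS, N_FREQS))             # Fourier frequency matrix
W_mlp = rng.normal(size=(N_AUS, D_MODEL))         # linear stand-in for the MLP mapping
W_proj = rng.normal(size=(2 * N_FREQS, D_MODEL))  # projects Fourier features to D_MODEL

def encode_frames(au_intensities):
    """Map per-frame AU intensity vectors (T, N_AUS) to transformer
    inputs (T, D_MODEL): a sin/cos Fourier encoding of the intensities,
    projected and summed with a direct mapping of the raw intensities."""
    proj = 2 * np.pi * au_intensities @ B                           # (T, N_FREQS)
    fourier = np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (T, 2*N_FREQS)
    return fourier @ W_proj + au_intensities @ W_mlp                # (T, D_MODEL)

frames = rng.uniform(0, 5, size=(300, N_AUS))  # 300 frames of AU intensities
tokens = encode_frames(frames)
print(tokens.shape)  # (300, 128)
```

The resulting token sequence would then be fed to a transformer for per-video PTSD classification, with one sequence per recorded session context.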
Researchers at the University of South Florida have developed an AI-based system that analyzes facial expressions to help diagnose PTSD in children, offering a potential breakthrough in mental health assessment.
Researchers at the University of South Florida have developed a groundbreaking artificial intelligence system that could revolutionize the diagnosis of post-traumatic stress disorder (PTSD) in children. The study, published in Pattern Recognition Letters, marks the first time context-aware PTSD classification has been incorporated while fully preserving participant privacy [1].
Traditionally, diagnosing PTSD in children has relied on subjective clinical interviews and self-reported questionnaires. However, these methods can be limited by factors such as cognitive development, language skills, and emotional suppression. Professor Alison Salloum from the USF School of Social Work noticed intense facial expressions in children during trauma interviews, which sparked the idea for this innovative approach [2].
In collaboration with Associate Professor Shaun Canavan from the Bellini College of AI, Cybersecurity and Computing, the team developed a system that analyzes de-identified facial data. The technology focuses on non-identifying features such as head pose, eye gaze, and facial landmarks like eyes and mouth movements [1].
Source: Neuroscience News
A key feature of this system is its emphasis on patient privacy. Unlike traditional video analysis, this approach completely removes subject identification, retaining only data about facial movements. The system also considers the context of whether the child is interacting with a parent or a clinician [2].
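As a rough illustration of what such a de-identified record might contain, each frame can be reduced to derived signals plus a context label, with no pixels retained. The field names below are assumptions for illustration, not the study's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DeidentifiedFrame:
    """One video frame reduced to non-identifying signals.

    No raw imagery is stored, so the child cannot be re-identified
    from the record. Field names are illustrative only.
    """
    head_pose: tuple      # (pitch, yaw, roll) in degrees
    eye_gaze: tuple       # gaze direction vector
    landmarks: list       # 2D facial landmark coordinates
    au_intensities: list  # facial action unit intensities
    context: str          # "clinician" or "parent" conversation

frame = DeidentifiedFrame(
    head_pose=(2.1, -5.4, 0.3),
    eye_gaze=(0.02, -0.11, 0.99),
    landmarks=[(0.31, 0.42), (0.35, 0.41)],
    au_intensities=[0.0, 1.2, 0.4],
    context="clinician",
)
print(frame.context)  # clinician
```

Keeping the context label alongside the movement features is what lets downstream models distinguish clinician-led interviews from parent-child conversations.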
The research team built a dataset from 18 sessions with children sharing emotional experiences. Each session provided over 100 minutes of video, with approximately 185,000 frames per video. AI models extracted subtle facial muscle movements linked to emotional expression [1].
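As a quick sanity check on those figures (the article reports only the totals, so the exact frame rate is an inference): roughly 185,000 frames over a bit more than 100 minutes is consistent with standard consumer video at about 30 frames per second.

```python
frames = 185_000
minutes = 100
fps = frames / (minutes * 60)  # frames per second
print(round(fps, 1))  # 30.8
```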
Key findings include:
- Distinct patterns are detectable in the facial movements of children with PTSD.
- Facial expressions during clinician-led interviews were more revealing than those during parent-child conversations.
These results align with existing psychological research suggesting that children may be more emotionally expressive with therapists than with parents [2].
Source: News-Medical
While still in its early stages, the technology has far-reaching potential applications. The system could provide real-time feedback to practitioners during therapy sessions and help monitor progress without repeated, potentially distressing interviews [1].
The researchers plan to expand the study to examine potential biases related to gender, culture, and age, with a particular focus on preschoolers, where verbal communication is limited [2].
The study's ethical approach to working with vulnerable subjects is noteworthy. The researchers emphasize the rarity of such data in AI systems and the importance of conducting ethically sound studies in this sensitive area [2].
As this technology progresses, it could redefine how PTSD in children is diagnosed and tracked, bringing mental health care into the future by leveraging everyday tools like video analysis and AI [1]. However, further validation through larger trials will be crucial to establish its effectiveness and reliability in real-world clinical settings.