3 Sources
[1]
AI system offers new hope for diagnosing PTSD in children
University of South Florida, Jun 30, 2025. Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they're feeling. Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence. Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time. The study, published in Pattern Recognition Letters, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy. Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression. "This really started when I noticed how intense some children's facial expressions became during trauma interviews," Salloum said. "Even when they weren't saying much, you could see what they were going through on their faces. That's when I talked to Shaun about whether AI could help detect that in a structured way." Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze and facial landmarks, such as the eyes and mouth. "That's what makes our approach unique," Canavan said. "We don't use raw video. 
We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician." The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan's AI models extracted a range of subtle facial muscle movements linked to emotional expression. The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations. This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or their cognitive abilities. "That's where the AI could offer a valuable supplement," Salloum said. "Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews." The team hopes to expand the study to further examine any potential bias from gender, culture and age, especially preschoolers, where verbal communication is limited and diagnosis relies almost entirely on parent observation. Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD or anxiety, mirroring real-world cases and offering promise for the system's accuracy. "Data like this is incredibly rare for AI systems, and we're proud to have conducted such an ethically sound study. That's crucial when you're working with vulnerable subjects," Canavan said. 
"Now we have promising potential from this software to give informed, objective insights to the clinician." If validated in larger trials, USF's approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.
Source: University of South Florida.
Journal reference: Aathreya, S., et al. (2025). Multimodal, context-based dataset of children with Post Traumatic Stress Disorder. Pattern Recognition Letters. doi.org/10.1016/j.patrec.2025.05.003.
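The de-identified feature set described above (facial landmarks, head pose, eye gaze, action units, plus whether the child was speaking to a parent or a clinician) can be pictured as a simple per-frame record. The sketch below is a hypothetical illustration; the field names and dimensions are assumptions, not the study's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeidentifiedFrame:
    """One video frame reduced to movement descriptors only, no pixels or identity."""
    landmarks: List[float]       # flattened 2D facial landmark coordinates
    head_pose: List[float]       # e.g. [yaw, pitch, roll] in degrees
    gaze: List[float]            # eye-gaze direction vector
    au_intensities: List[float]  # facial action unit intensities
    context: str                 # "clinician" or "parent" session

frame = DeidentifiedFrame(
    landmarks=[0.0] * 136,           # 68 landmarks x (x, y) -- an assumption
    head_pose=[2.1, -4.7, 0.3],
    gaze=[0.05, -0.12, 0.99],
    au_intensities=[0.0] * 17,       # number of AUs is an assumption
    context="clinician",
)
print(frame.context)
```

Storing only records like this, rather than raw video, is what lets the system factor in conversational context while keeping the child unidentifiable.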
[2]
AI Facial Analysis Detects PTSD - Neuroscience News
Summary: Diagnosing PTSD in children is often hindered by limited communication and emotional awareness, but new research is using AI to bridge that gap. By analyzing facial movements during interviews, researchers created a privacy-preserving tool that can identify PTSD-related expression patterns. Their system does not use raw video but instead tracks non-identifying facial cues such as eye gaze and mouth movement. The study showed that children's facial expressions during clinician-led sessions were especially revealing. Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they're feeling. Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence. Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time. The study, published in Pattern Recognition Letters, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy. Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression. "This really started when I noticed how intense some children's facial expressions became during trauma interviews," Salloum said. "Even when they weren't saying much, you could see what they were going through on their faces. 
That's when I talked to Shaun about whether AI could help detect that in a structured way." Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze and facial landmarks, such as the eyes and mouth. "That's what makes our approach unique," Canavan said. "We don't use raw video. We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician." The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan's AI models extracted a range of subtle facial muscle movements linked to emotional expression. The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations. This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or their cognitive abilities. "That's where the AI could offer a valuable supplement," Salloum said. "Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews." The team hopes to expand the study to further examine any potential bias from gender, culture and age, especially preschoolers, where verbal communication is limited and diagnosis relies almost entirely on parent observation. 
Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD or anxiety, mirroring real-world cases and offering promise for the system's accuracy. "Data like this is incredibly rare for AI systems, and we're proud to have conducted such an ethically sound study. That's crucial when you're working with vulnerable subjects," Canavan said. "Now we have promising potential from this software to give informed, objective insights to the clinician." If validated in larger trials, USF's approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.
Multimodal, context-based dataset of children with Post Traumatic Stress Disorder (abstract)
The conventional method of diagnosing Post Traumatic Stress Disorder by a clinician has been subjective in nature, taking specific events and context into consideration. Developing AI-based solutions in these sensitive areas calls for adopting similar methodologies. Considering this, we propose a de-identified dataset of child subjects who are clinically diagnosed with or without PTSD in multiple contexts. This dataset can help facilitate future research in this area. For each subject in the dataset, the participant undergoes several sessions with clinicians and/or a guardian that bring out various emotional responses from the participant. We collect videos of these sessions, and for each video we extract several facial features that remove the identity information of the subjects. These include facial landmarks, head pose, action units (AU), and eye gaze. To evaluate this dataset, we propose a baseline approach to identifying PTSD using the encoded action unit (AU) intensities of the video frames as the features. 
We show that AU intensities intrinsically capture the expressiveness of the subject and can be leveraged in modeling PTSD solutions. The AU features are used to train a transformer for classification, where we propose encoding the low-dimensional AU intensity vectors using a learnable Fourier representation. We show that this encoding, combined with a standard Multilayer Perceptron (MLP) mapping of AU intensities, yields superior results compared to either component alone. We apply the approach to various contexts of PTSD discussions (e.g., clinician-child discussion), and our experiments show that using context is essential in classifying videos of children.
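The encoding idea in the abstract can be sketched minimally: project a low-dimensional AU-intensity vector through a weight matrix and take cosines and sines of the result. In the paper the weights are learned jointly with the transformer classifier; here the matrix is a fixed random stand-in, and the dimensions (17 AUs, 64 Fourier features) are assumptions for illustration only:

```python
import numpy as np

def fourier_encode(au_intensities, W):
    """Map an AU-intensity vector x to [cos(xW) | sin(xW)] / sqrt(D).
    In the real model W would be a learnable parameter."""
    proj = au_intensities @ W
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[1])

rng = np.random.default_rng(0)
n_aus, fourier_dim = 17, 64                       # assumed dimensions
W = rng.normal(size=(n_aus, fourier_dim))         # fixed here, learned in practice

frame_aus = rng.uniform(0.0, 5.0, size=n_aus)     # synthetic AU intensities
encoded = fourier_encode(frame_aus, W)
print(encoded.shape)                              # (128,)
```

The sketch covers only the Fourier half of the paper's approach; the abstract reports that combining this encoding with a plain MLP mapping of the same AU intensities, then classifying with a transformer, outperforms either encoding alone.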
[3]
AI to Track Facial Expressions to Detect PTSD Symptoms in Children | AIM
The tool can pick up on emotional signals even trained professionals might miss in real time. A research team from the University of South Florida (USF) has developed an AI system that can identify post-traumatic stress disorder (PTSD) in children. The project addresses a longstanding clinical dilemma: diagnosing PTSD in children who may not have the emotional vocabulary, cognitive development or comfort to articulate their distress. Traditional methods such as subjective interviews and self-reported questionnaires often fall short. This is where AI steps in. "Even when they weren't saying much, you could see what they were going through on their faces," Alison Salloum, professor at the USF School of Social Work, reportedly said. Her observations during trauma interviews laid the foundation for collaboration with Shaun Canavan, an expert in facial analysis at USF's Bellini College of Artificial Intelligence, Cybersecurity, and Computing. The study introduces a privacy-first, context-aware classification model that analyses subtle facial muscle movements. However, instead of using raw footage, the system extracts non-identifiable metrics such as eye gaze, mouth curvature, and head position, ensuring ethical boundaries are respected when working with vulnerable populations. "We don't use raw video. We completely get rid of subject identification and only keep data about facial movement," Canavan reportedly emphasised. The AI also accounts for conversational context, whether a child is speaking to a parent or a therapist, which significantly influences emotional expressivity. Across 18 therapy sessions, with over 100 minutes of footage per child and approximately 185,000 frames each, the AI identified consistent facial expression patterns in children diagnosed with PTSD. 
Notably, children were more expressive with clinicians than with parents, a finding that aligns with psychological literature suggesting shame or emotional avoidance often inhibits open communication at home. While still in its early stages, the tool is not being pitched as a replacement for therapists. Instead, it's designed as a clinical augmentation: a second set of 'digital' eyes that can pick up on emotional signals even trained professionals might miss in real time. "Data like this is incredibly rare for AI systems," Canavan added. "That's what makes this so promising. We now have an ethically sound, objective way to support mental health assessments." If validated on a larger scale, the system could transform mental health diagnostics for children, especially for pre-verbal or very young patients, by turning non-verbal cues into actionable insights.
Researchers at the University of South Florida have developed an AI system that can identify post-traumatic stress disorder (PTSD) in children by analyzing facial expressions, offering a new tool for diagnosis and treatment monitoring.
Researchers at the University of South Florida have developed a groundbreaking AI system that could revolutionize the diagnosis and monitoring of post-traumatic stress disorder (PTSD) in children. Led by Alison Salloum from the School of Social Work and Shaun Canavan from the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the team has created a tool that analyzes facial expressions to identify PTSD symptoms [1].
Diagnosing PTSD in children has long been a challenge due to limited communication skills, emotional awareness, and cognitive development. Traditional methods rely on subjective clinical interviews and self-reported questionnaires, which can be inadequate for young patients [2]. The new AI system aims to provide an objective, cost-effective tool to assist clinicians in identifying PTSD and tracking recovery over time.
A key feature of this innovative approach is its focus on privacy preservation. The system does not use raw video footage but instead analyzes de-identified data, including head pose, eye gaze, and facial landmarks [1]. This ensures that patient identities are protected while still capturing crucial emotional cues.
The research team built a dataset from 18 therapy sessions, each containing over 100 minutes of video with approximately 185,000 frames per child. The AI models extracted subtle facial muscle movements linked to emotional expression [3]. Key findings include:
- Distinct facial-movement patterns are detectable in children with PTSD.
- Expressions during clinician-led interviews were more revealing than parent-child conversations.
While still in early stages, the researchers believe this technology has far-reaching potential:
- Giving practitioners real-time feedback during therapy sessions.
- Monitoring recovery without repeated, potentially distressing interviews.
The team plans to expand the study to examine potential biases related to gender, culture, and age. They also hope to validate the system's accuracy in cases with co-occurring conditions like depression, ADHD, or anxiety [3].
The researchers emphasize that this AI system is not intended to replace clinicians but to augment their tools and provide objective insights. The study's ethical approach to data collection and analysis sets a precedent for responsible AI development in mental health care [2].
If validated in larger trials, this approach could significantly improve how PTSD in children is diagnosed and tracked, potentially transforming mental health care for young patients.