2 Sources
[1]
Tech can tell exactly when in videos students are learning
A new study combines eye tracking and artificial intelligence to identify the exact moments in an educational video that matter for learning in children. The study could also predict how much children understood from the video based on their eye movements while they were watching it.

The research is preliminary, but it shows promise for some exciting breakthroughs in video education, said Jason Coronel, lead author of the study and associate professor of communication at The Ohio State University.

"Our ultimate goal is to build an AI system that can tell in real time whether a viewer is understanding or not understanding what they are seeing in an educational video," Coronel said. "That would give us the opportunity to dynamically adjust the content for an individual person to help them understand what is being taught."

Coronel conducted the research with Matt Sweitzer, Alex Bonus, Rebecca Dore, and Blue Lerner, an interdisciplinary team of experts in eye tracking, machine learning, and children's media, all affiliated with Ohio State.

The study, published in the Journal of Communication, involved 197 children aged 4 to 8 who watched a four-minute composite video drawn from the popular YouTube series "SciShow Kids" and "Learn Bright." The video taught children about camouflage in animals.

Eye tracking allowed researchers to measure attention to the video in real time, which is critical for learning, Coronel said. After watching the video, children were asked a series of questions to determine what they had learned about camouflage. (Before they watched, the children answered questions to assess their baseline knowledge.)

An AI analysis of the eye-tracking results identified points in the video that were related to whether the children were able to answer the questions about camouflage correctly. For example, one key point came near the beginning, when the video host asked children to help her find her anthropomorphic sidekick, named Squeaks.

"Our machine learning and eye-tracking data indicate that children's eye movements during this early moment are among the strongest predictors of their overall understanding of the video," the study authors wrote. "One possibility, then, is that kids who follow the cue (to help find Squeaks) with focused attention become more engaged and better prepared to understand more complex concepts introduced later."

The analysis identified seven key moments in the video where noticeable shifts in children's eye movements were most strongly linked to how well they understood the concept of animal camouflage.

One study co-author, Alex Bonus, who has expertise in children and media, noted that the seven time points lined up with substantial changes in the video's educational content -- what researchers call "event boundaries." These boundaries mark the points where people perceive that one meaningful experience is ending and a new one is beginning. For example, one event boundary came when the narrator began to define camouflage explicitly and paired the explanation with a visual display of the word.

Coronel emphasized that these results are preliminary and there is a lot yet to understand about what happens at the critical points of a video where learning seems enhanced. "But this method has the potential to help experts design messages with event boundaries that enhance learning," he said.

The findings are especially relevant now, as eye-tracking technology becomes less expensive and more common, Coronel said. That, along with advances in AI, makes possible a future where video learning is truly individualized.

Right now, for example, it can take days or weeks for teachers to find out whether students are understanding their lessons. Often, teachers don't find out until the next test or quiz.

"Imagine a future where eye tracking can tell instantaneously when a person is not understanding a concept in a video lesson, and AI dynamically changes the content to help," he said. "Maybe the video can offer a different example or way of explaining the concept. This could make instruction more personalized, effective and scalable."
[2]
Tech Can Tell Exactly When in Videos Students Are Learning | Newswise
A new study combines AI and eye-tracking technology to identify key learning moments in educational videos for children, paving the way for personalized and adaptive video learning experiences.
A study led by researchers at The Ohio State University has combined eye-tracking technology and artificial intelligence to pinpoint the exact moments in educational videos when children are learning. The approach not only identifies key learning points but also predicts how well children understand the content based on their eye movements [1].
The research, published in the Journal of Communication, involved 197 children aged 4 to 8 who watched a four-minute composite video drawn from the popular YouTube series "SciShow Kids" and "Learn Bright." The video focused on teaching children about animal camouflage [2].
The study identified seven critical moments in the video where noticeable shifts in children's eye movements strongly correlated with their understanding of animal camouflage. These moments aligned with what researchers call "event boundaries": points where one meaningful experience ends and another begins [1].
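The paper's analysis isn't published as code, but the idea the coverage describes, scoring each stretch of the video by how well children's gaze behavior during it predicts their post-test comprehension, can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the array names, the one-second windowing, and the choice of logistic regression are all assumptions.

```python
# Hypothetical sketch: rank video time windows by how well gaze features
# within each window predict post-test comprehension. Synthetic stand-in
# data; NOT the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_children, n_windows, n_features = 197, 240, 4  # 4-min video, 1 s windows
# gaze[i, t] = assumed per-window gaze summaries for child i (e.g. fixation
# count, mean fixation duration, saccade amplitude, gaze dispersion).
gaze = rng.normal(size=(n_children, n_windows, n_features))
understood = rng.integers(0, 2, size=n_children)  # post-test outcome (stand-in)

window_scores = []
for t in range(n_windows):
    X = gaze[:, t, :]  # each child's gaze summary in window t
    # Cross-validated accuracy: how predictive is this window on its own?
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X, understood, cv=5).mean()
    window_scores.append(score)

# The most predictive windows are candidate "key moments" that can then be
# compared against event boundaries in the video's content.
top = np.argsort(window_scores)[::-1][:7]
print("Most predictive 1 s windows (seconds into video):", sorted(top.tolist()))
```

On real data, the windows that score well above chance would be the analogue of the seven moments the study reports; aligning them with the video's content transitions is a separate, manual step.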
Jason Coronel, lead author and associate professor of communication at Ohio State, emphasized the potential of this method: "Our ultimate goal is to build an AI system that can tell in real time whether a viewer is understanding or not understanding what they are seeing in an educational video" [2].
An AI analysis of the eye-tracking results revealed specific points in the video that correlated with children's ability to answer questions about camouflage correctly. For instance, one crucial moment occurred early in the video when the host asked children to help find her anthropomorphic sidekick, Squeaks [1].
The researchers noted, "Our machine learning and eye-tracking data indicate that children's eye movements during this early moment are among the strongest predictors of their overall understanding of the video" [2].
As eye-tracking technology becomes more affordable and widespread, and as AI advances, the potential for personalized video learning experiences grows. Coronel envisions a future where "eye tracking can tell instantaneously when a person is not understanding a concept in a video lesson, and AI dynamically changes the content to help" [1].
This technology could revolutionize education by allowing for real-time adjustments to content, offering different examples or explanations tailored to individual learners. Such an approach could make instruction more personalized, effective, and scalable [2].
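No such system exists yet, but the control loop Coronel describes is straightforward to picture: play a segment, estimate comprehension from recent gaze data, and branch to an alternate explanation when the estimate drops. The sketch below is purely illustrative; every name, the threshold value, the segment structure, and the stand-in comprehension model are invented.

```python
# Hypothetical sketch of the adaptive-video loop described above.
# All identifiers are invented; the comprehension model is a stand-in.
from dataclasses import dataclass
from typing import Optional
import random

@dataclass
class Segment:
    main: str                  # primary teaching clip (hypothetical id)
    alternate: Optional[str]   # optional re-explanation clip, if one exists

def read_recent_gaze(tracker: str) -> list[float]:
    """Stand-in for pulling the last few seconds of gaze features."""
    return [random.random() for _ in range(4)]

def estimate_comprehension(gaze: list[float]) -> float:
    """Stand-in for a trained model mapping gaze features to 0..1."""
    return sum(gaze) / len(gaze)

def play(clip_id: str) -> None:
    print(f"playing {clip_id}")

THRESHOLD = 0.5  # assumed cutoff below which the viewer seems lost

def adaptive_play(segments: list[Segment], tracker: str = "tracker0") -> None:
    for seg in segments:
        play(seg.main)
        if seg.alternate and estimate_comprehension(
                read_recent_gaze(tracker)) < THRESHOLD:
            play(seg.alternate)  # re-teach with a different example

adaptive_play([
    Segment("camouflage_definition", "camouflage_definition_alt"),
    Segment("find_squeaks", None),
])
```

The hard parts, a gaze-to-comprehension model reliable enough to act on and alternate clips authored for each concept, are exactly what the study frames as future work.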
The study was conducted by an interdisciplinary team of experts in eye tracking, machine learning, and children's media, including Matt Sweitzer, Alex Bonus, Rebecca Dore, and Blue Lerner, all affiliated with Ohio State [1].
While the results are preliminary, this research opens up exciting possibilities for enhancing video-based learning and designing more effective educational content. As technology continues to advance, the integration of AI and eye-tracking in education could lead to significant improvements in how we approach and optimize learning experiences for students of all ages.