Curated by THEOUTPOST
On Tue, 10 Dec, 8:01 AM UTC
5 Sources
[1]
AI can watch videos by mimicking a living brain - Earth.com
Scientists at Scripps Research have recently devised MovieNet, a transformative artificial intelligence (AI) model capable of understanding moving images with the subtlety of the human brain. Unlike traditional AI models that excel at analyzing static images, MovieNet is designed to recognize and interpret complex, changing scenes over time. The innovation, detailed in a study published in Proceedings of the National Academy of Sciences, holds significant promise for applications ranging from medical diagnostics to autonomous vehicles.

"The brain doesn't just see still frames; it creates an ongoing visual narrative," said senior author Hollis Cline, the director of the Dorris Neuroscience Center at Scripps Research. "Static image recognition has come a long way, but the brain's capacity to process flowing scenes - like watching a movie - requires a much more sophisticated form of pattern recognition. By studying how neurons capture these sequences, we've been able to apply similar principles to AI."

Cline and first author Masaki Hiramoto, a staff scientist at Scripps Research, based their work on how the brain processes real-world visual sequences. The research focused on tadpoles, whose optic tectum - the brain's visual processing region - efficiently detects and responds to moving stimuli. These neurons assemble fragments of visual information into coherent sequences, mimicking how humans perceive flowing scenes in real life. "Tadpoles have a very good visual system, plus we know that they can detect and respond to moving stimuli efficiently," Hiramoto explained.

The researchers identified neurons in tadpoles' brains that detect features such as shifts in brightness and changes in object rotation. These neurons process visual data in 100 to 600-millisecond clips, combining light and shadow patterns to create a continuous narrative. Cline and Hiramoto trained MovieNet to emulate this neurological process, encoding dynamic video clips as a series of recognizable cues.

To evaluate MovieNet, the researchers presented the model with video clips of tadpoles swimming in various conditions. The model achieved an accuracy of 82.3% in distinguishing normal swimming behaviors from abnormal ones - outperforming human observers by 18% and surpassing the performance of leading AI models like Google's GoogLeNet, which managed only 72% accuracy. "This is where we saw real potential," Cline noted, emphasizing the significance of MovieNet's ability to handle dynamic data. Unlike conventional AI models, MovieNet efficiently processes and compresses information, enabling it to deliver high accuracy with reduced data and computational demands.

One of MovieNet's standout features is its energy efficiency. Conventional AI models require immense computational resources, contributing to a significant environmental footprint. MovieNet, by contrast, reduces energy demands by simplifying data into essential sequences without sacrificing performance. "By mimicking the brain, we've managed to make our AI far less demanding, paving the way for models that aren't just powerful but sustainable," Cline said. This efficiency positions MovieNet as an eco-friendly alternative and makes it practical to scale AI in industries where high costs have been a barrier.

MovieNet's ability to interpret subtle changes over time has profound implications for medicine. The model could assist in early detection of health conditions like neurodegenerative diseases and irregular heart rhythms.
For example, small motor changes associated with Parkinson's disease - often imperceptible to the human eye - could be flagged by the AI, allowing clinicians to intervene earlier.

In drug discovery, MovieNet's dynamic analysis could lead to more precise screening techniques. Traditional methods rely on static snapshots, which can miss critical changes over time. By tracking cellular responses to chemical exposure, MovieNet can provide deeper insights into how drugs interact with biological systems. "Current methods miss critical changes because they can only analyze images captured at intervals," Hiramoto remarked. "Observing cells over time means that MovieNet can track the subtlest changes during drug testing."

MovieNet's innovation goes beyond accuracy; it bridges gaps in existing AI technology by enabling nuanced analysis of dynamic scenes. Its ability to identify and interpret real-time changes in visual data sets a new standard for AI, making it a vital tool for applications requiring continuous monitoring and precise recognition. For example, in autonomous vehicles, the AI could enhance safety by detecting and responding to changes in road conditions or pedestrian behavior. Similarly, in medical imaging, it could improve the detection of subtle anomalies that might signal early disease stages.

Cline and Hiramoto plan to enhance MovieNet's adaptability, expanding its capabilities across various environments and applications. This includes refining the model to handle more complex scenarios and exploring its use in other fields, such as environmental monitoring and wildlife observation.

"Taking inspiration from biology will continue to be a fertile area for advancing AI," Cline said. "By designing models that think like living organisms, we can achieve levels of efficiency that simply aren't possible with conventional approaches." The research team envisions a future where biologically inspired AI like MovieNet revolutionizes technology across sectors. By replicating the brain's sophisticated processing abilities, MovieNet not only advances our understanding of AI but also opens doors to innovations that could redefine industries and improve lives.
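The processing the article describes - carving a video into short 100 to 600 millisecond clips and reducing each clip to a handful of brightness-change cues - can be illustrated with a minimal sketch. The snippet below is purely illustrative: the function names, the 30 fps assumption, the 400 ms clip length, and the grid-based cue definition are all assumptions chosen for demonstration, not MovieNet's actual implementation.

```python
import numpy as np

def segment_into_clips(frames, fps=30.0, clip_ms=400):
    """Split a grayscale video (T, H, W) into roughly clip_ms-long clips.

    The 100-600 ms window described in the article corresponds to about
    3-18 frames at 30 fps; 400 ms (~12 frames) is used here purely as an
    illustrative midpoint.
    """
    frames_per_clip = max(1, int(round(fps * clip_ms / 1000.0)))
    n_clips = len(frames) // frames_per_clip
    return [frames[i * frames_per_clip:(i + 1) * frames_per_clip]
            for i in range(n_clips)]

def clip_to_cues(clip, grid=4):
    """Reduce one clip to a coarse 'cue' vector.

    For each cell of a grid x grid spatial grid, keep only the net change
    in brightness from the first to the last frame -- a crude stand-in for
    the light/shadow-change features the article describes.
    """
    h, w = clip.shape[1:]
    gh, gw = h // grid, w // grid
    diff = clip[-1].astype(float) - clip[0].astype(float)
    cues = [diff[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].mean()
            for r in range(grid) for c in range(grid)]
    return np.array(cues)

# Toy usage: 3 seconds of fake 64x64 video -> a short sequence of cue vectors.
video = np.random.rand(90, 64, 64)
cue_sequence = np.stack([clip_to_cues(c) for c in segment_into_clips(video)])
print(cue_sequence.shape)  # (7, 16): 7 clips, 16 cues each
```

A downstream classifier would then operate on this short sequence of cue vectors rather than on every raw frame, which is the intuition behind the reduced data and computational demands mentioned above.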
[2]
Scientists create AI that 'watches' videos by mimicking the brain
Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists at Scripps Research have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time. This brain-inspired AI model, detailed in a study published in the Proceedings of the National Academy of Sciences on November 19, 2024, can perceive moving scenes by simulating how neurons -- or brain cells -- make real-time sense of the world. Conventional AI excels at recognizing still images, but MovieNet introduces a method for machine-learning models to recognize complex, changing scenes -- a breakthrough that could transform fields from medical diagnostics to autonomous driving, where discerning subtle changes over time is crucial. MovieNet is also more accurate and environmentally sustainable than conventional AI. "The brain doesn't just see still frames; it creates an ongoing visual narrative," says senior author Hollis Cline, PhD, the director of the Dorris Neuroscience Center and the Hahn Professor of Neuroscience at Scripps Research. "Static image recognition has come a long way, but the brain's capacity to process flowing scenes -- like watching a movie -- requires a much more sophisticated form of pattern recognition. By studying how neurons capture these sequences, we've been able to apply similar principles to AI." To create MovieNet, Cline and first author Masaki Hiramoto, a staff scientist at Scripps Research, examined how the brain processes real-world scenes as short sequences, similar to movie clips. Specifically, the researchers studied how tadpole neurons responded to visual stimuli. "Tadpoles have a very good visual system, plus we know that they can detect and respond to moving stimuli efficiently," explains Hiramoto. He and Cline identified neurons that respond to movie-like features -- such as shifts in brightness and image rotation -- and can recognize objects as they move and change. Located in the brain's visual processing region known as the optic tectum, these neurons assemble parts of a moving image into a coherent sequence. Think of this process as similar to a lenticular puzzle: each piece alone may not make sense, but together they form a complete image in motion. Different neurons process various "puzzle pieces" of a real-life moving image, which the brain then integrates into a continuous scene. The researchers also found that the tadpoles' optic tectum neurons distinguished subtle changes in visual stimuli over time, capturing information in roughly 100 to 600 millisecond dynamic clips rather than still frames. These neurons are highly sensitive to patterns of light and shadow, and each neuron's response to a specific part of the visual field helps construct a detailed map of a scene to form a "movie clip." Cline and Hiramoto trained MovieNet to emulate this brain-like processing and encode video clips as a series of small, recognizable visual cues. This permitted the AI model to distinguish subtle differences among dynamic scenes. To test MovieNet, the researchers showed it video clips of tadpoles swimming under different conditions. Not only did MovieNet achieve 82.3 percent accuracy in distinguishing normal versus abnormal swimming behaviors, but it exceeded the abilities of trained human observers by about 18 percent. 
It even outperformed existing AI models such as Google's GoogLeNet -- which achieved just 72 percent accuracy despite its extensive training and processing resources. "This is where we saw real potential," points out Cline. The team determined that MovieNet was not only better than current AI models at understanding changing scenes, but it used less data and processing time. MovieNet's ability to simplify data without sacrificing accuracy also sets it apart from conventional AI. By breaking down visual information into essential sequences, MovieNet effectively compresses data like a zipped file that retains critical details. Beyond its high accuracy, MovieNet is an eco-friendly AI model. Conventional AI processing demands immense energy, leaving a heavy environmental footprint. MovieNet's reduced data requirements offer a greener alternative that conserves energy while performing at a high standard. "By mimicking the brain, we've managed to make our AI far less demanding, paving the way for models that aren't just powerful but sustainable," says Cline. "This efficiency also opens the door to scaling up AI in fields where conventional methods are costly." In addition, MovieNet has potential to reshape medicine. As the technology advances, it could become a valuable tool for identifying subtle changes in early-stage conditions, such as detecting irregular heart rhythms or spotting the first signs of neurodegenerative diseases like Parkinson's. For example, small motor changes related to Parkinson's that are often hard for human eyes to discern could be flagged by the AI early on, providing clinicians valuable time to intervene. Furthermore, MovieNet's ability to perceive changes in tadpole swimming patterns when tadpoles were exposed to chemicals could lead to more precise drug screening techniques, as scientists could study dynamic cellular responses rather than relying on static snapshots. "Current methods miss critical changes because they can only analyze images captured at intervals," remarks Hiramoto. "Observing cells over time means that MovieNet can track the subtlest changes during drug testing." Looking ahead, Cline and Hiramoto plan to continue refining MovieNet's ability to adapt to different environments, enhancing its versatility and potential applications. "Taking inspiration from biology will continue to be a fertile area for advancing AI," says Cline. "By designing models that think like living organisms, we can achieve levels of efficiency that simply aren't possible with conventional approaches."
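The "zipped file" comparison can be made concrete with a toy example. The sketch below is an assumption-laden illustration of the general idea of keeping only essential sequences - discarding near-duplicate moments and storing compact cue vectors instead of raw pixels - and is not how MovieNet actually compresses data; the sizes and thresholds are invented for demonstration.

```python
import numpy as np

def keep_essential(cues, threshold=0.5):
    """Drop cue vectors that barely differ from the last one kept.

    A toy stand-in for "breaking visual information down into essential
    sequences": near-duplicate moments are discarded, distinctive ones kept.
    """
    kept = [cues[0]]
    for c in cues[1:]:
        if np.linalg.norm(c - kept[-1]) > threshold:
            kept.append(c)
    return np.stack(kept)

# Hypothetical numbers, purely to show the order-of-magnitude saving:
# 3 s of raw 64x64 8-bit video versus a pruned sequence of 16-value cue vectors.
raw_bytes = 90 * 64 * 64                  # 368,640 bytes of raw pixels
cues = np.random.rand(7, 16)              # 7 clips x 16 cues (see earlier sketch)
essential = keep_essential(cues)
cue_bytes = essential.size * 8            # float64 values actually stored
print(raw_bytes, cue_bytes)               # hundreds of thousands vs. a few hundred bytes
```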
[3]
Scripps Research develops brain-inspired AI for real-time video analysis
Scripps Research Institute, Dec 9 2024

Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists at Scripps Research have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time.

This brain-inspired AI model, detailed in a study published in the Proceedings of the National Academy of Sciences on November 19, 2024, can perceive moving scenes by simulating how neurons -- or brain cells -- make real-time sense of the world. Conventional AI excels at recognizing still images, but MovieNet introduces a method for machine-learning models to recognize complex, changing scenes -- a breakthrough that could transform fields from medical diagnostics to autonomous driving, where discerning subtle changes over time is crucial. MovieNet is also more accurate and environmentally sustainable than conventional AI.

"The brain doesn't just see still frames; it creates an ongoing visual narrative. Static image recognition has come a long way, but the brain's capacity to process flowing scenes -- like watching a movie -- requires a much more sophisticated form of pattern recognition. By studying how neurons capture these sequences, we've been able to apply similar principles to AI."

Hollis Cline, PhD, senior author, director of the Dorris Neuroscience Center and the Hahn Professor of Neuroscience at Scripps Research

To create MovieNet, Cline and first author Masaki Hiramoto, a staff scientist at Scripps Research, examined how the brain processes real-world scenes as short sequences, similar to movie clips. Specifically, the researchers studied how tadpole neurons responded to visual stimuli. "Tadpoles have a very good visual system, plus we know that they can detect and respond to moving stimuli efficiently," explains Hiramoto.

He and Cline identified neurons that respond to movie-like features -- such as shifts in brightness and image rotation -- and can recognize objects as they move and change. Located in the brain's visual processing region known as the optic tectum, these neurons assemble parts of a moving image into a coherent sequence. Think of this process as similar to a lenticular puzzle: each piece alone may not make sense, but together they form a complete image in motion. Different neurons process various "puzzle pieces" of a real-life moving image, which the brain then integrates into a continuous scene.

The researchers also found that the tadpoles' optic tectum neurons distinguished subtle changes in visual stimuli over time, capturing information in roughly 100 to 600 millisecond dynamic clips rather than still frames. These neurons are highly sensitive to patterns of light and shadow, and each neuron's response to a specific part of the visual field helps construct a detailed map of a scene to form a "movie clip."

Cline and Hiramoto trained MovieNet to emulate this brain-like processing and encode video clips as a series of small, recognizable visual cues. This permitted the AI model to distinguish subtle differences among dynamic scenes. To test MovieNet, the researchers showed it video clips of tadpoles swimming under different conditions. Not only did MovieNet achieve 82.3 percent accuracy in distinguishing normal versus abnormal swimming behaviors, but it exceeded the abilities of trained human observers by about 18 percent.
It even outperformed existing AI models such as Google's GoogLeNet -- which achieved just 72 percent accuracy despite its extensive training and processing resources. "This is where we saw real potential," points out Cline. The team determined that MovieNet was not only better than current AI models at understanding changing scenes, but it used less data and processing time.

MovieNet's ability to simplify data without sacrificing accuracy also sets it apart from conventional AI. By breaking down visual information into essential sequences, MovieNet effectively compresses data like a zipped file that retains critical details.

Beyond its high accuracy, MovieNet is an eco-friendly AI model. Conventional AI processing demands immense energy, leaving a heavy environmental footprint. MovieNet's reduced data requirements offer a greener alternative that conserves energy while performing at a high standard. "By mimicking the brain, we've managed to make our AI far less demanding, paving the way for models that aren't just powerful but sustainable," says Cline. "This efficiency also opens the door to scaling up AI in fields where conventional methods are costly."

In addition, MovieNet has potential to reshape medicine. As the technology advances, it could become a valuable tool for identifying subtle changes in early-stage conditions, such as detecting irregular heart rhythms or spotting the first signs of neurodegenerative diseases like Parkinson's. For example, small motor changes related to Parkinson's that are often hard for human eyes to discern could be flagged by the AI early on, providing clinicians valuable time to intervene.

Furthermore, MovieNet's ability to perceive changes in tadpole swimming patterns when tadpoles were exposed to chemicals could lead to more precise drug screening techniques, as scientists could study dynamic cellular responses rather than relying on static snapshots. "Current methods miss critical changes because they can only analyze images captured at intervals," remarks Hiramoto. "Observing cells over time means that MovieNet can track the subtlest changes during drug testing."

Looking ahead, Cline and Hiramoto plan to continue refining MovieNet's ability to adapt to different environments, enhancing its versatility and potential applications. "Taking inspiration from biology will continue to be a fertile area for advancing AI," says Cline. "By designing models that think like living organisms, we can achieve levels of efficiency that simply aren't possible with conventional approaches."

Scripps Research Institute

Journal reference: Hiramoto, M., & Cline, H. T. (2024). Identification of movie encoding neurons enables movie recognition AI. Proceedings of the National Academy of Sciences. doi.org/10.1073/pnas.2412260121
[4]
Brain-Inspired AI Learns to Watch Videos Like a Human - Neuroscience News
Summary: Researchers have developed MovieNet, an AI model inspired by the human brain, to understand and analyze moving images with unprecedented accuracy. Mimicking how neurons process visual sequences, MovieNet can identify subtle changes in dynamic scenes while using significantly less data and energy than traditional AI. In testing, MovieNet outperformed current AI models and even human observers in recognizing behavioral patterns, such as tadpole swimming under different conditions. Its eco-friendly design and potential to revolutionize fields like medicine and drug screening highlight the transformative power of this breakthrough. Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists at Scripps Research have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time. This brain-inspired AI model, detailed in a study published in the Proceedings of the National Academy of Sciences on November 19, 2024, can perceive moving scenes by simulating how neurons -- or brain cells -- make real-time sense of the world. Conventional AI excels at recognizing still images, but MovieNet introduces a method for machine-learning models to recognize complex, changing scenes -- a breakthrough that could transform fields from medical diagnostics to autonomous driving, where discerning subtle changes over time is crucial. MovieNet is also more accurate and environmentally sustainable than conventional AI. "The brain doesn't just see still frames; it creates an ongoing visual narrative," says senior author Hollis Cline, PhD, the director of the Dorris Neuroscience Center and the Hahn Professor of Neuroscience at Scripps Research. "Static image recognition has come a long way, but the brain's capacity to process flowing scenes -- like watching a movie -- requires a much more sophisticated form of pattern recognition. By studying how neurons capture these sequences, we've been able to apply similar principles to AI." To create MovieNet, Cline and first author Masaki Hiramoto, a staff scientist at Scripps Research, examined how the brain processes real-world scenes as short sequences, similar to movie clips. Specifically, the researchers studied how tadpole neurons responded to visual stimuli. "Tadpoles have a very good visual system, plus we know that they can detect and respond to moving stimuli efficiently," explains Hiramoto. He and Cline identified neurons that respond to movie-like features -- such as shifts in brightness and image rotation -- and can recognize objects as they move and change. Located in the brain's visual processing region known as the optic tectum, these neurons assemble parts of a moving image into a coherent sequence. Think of this process as similar to a lenticular puzzle: each piece alone may not make sense, but together they form a complete image in motion. Different neurons process various "puzzle pieces" of a real-life moving image, which the brain then integrates into a continuous scene. The researchers also found that the tadpoles' optic tectum neurons distinguished subtle changes in visual stimuli over time, capturing information in roughly 100 to 600 millisecond dynamic clips rather than still frames. 
These neurons are highly sensitive to patterns of light and shadow, and each neuron's response to a specific part of the visual field helps construct a detailed map of a scene to form a "movie clip." Cline and Hiramoto trained MovieNet to emulate this brain-like processing and encode video clips as a series of small, recognizable visual cues. This permitted the AI model to distinguish subtle differences among dynamic scenes. To test MovieNet, the researchers showed it video clips of tadpoles swimming under different conditions. Not only did MovieNet achieve 82.3 percent accuracy in distinguishing normal versus abnormal swimming behaviors, but it exceeded the abilities of trained human observers by about 18 percent. It even outperformed existing AI models such as Google's GoogLeNet -- which achieved just 72 percent accuracy despite its extensive training and processing resources. "This is where we saw real potential," points out Cline. The team determined that MovieNet was not only better than current AI models at understanding changing scenes, but it used less data and processing time. MovieNet's ability to simplify data without sacrificing accuracy also sets it apart from conventional AI. By breaking down visual information into essential sequences, MovieNet effectively compresses data like a zipped file that retains critical details. Beyond its high accuracy, MovieNet is an eco-friendly AI model. Conventional AI processing demands immense energy, leaving a heavy environmental footprint. MovieNet's reduced data requirements offer a greener alternative that conserves energy while performing at a high standard. "By mimicking the brain, we've managed to make our AI far less demanding, paving the way for models that aren't just powerful but sustainable," says Cline. "This efficiency also opens the door to scaling up AI in fields where conventional methods are costly." In addition, MovieNet has potential to reshape medicine. As the technology advances, it could become a valuable tool for identifying subtle changes in early-stage conditions, such as detecting irregular heart rhythms or spotting the first signs of neurodegenerative diseases like Parkinson's. For example, small motor changes related to Parkinson's that are often hard for human eyes to discern could be flagged by the AI early on, providing clinicians valuable time to intervene. Furthermore, MovieNet's ability to perceive changes in tadpole swimming patterns when tadpoles were exposed to chemicals could lead to more precise drug screening techniques, as scientists could study dynamic cellular responses rather than relying on static snapshots. "Current methods miss critical changes because they can only analyze images captured at intervals," remarks Hiramoto. "Observing cells over time means that MovieNet can track the subtlest changes during drug testing." Looking ahead, Cline and Hiramoto plan to continue refining MovieNet's ability to adapt to different environments, enhancing its versatility and potential applications. "Taking inspiration from biology will continue to be a fertile area for advancing AI," says Cline. "By designing models that think like living organisms, we can achieve levels of efficiency that simply aren't possible with conventional approaches." Funding: This work for the study "Identification of movie encoding neurons enables movie recognition AI," was supported by funding from the National Institutes of Health (RO1EY011261, RO1EY027437 and RO1EY031597), the Hahn Family Foundation and the Harold L. 
Dorris Neurosciences Center Endowment Fund.

Identification of movie encoding neurons enables movie recognition AI

Natural visual scenes are dominated by spatiotemporal image dynamics, but how the visual system integrates "movie" information over time is unclear. We characterized optic tectal neuronal receptive fields using sparse noise stimuli and reverse correlation analysis. Neurons recognized movies of ~200-600 ms durations with defined start and stop stimuli. Movie durations from start to stop responses were tuned by sensory experience through a hierarchical algorithm. Neurons encoded families of image sequences following trigonometric functions. Spike sequence and information flow suggest that repetitive circuit motifs underlie movie detection.

Principles of frog topographic retinotectal plasticity and cortical simple cells are employed in machine learning networks for static image recognition, suggesting that discoveries of principles of movie encoding in the brain, such as how image sequences and duration are encoded, may benefit movie recognition technology. We built and trained a machine learning network that mimicked neural principles of visual system movie encoders. The network, named MovieNet, outperformed current machine learning image recognition networks in classifying natural movie scenes, while reducing data size and steps to complete the classification task. This study reveals how movie sequences and time are encoded in the brain and demonstrates that brain-based movie processing principles enable efficient machine learning.
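The abstract mentions characterizing receptive fields with sparse noise stimuli and reverse correlation analysis. Reverse correlation is a standard technique in visual neuroscience; the minimal spike-triggered-average sketch below illustrates that general method under simplified assumptions (synthetic stimuli, made-up spike counts, a fixed six-frame window) and is not the authors' analysis pipeline.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, lags=6):
    """Estimate a spatiotemporal receptive field by reverse correlation.

    stimulus: (T, H, W) array of sparse-noise frames shown to the neuron.
    spikes:   (T,) array of spike counts aligned to those frames.
    Returns the average stimulus preceding a spike over `lags` frames --
    the spike-triggered average, the simplest form of reverse correlation.
    """
    T, H, W = stimulus.shape
    sta = np.zeros((lags, H, W))
    total = 0
    for t in range(lags, T):
        if spikes[t] > 0:
            sta += spikes[t] * stimulus[t - lags:t]
            total += spikes[t]
    return sta / max(total, 1)

# Toy usage with synthetic sparse noise and fake spike counts.
rng = np.random.default_rng(0)
stim = (rng.random((1000, 8, 8)) < 0.05).astype(float)   # sparse white-noise dots
spikes = rng.poisson(0.2, size=1000)                      # invented spiking activity
rf = spike_triggered_average(stim, spikes)
print(rf.shape)  # (6, 8, 8): six preceding frames of the estimated receptive field
```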
[5]
Breakthrough AI decodes videos like a human brain with 82% accuracy
AI models, already loosely modeled on neuronal networks, have made leaps and bounds in recent years in responding to our voices and processing information. Now, as models continue to advance, researchers at Scripps Research have reached a milestone by zeroing in on the neurons responsible for processing life in motion. The eco-friendly MovieNet, initially tested by observing tadpoles swimming, has demonstrated impressive success in detecting abnormalities in moving scenes. This breakthrough could significantly advance fields like medical diagnostics and autonomous driving, where the ability to perceive, track, and interpret changes over time is crucial. Researchers at Scripps Research explained that the brain has neurons that "assemble parts of a moving image into a coherent sequence," a process that often goes unnoticed. Making matters more complex, different neurons process different pieces of the image in motion, which has made this form of pattern recognition a challenge to replicate.
Scientists at Scripps Research have developed MovieNet, an AI model that processes videos by mimicking how the human brain interprets real-time visual scenes, achieving 82% accuracy in distinguishing complex behaviors.
Scientists at Scripps Research have developed MovieNet, an innovative artificial intelligence (AI) model that processes videos by mimicking how the human brain interprets real-time visual scenes. This breakthrough, detailed in a study published in the Proceedings of the National Academy of Sciences, represents a significant advancement in AI's ability to understand and analyze moving images [1].
MovieNet's design is based on the visual processing capabilities of tadpoles. Researchers identified neurons in the tadpoles' optic tectum that respond to movie-like features such as shifts in brightness and image rotation. These neurons process visual data in 100 to 600-millisecond clips, combining patterns of light and shadow to create a continuous narrative [2].
Dr. Hollis Cline, the senior author and director of the Dorris Neuroscience Center at Scripps Research, explained, "The brain doesn't just see still frames; it creates an ongoing visual narrative. By studying how neurons capture these sequences, we've been able to apply similar principles to AI" [3].
In testing, MovieNet demonstrated remarkable capabilities: it distinguished normal from abnormal tadpole swimming behaviors with 82.3% accuracy, exceeded the performance of trained human observers by about 18%, and outperformed Google's GoogLeNet, which reached only 72% accuracy, while using less data and processing time.
MovieNet's efficiency is a standout feature. Unlike conventional AI models that require extensive computational resources, MovieNet processes and compresses information more effectively, reducing data and energy demands without sacrificing performance [5].
The implications of MovieNet's capabilities are far-reaching: the model could flag early signs of conditions such as Parkinson's disease and irregular heart rhythms, enable more precise drug screening by tracking dynamic cellular responses over time, and help autonomous vehicles detect and react to changing road conditions and pedestrian behavior.
Cline and first author Masaki Hiramoto plan to refine MovieNet's adaptability across various environments and applications. They envision expanding its capabilities to handle more complex scenarios and exploring its use in diverse fields [2].
As AI continues to evolve, MovieNet represents a significant step towards creating more efficient, accurate, and versatile models. By bridging the gap between artificial and biological intelligence, it opens new possibilities for technology that can interpret and respond to the world in ways previously limited to living organisms.