3 Sources
[1]
How the Brain Filters Distractions to Stay Focused on a Goal - Neuroscience News
Summary: A new attention model reveals how the human brain allocates limited perceptual resources to focus on goal-relevant information in dynamic environments. Researchers developed "adaptive computation," a system that prioritizes important visual details -- like traffic signals over flashy cars -- based on task relevance. In experiments tracking multiple moving objects, the model successfully predicted where attention would be directed and how difficult people found the task. These findings shed light on why distractions often fade from awareness when we're focused, and how our brains optimize attention in real time.

A person's capacity for attention has a profound impact on what they see, dictating which details they glean from the world around them. As they walk down a busy street, the focus of their attention may shift to a compelling new billboard advertisement, or a shiny Lamborghini parked on the curb. Attention, however, can be fleeting. When that person reaches a busy intersection, for instance, details of the billboard or sports car disappear. The person's attention instead becomes focused on approaching or stationary traffic, the flashing walk sign, and other pedestrians they'll need to avoid in the crosswalk.

Most research on attention has concentrated on what happens when we notice the new billboard or shiny car. But in a new study, Yale psychologists instead focus on what happens when our attention shifts to a specific goal, such as navigating the busy intersection. Writing in the journal Psychological Review, the researchers unveil a human attention model that explains how the mind evaluates what is task-relevant in complex, dynamic scenarios -- and apportions computational capacity in response.

"We have a limited number of resources with which we can see the world," said Ilker Yildirim, assistant professor of psychology in Yale's Faculty of Arts and Sciences and senior author of the study. "We think of these resources as elementary computational processes; each perception we experience, such as the position of an object or how fast it's moving, is a result of exerting some number of these elementary perceptual computations."

For the study, the researchers developed a system they call "adaptive computation," essentially a software program that rations these elementary computations in order to more deeply process goal-relevant objects. For example, when a person crosses a busy street, adaptive computation would prioritize the pedestrian walk sign over the shiny car.

"Our model reveals a mechanism by which human attention identifies what things in a dynamic scene are relevant to the goal at hand, and then rations perceptual computations accordingly," said Mario Belledonne, a graduate student in Yale's Graduate School of Arts and Sciences and co-author of the study.

In one experiment, the researchers presented volunteer participants with eight identically colored circles on a computer screen. They then highlighted a group of four circles and asked the participants to track the highlighted circles as all eight circles moved randomly across the screen. Tracking multiple objects at the same time elicits a complex, dynamic ebb and flow of attention among the participants. The researchers measured these shifts of attention, at sub-second thresholds, by asking subjects to hit the space bar whenever they noticed a flashing dot appear very briefly on a specific object. The frequency with which these flashing dots were noticed indicates where and when people are attending, and the adaptive computation model successfully predicted these momentary, fine-grained patterns of attentional deployment.

In another experiment, participants again were asked to track four objects, but in this case the researchers varied how many identically colored "distractor" objects were on the screen and how fast the objects were moving. When the objects stopped moving, the researchers asked participants to rate how difficult it was to track them. They showed that the adaptive computation model also explains these subjective difficulty ratings: the more computational resources the model exerted for tracking, the more difficult the task was rated by participants. In this way, the model provided a computational signature of the feeling of exertion that occurs when a person focuses attention on the same task for a prolonged period, Yildirim said.

"We want to work out the computational logic of the human mind, by creating new algorithms of perception and attention, and comparing the performance of these algorithms to that of humans," he said.

The model also helps make sense of what's sometimes considered a "human quirk": the ability to make perceptions of non-task-oriented objects -- such as the billboard or the sports car -- disappear while crossing the busy street.

"We think this line of work can lead to systems that are a bit different from today's AI, something more human-like," Yildirim said. "This would be an AI system that when tasked with a goal might miss things, even shiny things, so as to flexibly and safely interact with the world."

The research team also included Brian Scholl, a professor of psychology at FAS, and Eivinas Butkus, of Columbia University, a former member of Yildirim's lab.

Funding: The research was supported by a grant from the U.S. Air Force Office of Scientific Research.

Adaptive computation as a new mechanism of dynamic human attention

A key role for attention is to continually focus visual processing to satisfy our goals. How does this work in computational terms? Here we introduce adaptive computation -- a new computational mechanism of human attention that bridges the momentary application of perceptual computations with their impact on decision outcomes. Adaptive computation is a dynamic algorithm that rations perceptual computations across objects on the fly, enabled by a novel and general formulation of task relevance. We evaluate adaptive computation in a case study of multiple object tracking (MOT) -- a paradigmatic example of selection as a dynamic process, where observers track a set of target objects moving amid visually identical distractors. Adaptive computation explains the attentional dynamics of object selection with unprecedented depth. It not only recapitulates several classic features of MOT (e.g., trial-level tracking accuracy and localization error of targets), but also captures properties that have not previously been measured or modeled -- including both the subsecond patterns of attentional deployment between objects and the resulting sense of subjective effort. Critically, this approach captures such data within a framework that is in principle domain-general and, unlike past models, does so without using any MOT-specific heuristic components. Beyond this case study, we also look to the future, discussing how adaptive computation may apply more generally, providing a new type of mechanistic model for the dynamic operation of many forms of visual attention.
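The core rationing idea -- a fixed pool of elementary computations divided among objects according to their task relevance -- can be illustrated with a minimal sketch. This is not the authors' algorithm: the relevance scores, the softmax weighting, and the budget of 100 computations are all illustrative assumptions.

```python
import math

def allocate_computations(relevances, budget):
    # Toy rationing rule: split a fixed budget of elementary perceptual
    # computations across objects in proportion to softmax(relevance).
    # (Illustrative only -- the published model's relevance formulation
    # and allocation rule are more sophisticated.)
    weights = [math.exp(r) for r in relevances]
    total = sum(weights)
    return [budget * w / total for w in weights]

# Hypothetical street-crossing scene: the walk sign is goal-relevant,
# the parked Lamborghini and the billboard are not.
scene = {"walk_sign": 3.0, "oncoming_car": 2.0, "lamborghini": 0.2, "billboard": 0.1}
shares = dict(zip(scene, allocate_computations(list(scene.values()), budget=100)))
```

Under this toy rule the walk sign receives the largest share of the budget while the billboard's share shrinks toward zero -- a crude analogue of goal-irrelevant objects fading from awareness.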
[2]
Attention scan: How our minds shift focus in dynamic settings
A person's capacity for attention has a profound impact on what they see, dictating which details they glean from the world around them. As they walk down a busy street, the focus of their attention may shift to a compelling new billboard advertisement or a shiny Lamborghini parked on the curb. That attention, however, can be fleeting. When that person reaches a busy intersection, details of that billboard or sports car disappear from their minds. Their attention instead becomes focused on approaching or stationary traffic, the flashing walk sign, and other pedestrians they'll need to avoid in the crosswalk.

Most research on attention has concentrated on what happens when we notice the new billboard or shiny car. But in a new study, Yale psychologists instead focus on what happens when our attention shifts to a specific goal, such as navigating the busy intersection. In an article for the journal Psychological Review, they unveil a human attention model that explains how the mind evaluates what is task-relevant in complex, dynamic scenarios -- and apportions computational capacity in response.

"We have a limited number of resources with which we can see the world," said Ilker Yildirim, assistant professor of psychology in Yale's Faculty of Arts and Sciences and senior author of the study. "We think of these resources as elementary computational processes; each percept we experience, such as the position of an object or how fast it's moving, is a result of exerting some number of these elementary perceptual computations."

For the study, they developed a system they call "adaptive computation," essentially a software program that rations these elementary computations in order to more deeply process goal-relevant objects. For example, when a person crosses a busy street, adaptive computation would prioritize the pedestrian walk sign over the shiny car.

"Our model reveals a mechanism by which human attention identifies what things in a dynamic scene are relevant to the goal at hand, and then rations perceptual computations accordingly," Belledonne said.

In one experiment, they presented volunteer participants with eight identically colored circles on a computer screen. The researchers then highlighted a group of four circles and asked the participants to track the highlighted circles as all eight circles moved randomly across the screen. Such tracking of multiple objects at the same time elicits a complex, dynamic ebb and flow of attention among the participants. Researchers measured these shifts of attention, at sub-second thresholds, by asking subjects to hit the space bar whenever they noticed a flashing dot appear very briefly on a specific object. The frequency with which these flashing dots were noticed indicates where and when people are attending, and the adaptive computation model successfully predicted these momentary, fine-grained patterns of attentional deployment.

In another experiment, participants again were asked to track four objects, but in this case the researchers varied how many identically colored "distractor" objects were on the screen and how fast the objects were moving. When the objects stopped moving, researchers asked participants to rate how difficult it was to track them. The researchers showed that the adaptive computation model also explains these subjective difficulty ratings: the more computational resources the model exerted on a trial, the more difficult it was rated by participants. In this way, the researchers' model provided a computational signature of the feeling of exertion that occurs when a person has to focus attention on the same task for a prolonged period, Yildirim said.

"We want to work out the computational logic of the human mind, by creating new algorithms of perception and attention, and comparing the performance of these algorithms to that of humans," he said.

The model also helps make sense of what's sometimes considered a "human quirk": the ability to make perceptions of non-task-oriented objects -- such as the billboard or the sports car -- disappear while crossing the busy street.

"We think this line of work can lead to systems that are a bit different from today's AI, something more human-like," Yildirim said. "This would be an AI system that when tasked with a goal might miss things, even shiny things, so as to flexibly and safely interact with the world."
[3]
How the brain decides what to focus on - and what to ignore - Earth.com
We often think we're aware of everything around us, but our brains are constantly deciding where to focus. Think about walking down a busy street. You might glance at a flashy car or a bright advertisement. But when it comes time to cross the street, those distractions vanish. Instead, your brain zooms in on the walk signal, moving vehicles, and people nearby.

That switch in focus - how we go from being drawn to attention-grabbing sights to zeroing in on a specific task - has puzzled scientists for years. Researchers at Yale University wanted to understand not just what grabs our attention, but how we shift it toward things that really matter in the moment.

Most attention research looks at what draws our eyes - a sudden movement, a bold color, or something new. But this study focused on something different: what happens when our attention is deliberately steered toward a goal.

"We have a limited number of resources with which we can see the world," said Ilker Yildirim, assistant professor of psychology in Yale's Faculty of Arts and Sciences and senior author of the study. "We think of these resources as elementary computational processes; each perception we experience, such as the position of an object or how fast it's moving, is a result of exerting some number of these elementary perceptual computations."

To explore this, the team created a model called "adaptive computation." This system, like a mental rationing tool, allocates our perceptual resources to the things most relevant to what we're trying to do. If you're trying to cross a street safely, the model prioritizes the walk sign over a flashy distraction.

"Our model reveals a mechanism by which human attention identifies what things in a dynamic scene are relevant to the goal at hand, and then rations perceptual computations accordingly," said study co-author Mario Belledonne, a student in Yale's Graduate School of Arts and Sciences.

To test their ideas, the researchers ran experiments using a screen filled with colored dots. Participants were asked to track certain moving dots while ignoring others. The dots moved unpredictably, forcing people to adjust their focus constantly. As dots flashed briefly, participants were asked to press a key when they noticed one. This helped the researchers map how attention moved - almost second by second. The adaptive computation model was able to predict where and when these shifts would happen, based on which dots were more relevant to the task.

In another version of the task, participants still tracked a handful of dots, but the researchers changed how many "distractor" dots appeared and how fast they moved. Afterward, people rated how hard the task felt. Interestingly, the model's predictions matched the participants' feelings: when the model worked harder to track the targets, participants said the task felt harder. In other words, the model gave insight into something very human - the mental effort we feel when concentrating on a tough task.

One surprising takeaway from this work is how our brains selectively "ignore" things. That eye-catching car or glowing billboard? If it doesn't help with what we're trying to do, it might just disappear from our awareness.

"We want to work out the computational logic of the human mind, by creating new algorithms of perception and attention, and comparing the performance of these algorithms to that of humans," Yildirim said. This approach could even guide how we build future artificial intelligence. Unlike current AI systems that try to absorb everything, this new model mimics how humans sometimes miss obvious things - not due to error, but because it's a smart way to focus.

"We think this line of work can lead to systems that are a bit different from today's AI, something more human-like," Yildirim said.
"This would be an AI system that when tasked with a goal might miss things, even shiny things, so as to flexibly and safely interact with the world."
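The compute-predicts-felt-difficulty finding from the second experiment can be mimicked with a minimal simulation. Everything here is assumed for illustration -- the per-step cost function, the crowding term, and the trial parameters are not from the paper; only the qualitative pattern (more distractors and faster motion demand more compute) follows the reported result.

```python
import random

def simulate_trial(n_distractors, speed, n_targets=4, steps=100, seed=0):
    # Toy proxy: the compute spent tracking each target per time step grows
    # with object speed and with crowding from distractors; the total over
    # the trial stands in for the model's predictor of rated difficulty.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        crowding = n_distractors / (n_distractors + n_targets)
        for _ in range(n_targets):
            total += speed * (1.0 + crowding) * (0.9 + 0.2 * rng.random())
    return total

easy = simulate_trial(n_distractors=4, speed=1.0)
hard = simulate_trial(n_distractors=12, speed=2.0)
```

Under these assumptions the "hard" trial consumes more simulated compute than the "easy" one, matching the qualitative pattern that trials demanding more model computation were rated as harder.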
Yale psychologists have developed a new model called "adaptive computation" that explains how the human brain allocates attention in complex, dynamic environments. This breakthrough could lead to more human-like AI systems.
Researchers at Yale University have developed a groundbreaking model that sheds light on how the human brain allocates attention in complex, dynamic environments. The study, published in the journal Psychological Review, introduces the concept of "adaptive computation," which explains how our minds prioritize goal-relevant information while filtering out distractions [1].

The new model is essentially a software program that rations elementary computational processes to focus on goal-relevant objects, mimicking the brain's ability to prioritize important visual details based on task relevance [2].

"We have a limited number of resources with which we can see the world," explained Ilker Yildirim, assistant professor of psychology at Yale and senior author of the study. "Each perception we experience, such as the position of an object or how fast it's moving, is a result of exerting some number of these elementary perceptual computations." [1]

To test their model, the researchers conducted experiments involving multiple moving objects [3]. In one, the model predicted participants' moment-to-moment attention while they tracked targets among identical distractors. In another experiment, researchers varied the number of distractor objects and their speed, asking participants to rate task difficulty. The model's predictions aligned with participants' subjective difficulty ratings, providing a computational signature of the feeling of mental exertion [2].

This research could have significant implications for the development of artificial intelligence systems. Unlike current AI that attempts to process all available information, this model mimics the human ability to selectively focus on relevant data while ignoring distractions [3].

"We think this line of work can lead to systems that are a bit different from today's AI, something more human-like," Yildirim stated. "This would be an AI system that when tasked with a goal might miss things, even shiny things, so as to flexibly and safely interact with the world." [1]

The adaptive computation model also helps explain what's sometimes considered a "human quirk": the ability to make perceptions of non-task-oriented objects disappear when focusing on a specific goal. This selective attention allows us to navigate complex environments efficiently [2].

The researchers aim to further explore the computational logic of the human mind by creating new algorithms of perception and attention and comparing their performance to that of humans. This approach could lead to advancements in both our understanding of human cognition and the development of more sophisticated AI systems [3].

As we continue to unravel the mysteries of human attention, this research opens up new avenues for cognitive science and artificial intelligence, potentially revolutionizing how we approach machine learning and human-computer interaction.
Summarized by Navi