4 Sources
[1]
Affordances in the brain: The human superpower AI hasn't mastered
How do you intuitively know that you can walk on a footpath and swim in a lake? Researchers from the University of Amsterdam have discovered unique brain activations that reflect how we can move our bodies through an environment. The study not only sheds new light on how the human brain works, but also shows where artificial intelligence is lagging behind. According to the researchers, AI could become more sustainable and human-friendly if it incorporated this knowledge about the human brain.

When we see a picture of an unfamiliar environment -- a mountain path, a busy street, or a river -- we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities? PhD student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. "AI models turned out to be less good at this and still have a lot to learn from the efficient human brain," Groen concludes.

Viewing images in the MRI scanner

Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured. "We wanted to know: when you look at a scene, do you mainly see what is there -- such as objects or colors -- or do you also automatically see what you can do with it," says Groen. "Psychologists call the latter 'affordances' -- opportunities for action; imagine a staircase that you can climb, or an open field that you can run through."

Unique processes in the brain

The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. "What we saw was unique," says Groen. "These brain areas not only represent what can be seen, but also what you can do with it." The brain did this even when participants were not given an explicit action instruction. "These action possibilities are therefore processed automatically," says Groen. "Even if you do not consciously think about what you can do in an environment, your brain still registers it." The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.

What AI doesn't understand yet

The team also compared how well AI algorithms -- such as image recognition models or GPT-4 -- can estimate what you can do in a given environment. They were worse at predicting possible actions. "When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn't match the models' internal calculations," Groen explains. "Even the best AI models don't give exactly the same answers as humans, even though it's such a simple task for us," Groen says. "This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can't do that because they only exist in a computer."

AI can still learn from the human brain

The research thus touches on larger questions about the development of reliable and efficient AI.
"As more sectors -- from healthcare to robotics -- use AI, it is becoming important that machines not only recognize what something is, but also understand what it can do," Groen explains. "For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell apart a bike path from a driveway." Groen also points out the sustainable aspect of AI. "Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly."
[2]
Your Brain Instantly Sees What You Can Do, AI Still Can't - Neuroscience News
Summary: New research shows that the human brain automatically recognizes what actions an environment affords, like walking, climbing, or swimming, even without conscious thought. Using MRI scans, researchers found unique activity in visual brain regions that went beyond simply processing objects or colors, revealing deep neural encoding of "affordances," or possible actions. When compared to AI models, including GPT-4, humans significantly outperformed machines at identifying what could be done in a scene. This work highlights how perception and action are tightly linked in the brain, and how AI still has much to learn from human cognition.

When we see a picture of an unfamiliar environment - a mountain path, a busy street, or a river - we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities? PhD student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. 'AI models turned out to be less good at this and still have a lot to learn from the efficient human brain,' Groen concludes.

Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured. 'We wanted to know: when you look at a scene, do you mainly see what is there - such as objects or colours - or do you also automatically see what you can do with it,' says Groen. 'Psychologists call the latter "affordances" - opportunities for action; imagine a staircase that you can climb, or an open field that you can run through.'

The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. 'What we saw was unique,' says Groen. 'These brain areas not only represent what can be seen, but also what you can do with it.' The brain did this even when participants were not given an explicit action instruction. 'These action possibilities are therefore processed automatically,' says Groen. 'Even if you do not consciously think about what you can do in an environment, your brain still registers it.' The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.

The team also compared how well AI algorithms - such as image recognition models or GPT-4 - can estimate what you can do in a given environment. They were worse at predicting possible actions. 'When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn't match the models' internal calculations,' Groen explains. 'Even the best AI models don't give exactly the same answers as humans, even though it's such a simple task for us,' Groen says. 'This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can't do that because they only exist in a computer.'

The research thus touches on larger questions about the development of reliable and efficient AI.
'As more sectors - from healthcare to robotics - use AI, it is becoming important that machines not only recognise what something is, but also understand what it can do,' Groen explains. 'For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell apart a bike path from a driveway.' Groen also points out the sustainable aspect of AI. 'Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly.'

Representation of locomotive action affordances in human behavior, brains, and deep neural networks (paper abstract)

To decide how to move around the world, we must determine which locomotive actions (e.g., walking, swimming, or climbing) are afforded by the immediate visual environment. The neural basis of our ability to recognize locomotive affordances is unknown. Here, we compare human behavioral annotations, functional MRI (fMRI) measurements, and deep neural network (DNN) activations to both indoor and outdoor real-world images to demonstrate that the human visual cortex represents locomotive action affordances in complex visual scenes. Hierarchical clustering of behavioral annotations of six possible locomotive actions shows that humans group environments into distinct affordance clusters using at least three separate dimensions. Representational similarity analysis of multivoxel fMRI responses in the scene-selective visual cortex shows that perceived locomotive affordances are represented independently from other scene properties such as objects, surface materials, scene category, or global properties, and independent of the task performed in the scanner. Visual feature activations from DNNs trained on object or scene classification, as well as a range of other visual understanding tasks, correlate comparatively lower with behavioral and neural representations of locomotive affordances than with object representations. Training DNNs directly on affordance labels or using affordance-centered language embeddings increases alignment with human behavior, but none of the tested models fully captures locomotive action affordance perception. These results uncover a type of representation in the human brain that reflects locomotive action affordances.
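The abstract above names an analysis that can be sketched concretely: in representational similarity analysis (RSA), each representation (behavioral annotations, multivoxel fMRI patterns, DNN activations) is summarized as a representational dissimilarity matrix (RDM) over the same image set, and RDMs are compared with a rank correlation. The following Python sketch shows that generic RSA recipe using NumPy and SciPy; the random matrices stand in for real fMRI patterns and network activations, and nothing here reproduces the paper's specific pipeline, regions of interest, or statistics.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 50

# Stand-ins for real data: rows = images, columns = features.
# In the actual study these would be multivoxel fMRI patterns from
# scene-selective cortex and unit activations from a DNN layer.
fmri_patterns = rng.normal(size=(n_images, 200))
dnn_activations = rng.normal(size=(n_images, 512))

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix as a condensed vector
    (one correlation distance per pair of images)."""
    return pdist(patterns, metric="correlation")

# Compare the two representational geometries with a rank correlation,
# the usual RSA summary statistic.
rho, pval = spearmanr(rdm(fmri_patterns), rdm(dnn_activations))
print(f"RSA correlation (Spearman rho) = {rho:.3f}, p = {pval:.3f}")
```

In practice a behavioral RDM would be built from the six-action annotations in the same way, and the comparison repeated per brain region and per network layer; the study's reported result is that affordance RDMs align less well with standard vision models than object RDMs do.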
[3]
How the human brain thinks differently than AI - Earth.com
Humans glance at a scene and instantly know what action to take - stroll, pedal, or dive. Artificial intelligence (AI), despite headline-grabbing advances, still struggles with that snap judgment. PhD student Clemens Bartnik of the University of Amsterdam and colleagues used brain scanning to show why the gap remains.

In 1979, psychologist James Gibson coined the term affordances, describing how objects invite action. The new Amsterdam work places that idea squarely in the living human brain. Participants lay in a scanner and viewed snapshots of shorelines, staircases, and alleyways. They pressed a button to pick walking, cycling, driving, swimming, boating, or climbing while the machine tracked blood flow in visual areas.

"These action possibilities are therefore processed automatically," said lead scientist Iris Groen. Activity patterns in the visual cortex changed not just with what was visible but with what the body could do. The signature appeared even when volunteers made no explicit choice about movement. That means the brain tags potential actions as part of its basic image pipeline, well before conscious deliberation. Earlier work hinted at such fast coding for grasping tools, yet locomotion is broader and demands constant spatial updating. By isolating the signal in higher-order scene regions, the team showed a dedicated circuit rather than a by-product of object recognition.

Even in early development, humans link sight with movement. Babies crawl toward open spaces and avoid drop-offs not because they understand height, but because their bodies learn consequences through trial and error. This tight loop between action and feedback trains the brain to anticipate what a space allows. By the time we're adults, these patterns run automatically, helping us judge what's possible in a split second.

Vision systems built on deep neural networks excel at labeling objects or entire scenes. But when the researchers fed the same photos to leading models, the machines mis-guessed feasible actions about one-quarter of the time. "Even the best AI models don't give exactly the same answers as humans," Groen noted. Even large language-vision hybrids such as GPT-4 improve only after extra training on affordance labels. Analysis of the networks' hidden layers revealed weak alignment with the fMRI patterns. The difference suggests current architectures ignore geometric and bodily constraints that matter to humans.

What makes the human edge even sharper is that we've spent our entire lives testing these environments. The sensorimotor system doesn't just interpret images, it overlays them with memories of movement, pain, balance, and success. AI models don't grow up in a world of slippery floors, steep curbs, or off-trail adventures. They haven't fallen on ice or scrambled over rocks, and that limits their ability to map pictures to possible actions with the same nuance.

Training gargantuan models consumes megawatt-hours and tons of carbon. If engineers can borrow the brain's lean affordance code, future systems might reach better decisions with fewer parameters. Robots navigating rubble, drones flying through forests, and wheelchairs plotting ramps all need that frugal insight. Instead of photographing every walkway on Earth, designers could hard-wire a few spatial heuristics and learn the rest on-site. Energy savings translate to slimmer batteries and wider access outside big tech campuses.
Hospitals, schools, and small-town emergency crews stand to gain from models that think more like the people they serve. Disaster-response robots already use lidar and stereo cameras, yet they fail when smoke or dust hides surfaces. A cortex-inspired layer could fill gaps by inferring where treads may grip or where water flows. Virtual-reality therapists also watch the project. Stroke patients relearn walking faster when simulations adjust paths to match perceived affordances, not textbook dimensions. Self-driving cars face the nuance of a bike lane merging with a crosswalk at dusk. Embedding affordance sensors might cut false positives and avoid abrupt braking that unnerves riders.

Researchers still debate whether affordance maps arise from vision alone or feed back from motor plans. Future experiments will likely combine fMRI with muscle recordings to trace the loop. Another unknown is how culture tweaks perception. A skateboarder and a hiker read the same staircase differently, and algorithms may need similar flexibility.

The findings remind us that seeing is inseparable from doing. Our eyes deliver a running forecast of possible moves, shaping intuition long before words enter the chat box. Acknowledging that layered wisdom could steer AI toward tools that extend rather than replace human ability. Nature's shortcut may yet teach silicon to tread lightly while thinking ahead.

The study is published in Proceedings of the National Academy of Sciences.
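Both the piece above and the paper abstract note that models get closer to human judgments only after extra training on affordance labels. One lightweight way to do that, shown below, is to keep a pretrained vision model frozen and fit a small multi-label classifier (a probe) on its image features. This is a minimal sketch: the random feature matrix, the invented label matrix, and the choice of a scikit-learn LogisticRegression head are illustrative assumptions, not the authors' actual training setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
ACTIONS = ["walk", "cycle", "drive", "swim", "boat", "climb"]

# Stand-ins: frozen image features from a pretrained vision model
# (e.g. a penultimate-layer embedding) and binary affordance labels.
# Both are randomly generated here purely for illustration.
features = rng.normal(size=(300, 128))
labels = (rng.random(size=(300, len(ACTIONS))) < 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# One linear classifier per action, trained on the frozen features:
# the "extra training on affordance labels" is confined to this small head.
probe = MultiOutputClassifier(LogisticRegression(max_iter=1000))
probe.fit(X_train, y_train)

accuracy = (probe.predict(X_test) == y_test).mean(axis=0)
for action, acc in zip(ACTIONS, accuracy):
    print(f"{action:>6s} probe accuracy: {acc:.2f}")
```

A probe like this only measures how much affordance information is already present in the frozen features; the study's finding is that even with such supervision, no tested model fully matched human affordance judgments or brain patterns.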
[4]
Brain study reveals how humans intuitively navigate different environments, offering direction for better AI
How do you intuitively know that you can walk on a footpath and swim in a lake? Researchers from the University of Amsterdam have discovered unique brain activations that reflect how we can move our bodies through an environment. Published in Proceedings of the National Academy of Sciences, the study not only sheds new light on how the human brain works, but also shows where artificial intelligence is lagging behind. According to the researchers, AI could become more sustainable and human-friendly if it incorporated this knowledge about the human brain.

When we see a picture of an unfamiliar environment -- a mountain path, a busy street, or a river -- we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities? Ph.D. student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. "AI models turned out to be less good at this and still have a lot to learn from the efficient human brain," Groen concludes.

Viewing images in the MRI scanner

Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured. "We wanted to know: when you look at a scene, do you mainly see what is there -- such as objects or colors -- or do you also automatically see what you can do with it," says Groen. "Psychologists call the latter 'affordances' -- opportunities for action; imagine a staircase that you can climb, or an open field that you can run through."

Unique processes in the brain

The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. "What we saw was unique," says Groen. "These brain areas not only represent what can be seen, but also what you can do with it." The brain did this even when participants were not given an explicit action instruction. "These action possibilities are therefore processed automatically," says Groen. "Even if you do not consciously think about what you can do in an environment, your brain still registers it." The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.

What AI doesn't understand yet

The team also compared how well AI algorithms -- such as image recognition models or GPT-4 -- can estimate what you can do in a given environment. They were worse at predicting possible actions. "When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn't match the models' internal calculations," Groen explains. "Even the best AI models don't give exactly the same answers as humans, even though it's such a simple task for us," Groen says. "This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can't do that because they only exist in a computer."
AI can still learn from the human brain

The research thus touches on larger questions about the development of reliable and efficient AI. "As more sectors -- from health care to robotics -- use AI, it is becoming important that machines not only recognize what something is, but also understand what it can do," Groen explains. "For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell apart a bike path from a driveway." Groen also points out the sustainable aspect of AI. "Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly."
Researchers at the University of Amsterdam discover unique brain activations that reflect how humans intuitively navigate environments, revealing a capability that current AI models lack.
Researchers from the University of Amsterdam have made a groundbreaking discovery about how the human brain processes environmental information, shedding light on our innate ability to navigate various settings. This study, published in the Proceedings of the National Academy of Sciences, not only advances our understanding of brain function but also highlights a significant gap between human cognition and artificial intelligence (AI) capabilities [1].

Led by computational neuroscientist Iris Groen and PhD student Clemens Bartnik, the research team explored the psychological concept of "affordances" – the action possibilities that an environment presents to an individual. For instance, how we instantly recognize that we can walk on a footpath or swim in a lake [2]. The study utilized MRI scanning to observe brain activity as participants viewed various indoor and outdoor scenes. Subjects were asked to indicate potential actions (such as walking, cycling, or swimming) for each image while their brain activity was monitored [3].

Unique Brain Activations: The researchers discovered that certain areas in the visual cortex become active in ways that cannot be explained solely by visible objects in the image. These activations represent not just what is seen, but also what actions are possible in the environment [1].

Automatic Processing: Remarkably, these brain areas processed action possibilities automatically, even when participants were not explicitly instructed to consider potential actions [2].

Measurable Brain Property: This study provides the first evidence that affordances are not just a psychological concept but a measurable property of our brains [4].
The research team compared human performance with various AI models, including advanced systems like GPT-4. The results were striking:

AI Limitations: Current AI algorithms, including image recognition models, were significantly less adept at predicting possible actions in given environments [1].

Misalignment with Brain Patterns: Even when AI models were specifically trained for action recognition, their internal calculations did not align well with human brain patterns [3].

Embodied Cognition: The study highlights that human perception is deeply intertwined with our physical experiences in the world, a connection that AI models, existing only in computers, currently lack [4].
This research has significant implications for the future of AI:

Enhanced AI Capabilities: Understanding how the human brain efficiently processes environmental information could lead to more intuitive and capable AI systems [1].

Practical Applications: Improved AI could benefit various sectors, from healthcare to robotics, enabling machines to better understand not just what objects are, but what can be done with them [4].

Sustainability in AI: By mimicking the brain's efficient processing, future AI models could become more energy-efficient and accessible, addressing current concerns about the high energy consumption of AI training methods [2].
.This groundbreaking study not only advances our understanding of human cognition but also paves the way for more intuitive, efficient, and human-like artificial intelligence systems. As researchers continue to unravel the mysteries of the brain, the gap between human and machine intelligence may narrow, leading to more sophisticated and beneficial AI technologies.
Summarized by Navi