AI models fall for the same optical illusions as humans, revealing how both vision systems work


Researchers discovered that AI models experience optical illusions the same way humans do when shown the rotating snakes illusion. The finding suggests both artificial and biological vision systems rely on predictive coding, a mechanism that anticipates future visual input rather than passively processing images, and may help explain why our brains are tricked by certain patterns.

AI Model Fooled by Optical Illusion in Groundbreaking Study

Eiji Watanabe, an associate professor of neurophysiology at the National Institute for Basic Biology in Japan, led a research team that made a startling discovery about AI's perceptual capabilities [1]. When deep neural networks (DNNs) were shown the rotating snakes illusion, a famous visual trick created by Akiyoshi Kitaoka, the AI hallucinated motion in much the same way the human brain does. The illusion consists of static images showing circles formed by concentric rings of colored segments. Although the images are completely motionless, viewers perceive all the circles as rotating except the one they focus on at any given moment.

Source: Digit

The team used PredNet, a DNN trained to predict future frames of video from preceding ones [1]. The model learned from thousands of hours of natural footage of motion (propellers spinning, cars moving, balls rolling) until it had internalized how moving scenes evolve [2]. Crucially, it received no training on optical illusions. When researchers presented versions of the rotating snakes illusion alongside altered control versions that do not trick humans, the AI was fooled by the same images as humans while correctly treating the control versions as static.

Source: Creative Bloq

Predictive Coding in Human Vision Explains the Phenomenon

Watanabe believes this discovery validates the theory of predictive coding, which fundamentally changes how we understand visual processing [1]. Rather than passively receiving information, the human brain functions as a prediction machine that constantly anticipates what visual input will arrive next based on past experience. The system then processes only the discrepancies between predictions and actual new data. If our brains waited to process every photon hitting the retina, we would react to the world with massive delays.

According to this theory, specific elements in the rotating snakes images trigger the brain to assume motion based on previous experience. When the AI analyzes the high-contrast shading patterns, its learned parameters interpret them as motion cues and predict that the next frame will be shifted. Because the static image never changes, the same prediction fires again on every frame, creating a continuous loop of anticipated motion. This shortcut lets the visual system interpret its surroundings quickly, though it occasionally misreads a scene, perceiving motion where there is none.
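The loop described above, in which only the mismatch between expectation and input is processed, has a simple generic form. The sketch below is a textbook-style predictive-coding update, not the brain's circuitry or PredNet's architecture; the learning rate and the convergence behavior are illustrative assumptions.

```python
import numpy as np

def predictive_coding_step(prediction, observation, rate=0.5):
    # Only the residual (prediction error) is passed on; the internal
    # prediction is then nudged toward what was actually observed.
    residual = observation - prediction
    return prediction + rate * residual, residual

observation = np.array([1.0, 0.0, 1.0])  # the same static input, frame after frame
prediction = np.zeros(3)                 # initial expectation: nothing there

for _ in range(8):
    prediction, residual = predictive_coding_step(prediction, observation)

# With an unchanging input, the residual shrinks toward zero: once the
# world matches expectations, almost nothing is left to process.
print(float(np.abs(residual).max()))  # 0.0078125
```

The efficiency argument falls out of this loop: after a few steps the system handles only a tiny residual instead of the full input, but any systematic bias in the prediction (such as a learned "this pattern moves" rule) keeps regenerating a spurious error signal.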

Key Differences Between Human Perception and AI Vision

Despite the remarkable similarities, important differences emerged in how AI and humans experience these optical illusions. While humans can "freeze" any specific circle by staring directly at it, PredNet always perceives all circles as moving simultaneously [1]. Watanabe attributes this limitation to PredNet's lack of an attention mechanism, which prevents it from focusing on one specific spot the way biological vision systems can.

The researcher emphasizes that no deep neural network can currently experience all optical illusions the way humans do [1]. This gap reveals fundamental differences in how artificial and biological systems process visual information, despite their architectural similarities.

Why This Matters for AI Development

The fact that AI succumbs to optical illusions represents one of the strongest validations of modern neural network architecture. As these systems become more capable of navigating the real world, they appear to be converging on the same solution nature discovered millions of years ago through evolution. The research suggests that what we often perceive as flaws—prediction error and visual hallucinations—are actually mathematical inevitabilities of any system, biological or artificial, that attempts to predict future states efficiently.

While tech giants invest billions trying to create artificial general intelligence that surpasses human cognition, these findings offer perspective on current AI limitations [1]. The discovery that AI models share our perceptual vulnerabilities indicates they are not simply recording data but actively trying to understand it. In the realm of vision, these hallucinations show the machine is learning to see, and in doing so, learning to be tricked just like us.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited