AI's Sensory Limitations: The Challenge of Understanding Concepts Like Flowers


A new study finds that AI models struggle to fully comprehend sensory-based concepts like flowers, underscoring the importance of multi-modal learning for the future development of AI.

AI's Sensory Limitations in Understanding Concepts

A groundbreaking study published in Nature Human Behaviour has revealed that artificial intelligence (AI) models, despite their advanced capabilities, struggle to fully comprehend sensory-based concepts in the same way humans do. The research, led by Qihui Xu from The Ohio State University, highlights the limitations of AI in understanding concepts like flowers, which humans experience through multiple senses [1].

Source: Tech Xplore

Comparing Human and AI Comprehension

The study compared the understanding of nearly 4,500 words between humans and large language models (LLMs) such as OpenAI's GPT-3.5 and GPT-4, and Google's PaLM and Gemini. Participants were asked to rate words on various aspects, including emotional arousal and connections to senses and physical interactions [2].
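The article does not reproduce the study's analysis, but the core of such a comparison can be sketched: gather human and model ratings for the same words on one dimension, then measure how closely the two rankings agree. Below is a minimal illustration in Python; the words, the ratings, and the "smell relatedness" dimension are all invented for the example, and a rank correlation is just one plausible way to quantify alignment.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings (1-5 scale) for a few words on one sensory
# dimension, e.g. how strongly a word relates to smell. The actual
# study covered nearly 4,500 words across many dimensions.
words = ["rose", "justice", "daisy", "theory", "lavender"]
human_ratings = np.array([4.8, 1.1, 4.5, 1.0, 4.9])
model_ratings = np.array([4.2, 1.3, 3.9, 1.2, 4.0])

# Spearman's rank correlation asks: does the model order the words
# the same way humans do on this dimension?
rho, p_value = spearmanr(human_ratings, model_ratings)
print(f"Human-model rating alignment: rho={rho:.2f} (p={p_value:.3f})")
```

Spearman's correlation compares orderings rather than raw scores, which sidesteps any difference in how humans and models use the rating scale.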

Results showed that while AI models performed well in comprehending abstract concepts without sensory associations, they struggled significantly with words linked to physical experiences. For instance, the models tended to associate experiencing flowers with the torso, an association most humans would find unusual [1].

The Role of Sensory Experience in Concept Formation

Source: Gizmodo

Researchers attribute this discrepancy to the AI's lack of sensory and motor experiences. As Xu explains, "A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers" [3]. This limitation prevents AI from forming a complete representation of concepts like flowers, which for humans involve a rich tapestry of sensory inputs and emotional responses.

Implications for AI Development

The study's findings have significant implications for AI development and human-AI interaction. They suggest that future advances in AI may require more than just processing vast amounts of text data. Multi-modal training, which incorporates visual information alongside text, has shown promise in improving AI's understanding of visual concepts [2].
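The coverage does not specify which multi-modal systems were examined, but a widely known public example of pairing visual information with text is CLIP, which is trained to match images with captions. The sketch below uses the Hugging Face transformers implementation of a pretrained CLIP model; the image path flower.jpg is a placeholder, and this is an illustration of image-text alignment in general, not anything from the study itself.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a publicly available image-text model. CLIP learns a shared
# embedding space for images and captions during training.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("flower.jpg")  # placeholder image file
captions = ["a photo of a flower", "a photo of a car"]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity scores: higher probability means
# the model considers that caption a better match for the image.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0]):
    print(f"{caption}: {p:.2f}")
```

Grounding text in images this way is one route toward the richer concept representations the researchers describe, though it still stops short of smell, touch, and motor experience.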

The Potential of Embodied AI

Researchers propose that giving AI models a physical form through robotics could lead to substantial improvements in their ability to understand and interact with the world. Philip Feldman from the University of Maryland, Baltimore County, suggests that exposing AI to sensorimotor input through a robot body could result in a significant leap in capabilities [1].

However, this approach comes with its own set of challenges and risks. Feldman warns about the potential for physical harm and the need for careful implementation of safety measures in robotic AI systems [1].

Future Directions in AI Research

As AI continues to evolve, researchers are exploring ways to bridge the gap between machine learning and human-like understanding. The integration of sensor data and robotics in future AI models may enable them to make inferences about and interact with the physical world more effectively [3].

This research underscores the complexity of human cognition and the challenges that remain in creating AI systems that can truly replicate the depth and richness of human understanding. As Xu concludes, "The human experience is far richer than words alone can hold" [2].
