3 Sources
[1]
Can AI Truly Grasp Colorful Metaphors Without Seeing Color? - Neuroscience News
Summary: A new study tested how humans and ChatGPT understand color metaphors, revealing key differences between lived experience and language-based AI. Surprisingly, colorblind and color-seeing humans showed similar comprehension, suggesting vision isn't essential for interpreting metaphors. Painters, however, outperformed others on novel metaphors, indicating that hands-on color experience deepens understanding. ChatGPT generated consistent, culture-informed answers but struggled with novel or inverted metaphors, highlighting the limits of language-only models in fully capturing human cognition.

ChatGPT works by analyzing vast amounts of text, identifying patterns and synthesizing them to generate responses to users' prompts. Color metaphors like "feeling blue" and "seeing red" are commonplace throughout the English language, and therefore comprise part of the dataset on which ChatGPT is trained. But while ChatGPT has "read" billions of words about what it might mean to feel blue or see red, it has never actually seen a blue sky or a red apple in the ways that humans have.

This raises the questions: Do embodied experiences -- the capacity of the human visual system to perceive color -- allow people to understand colorful language beyond the textual ways ChatGPT does? Or is language alone, for both AI and humans, sufficient to understand color metaphors?

New results from a study published in Cognitive Science led by Professor Lisa Aziz-Zadeh and a team of university and industry researchers offer some insights into those questions, and raise even more. "ChatGPT uses an enormous amount of linguistic data to calculate probabilities and generate very human-like responses," said Aziz-Zadeh, the publication's senior author. "But what we are interested in exploring is whether or not that's still a form of secondhand knowledge, in comparison to human knowledge grounded in firsthand experiences."

Aziz-Zadeh is the director of the USC Center for the Neuroscience of Embodied Cognition and holds a joint appointment at the USC Dornsife Brain and Creativity Institute. Her lab uses brain imaging techniques to examine how neuroanatomy and neurocognition are involved in higher-order skills including language, thought, emotions, empathy and social communication. The study's interdisciplinary team included psychologists, neuroscientists, social scientists, computer scientists and astrophysicists from UC San Diego, Stanford, Université de Montréal, the University of the West of England and Google DeepMind, Google's AI research company based in London. A Google Faculty Gift to Aziz-Zadeh partially funded the study.

The research team conducted large-scale online surveys comparing four participant groups: color-seeing adults, colorblind adults, painters who regularly work with color pigments, and ChatGPT. Each group was tasked with assigning colors to abstract words like "physics." Groups were also asked to decipher familiar color metaphors ("they were on red alert") and unfamiliar ones ("it was a very pink party"), and then to explain their reasoning.

Results show that color-seeing and colorblind humans were surprisingly similar in their color associations, suggesting that, contrary to the researchers' hypothesis, visual perception is not necessarily a requirement for metaphorical understanding. However, painters showed a significant boost in correctly interpreting novel color metaphors. This suggests that hands-on experience using color unlocks deeper conceptual representations of it in language.
ChatGPT also generated highly consistent color associations, and when asked to explain its reasoning, often referenced emotional and cultural associations with various colors. For example, to explain the pink party metaphor, ChatGPT replied that "Pink is often associated with happiness, love, and kindness, which suggest that the party was filled with positive emotions and good vibes." However, ChatGPT used embodied explanations less frequently than humans did. It also broke down more often when prompted to interpret novel metaphors ("the meeting made him burgundy") or invert color associations ("the opposite of green").

As AI continues to evolve, studies like this underscore the limits of language-only models in representing the full range of human understanding. Future research may explore whether integrating sensory input -- such as visual or tactile data -- could help AI models move closer to approximating human cognition. "This project shows that there's still a difference between mimicking semantic patterns, and the spectrum of human capacity for drawing upon embodied, hands-on experiences in our reasoning," Aziz-Zadeh said.

In addition to Aziz-Zadeh's Google Faculty Gift, this study was also supported by the Barbara and Gerson Bakar Faculty Fellowship and the Haas School of Business at the University of California, Berkeley. Google had no role in the study design, data collection, analysis or publication decisions.

Author: Leigh Hopper
Source: USC
Contact: Leigh Hopper - USC
Image: The image is credited to Neuroscience News

Original Research: Closed access. "Statistical or Embodied? Comparing Colorseeing, Colorblind, Painters, and Large Language Models in Their Processing of Color Metaphors" by Lisa Aziz-Zadeh et al., Cognitive Science.

Abstract

Can metaphorical reasoning involving embodied experience -- such as color perception -- be learned from the statistics of language alone? Recent work finds that colorblind individuals robustly understand and reason abstractly about color, implying that color associations in everyday language might contribute to the metaphorical understanding of color. However, it is unclear how much colorblind individuals' understanding of color is driven by language versus their limited (but no less embodied) visual experience. A more direct test of whether language supports the acquisition of humans' understanding of color is whether large language models (LLMs) -- those trained purely on text with no visual experience -- can nevertheless learn to generate consistent and coherent metaphorical responses about color. Here, we conduct preregistered surveys that compare colorseeing adults, colorblind adults, and LLMs in how they (1) associate colors to words that lack established color associations and (2) interpret conventional and novel color metaphors. Colorblind and colorseeing adults exhibited highly similar and replicable color associations with novel words and abstract concepts. Yet, while GPT (a popular LLM) also generated replicable color associations with impressive consistency, its associations departed considerably from colorseeing and colorblind participants. Moreover, GPT frequently failed to generate coherent responses about its own metaphorical color associations when asked to invert its color associations or explain novel color metaphors in context.
Consistent with this view, painters who regularly work with color pigments were more likely than all other groups to understand novel color metaphors using embodied reasoning. Thus, embodied experience may play an important role in metaphorical reasoning about color and the generation of conceptual connections between embodied associations.
[2]
AI struggles with color metaphors that humans easily understand - Earth.com
Phrases like "feeling blue" or "seeing red" show up in everyday speech - and most people instantly know they mean feeling sad or angry. But how do we pick up those meanings? Do we learn them through seeing color in the world, or just by hearing how people use them? A new study from the University of Southern California and Google DeepMind put that question to the test - comparing color-seeing adults, colorblind adults, professional painters, and ChatGPT to see what really shapes our understanding of colorful language: vision or vocabulary. Lisa Aziz‑Zadeh, a cognitive neuroscientist at USC's Dornsife Brain and Creativity Institute, headed the project with help from colleagues at Stanford, UC San Diego, Université de Montréal, and other centers. "ChatGPT uses an enormous amount of linguistic data to calculate probabilities and generate very human‑like responses," said Aziz‑Zadeh. She wanted to know whether that statistical talent could ever replace the firsthand way people learn color through sight and touch. For the study, volunteers answered online surveys that asked them to match abstract nouns such as physics or friendship with a hue from a digital palette. They also judged familiar metaphors like being "on red alert" and unfamiliar ones that called a celebration a very pink party. Color‑seeing and colorblind adults gave almost identical answers, hinting that lifetime language exposure can stand in for missing retinal data. Painters, however, nailed the trickier metaphors more often, suggesting that daily, hands‑on work with pigments sharpens conceptual color maps. ChatGPT produced steady associations too, but it stumbled on curveballs such as describing a burgundy meeting or reversing green to its opposite. When pressed to explain choices, the model leaned on culture. "Pink is often associated with happiness, love, and kindness, which suggest that the party was filled with positive emotions and good vibes," said ChatGPT. Artists likely won because practice binds linguistic and sensorimotor knowledge; long hours mixing alizarin and ultramarine create a rich mental index of hue, lightness, and mood. Earlier work shows that emotional links to colors track both universal patterns and cultural twists. In the new data, painters spotted fresh metaphors 14 percent more often than non‑painters, a gap the authors tie to the depth of tactile memory. That finding echoes classroom studies where drawing or sculpting helps students retain technical terms better than reading alone. Barsalou's grounded‑cognition model argues that every concept the mind stores reactivates sensory traces of how it was acquired. The USC results fit that idea: direct pigment play trumped pure sentence statistics when novelty appeared. Large language models rely on pattern frequency, not felt experience; their success on common idioms shows how much culture leaves in text. Their failure on burgundy and inverted green hints at what culture omits, especially edges of meaning that seldom enter print. The gap matters for safety because an AI assistant that misreads a color‑coded warning label could steer users toward hazards. Adding camera input or haptic feedback - as multimodal systems like CLIP already attempt - might help close that gap. One surprise was that adults born without red‑green vision still matched seething anger with red because language, not sight, planted the idea. The outcome backs earlier surveys showing nearly universal links between red and dominance, or blue and calm, regardless of pigment perception. 
Still, colorblind comprehension is not proof that vision is irrelevant. Participants reported noticing social cues such as traffic lights and lipstick shades through brightness and context, offering partial visual grounding even when hue channels were muted.

Researchers already experiment with models that link pixels to words, allowing systems to point at a crimson apple or sketch a turquoise nebula. Such pairing trains networks to ground vocabulary in wavelengths and textures, inching closer to the way toddlers learn.

Yet there is a cost. Multimodal models demand larger datasets and raise privacy concerns once cameras move from lab benches into public spaces. Software designers will need governance frameworks that spell out who owns the visual streams, how long they persist, and what biases lurk in the collected scenes. Without such guardrails, better color sense could come at the price of social trust.

"There's still a difference between mimicking semantic patterns and the spectrum of human capacity for drawing upon embodied, hands-on experiences in our reasoning," said Aziz-Zadeh. She believes fusing text with images, audio, or even olfactory streams will be key. Technically, that means new training pipelines where robots taste paint or wear camera-equipped gloves. Ethically, it means designing machines that know when they do not know, especially in domains like medicine, food safety, and aviation, where color signals life or death.

For readers, the study offers a nudge to engage more senses when learning. Writing with colored pens, mapping notes with sticky dots, or simply paying attention to sky hues can enrich both vocabulary and recall. Meanwhile, if an AI chatbot claims your angry text is tinged chartreuse, treat the judgment lightly. Until models gain something like retinas, their color sense will remain an eloquent but secondhand story.
[3]
Can ChatGPT actually 'see' red? New study results are nuanced
ChatGPT works by analyzing vast amounts of text, identifying patterns and synthesizing them to generate responses to users' prompts. Color metaphors like "feeling blue" and "seeing red" are commonplace throughout the English language, and therefore comprise part of the dataset on which ChatGPT is trained. But while ChatGPT has "read" billions of words about what it might mean to feel blue or see red, it has never actually seen a blue sky or a red apple in the ways that humans have.

This raises the questions: Do embodied experiences -- the capacity of the human visual system to perceive color -- allow people to understand colorful language beyond the textual ways ChatGPT does? Or is language alone, for both AI and humans, sufficient to understand color metaphors?

New results from a study published in Cognitive Science led by Professor Lisa Aziz-Zadeh and a team of university and industry researchers offer some insights into those questions, and raise even more. "ChatGPT uses an enormous amount of linguistic data to calculate probabilities and generate very human-like responses," said Aziz-Zadeh, the publication's senior author. "But what we are interested in exploring is whether or not that's still a form of secondhand knowledge, in comparison to human knowledge grounded in firsthand experiences."

Aziz-Zadeh is the director of the USC Center for the Neuroscience of Embodied Cognition and holds a joint appointment at the USC Dornsife Brain and Creativity Institute. Her lab uses brain imaging techniques to examine how neuroanatomy and neurocognition are involved in higher-order skills including language, thought, emotions, empathy and social communication. The study's interdisciplinary team included psychologists, neuroscientists, social scientists, computer scientists and astrophysicists from UC San Diego, Stanford, Université de Montréal, the University of the West of England and Google DeepMind, Google's AI research company based in London.

ChatGPT understands 'very pink party' better than 'burgundy meeting'

The research team conducted large-scale online surveys comparing four participant groups: color-seeing adults, color-blind adults, painters who regularly work with color pigments, and ChatGPT. Each group was tasked with assigning colors to abstract words like "physics." Groups were also asked to decipher familiar color metaphors ("they were on red alert") and unfamiliar ones ("it was a very pink party"), and then to explain their reasoning.

Results show that color-seeing and color-blind humans were surprisingly similar in their color associations, suggesting that, contrary to the researchers' hypothesis, visual perception is not necessarily a requirement for metaphorical understanding. However, painters showed a significant boost in correctly interpreting novel color metaphors. This suggests that hands-on experiences using color unlock deeper conceptual representations of it in language.

ChatGPT also generated highly consistent color associations, and when asked to explain its reasoning, often referenced emotional and cultural associations with various colors. For example, to explain the pink party metaphor, ChatGPT replied that "Pink is often associated with happiness, love, and kindness, which suggest that the party was filled with positive emotions and good vibes." However, ChatGPT used embodied explanations less frequently than humans did. It also broke down more often when prompted to interpret novel metaphors ("the meeting made him burgundy") or invert color associations ("the opposite of green").

As AI continues to evolve, studies like this underscore the limits of language-only models in representing the full range of human understanding. Future research may explore whether integrating sensory input -- such as visual or tactile data -- could help AI models move closer to approximating human cognition. "This project shows that there's still a difference between mimicking semantic patterns, and the spectrum of human capacity for drawing upon embodied, hands-on experiences in our reasoning," Aziz-Zadeh said.
A new study compares how AI language models and humans with varying color perception abilities understand and interpret color metaphors, revealing insights into the role of embodied experiences in language comprehension.
A groundbreaking study published in Cognitive Science has shed light on the differences between artificial intelligence and human understanding of color metaphors. Led by Professor Lisa Aziz-Zadeh from the University of Southern California, the research team conducted large-scale online surveys comparing color-seeing adults, colorblind adults, painters, and ChatGPT in their comprehension of color-related language [1].
The study involved four distinct groups:
Color-seeing adults
Colorblind adults
Painters who regularly work with color pigments
ChatGPT
Participants were tasked with assigning colors to abstract words, interpreting familiar and unfamiliar color metaphors, and explaining their reasoning [2].
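To make that comparison concrete, the sketch below is a hypothetical illustration, not the authors' analysis code: it shows one simple way such survey responses could be scored, by tallying each group's most common color per word and then measuring how often the two groups' modal choices match. All names and responses here are invented.

```python
from collections import Counter

# Invented example data: each group maps an abstract word to the color
# choices made by individual respondents.
colorseeing = {
    "physics":    ["blue", "blue", "grey", "blue"],
    "friendship": ["yellow", "yellow", "pink", "yellow"],
}
colorblind = {
    "physics":    ["blue", "grey", "blue", "blue"],
    "friendship": ["yellow", "pink", "yellow", "yellow"],
}

def modal_color(choices):
    """Most frequently chosen color for one word within one group."""
    return Counter(choices).most_common(1)[0][0]

def agreement(group_a, group_b):
    """Fraction of shared words whose modal color matches across groups."""
    words = group_a.keys() & group_b.keys()
    matches = sum(modal_color(group_a[w]) == modal_color(group_b[w]) for w in words)
    return matches / len(words)

print(f"Modal-color agreement: {agreement(colorseeing, colorblind):.0%}")
```

On this toy data both groups converge on the same modal colors, echoing the study's finding that color-seeing and colorblind respondents gave highly similar answers; the real analyses were of course far more extensive.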
Contrary to the researchers' initial hypothesis, color-seeing and colorblind adults showed remarkably similar color associations. This suggests that visual perception may not be essential for metaphorical understanding, and that language exposure can compensate for missing retinal data [1].
Interestingly, painters demonstrated a significant advantage in correctly interpreting novel color metaphors. This finding indicates that hands-on experiences with color can lead to deeper conceptual representations in language. Painters outperformed non-painters by 14% when identifying fresh metaphors, highlighting the importance of tactile memory and sensorimotor knowledge [2].
ChatGPT generated consistent color associations and often referenced emotional and cultural associations when explaining its reasoning. For example, it described a "pink party" as being associated with happiness, love, and kindness [3].
However, the AI model faced challenges in several areas:
Interpreting novel metaphors such as "the meeting made him burgundy"
Inverting color associations, for example naming "the opposite of green"
Drawing on embodied explanations, which it used less frequently than human participants
The study underscores the limitations of language-only models in fully representing human understanding. Future research may explore integrating sensory input, such as visual or tactile data, to help AI models better approximate human cognition [1].
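As a rough illustration of what "integrating sensory input" can look like in practice, the sketch below uses a publicly released contrastive vision-language model (CLIP, loaded through the Hugging Face transformers library) to score how well candidate color phrases describe an image. This is a generic example of pixel-to-word grounding, not part of the study, and the image path is a placeholder.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint: paired image and text encoders trained so
# that matching image/caption pairs sit close together in a shared space.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("apple.jpg")  # placeholder path; any local photo works
captions = ["a crimson apple", "a turquoise nebula", "a field of green grass"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability means the phrase is a better linguistic match for the pixels.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
```

Models of this kind associate color words with visual statistics from training images rather than with lived perception, so they illustrate one partial step toward grounding rather than a full answer to the embodiment question raised by the study.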
This research has implications beyond AI development:
Learning Enhancement: The study suggests that engaging multiple senses when learning can enrich both vocabulary and recall [2].
AI Safety: Misinterpretation of color-coded warnings by AI assistants could potentially lead to safety hazards [2].
Ethical Considerations: As AI models incorporate more sensory data, there will be a need for governance frameworks to address privacy concerns and potential biases [2].
In conclusion, while AI has made significant strides in language processing, this study highlights the ongoing importance of embodied, hands-on experiences in human reasoning and understanding [3].
Summarized by Navi