3 Sources
[1]
Could AI understand emotions better than we do?
Is artificial intelligence (AI) capable of suggesting appropriate behaviour in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs -- including ChatGPT -- to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.

Large Language Models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. The ChatGPT generative AI, for example, is based on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behaviour?

Emotionally charged scenarios

To find out, a team from UniBE's Institute of Psychology and UNIGE's Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. "We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions," says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.

For example: one of Michael's colleagues has stolen his idea and is being unfairly congratulated. What would be Michael's most effective reaction? Here, option b) was considered the most appropriate. In parallel, the same five tests were administered to human participants.

"In the end, the LLMs achieved significantly higher scores -- 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence," explains Marcello Mortillaro, senior scientist at UNIGE's Swiss Center for Affective Sciences (CISA), who was involved in the research.

New tests in record time

In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests with new scenarios. These automatically generated tests were then taken by over 400 participants. "They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop," explains Katja Schlegel. "LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions," adds Marcello Mortillaro.

These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.
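The first phase described above reduces to multiple-choice scoring: each model's chosen options are compared against the test's answer key, and the resulting accuracy is compared with the human baseline. A minimal sketch of that scoring step, assuming invented items, answers, and answer key (only the model names come from the article):

```python
# Hypothetical scoring of LLM answers on a multiple-choice EI test.
# The items, chosen options, and answer key below are invented for
# illustration; only the model names appear in the article.

ANSWER_KEY = {"item_1": "b", "item_2": "c", "item_3": "a"}

model_answers = {
    "ChatGPT-4": {"item_1": "b", "item_2": "c", "item_3": "a"},
    "Claude 3.5 Haiku": {"item_1": "b", "item_2": "a", "item_3": "a"},
}

def accuracy(answers: dict, key: dict) -> float:
    """Fraction of items where the chosen option matches the key."""
    return sum(answers[item] == key[item] for item in key) / len(key)

for model, answers in model_answers.items():
    print(f"{model}: {accuracy(answers, ANSWER_KEY):.0%}")
```

In the study itself, per-model accuracies computed this way averaged 82%, against the 56% human average reported in the tests' validation studies.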
[2]
AI Shows Higher Emotional IQ than Humans - Neuroscience News
Summary: A new study tested whether artificial intelligence can demonstrate emotional intelligence by evaluating six generative AIs, including ChatGPT, on standard emotional intelligence (EI) assessments. The AIs achieved an average score of 82%, significantly higher than the 56% scored by human participants. These systems not only excelled at selecting emotionally intelligent responses but were also able to generate new, reliable EI tests in record time. The findings suggest that AI could play a role in emotionally sensitive domains like education, coaching, and conflict resolution, when supervised appropriately.
Author: Antoine Guenot
Source: University of Geneva
Contact: Antoine Guenot - University of Geneva
Image: The image is credited to Neuroscience News

Original Research: Open access. "Large language models are proficient in solving and creating emotional intelligence tests" by Marcello Mortillaro et al. Communications Psychology

Abstract

Large Language Models (LLMs) demonstrate expertise across diverse domains, yet their capacity for emotional intelligence remains uncertain. This research examined whether LLMs can solve and generate performance-based emotional intelligence tests. Results showed that ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3 outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 81%, compared to the 56% human average reported in the original validation studies. In a second step, ChatGPT-4 generated new test items for each emotional intelligence test. These new versions and the original tests were administered to human participants across five studies (total N = 467). Overall, original and ChatGPT-generated tests demonstrated statistically equivalent test difficulty. Perceived item clarity and realism, item content diversity, internal consistency, correlations with a vocabulary test, and correlations with an external ability emotional intelligence test were not statistically equivalent between original and ChatGPT-generated tests. However, all differences were smaller than Cohen's d ± 0.25, and none of the 95% confidence interval boundaries exceeded a medium effect size (d ± 0.50). Additionally, original and ChatGPT-generated tests were strongly correlated (r = 0.46). These findings suggest that LLMs can generate responses that are consistent with accurate knowledge about human emotions and their regulation.
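The abstract's equivalence claims ("statistically equivalent test difficulty", with all differences inside Cohen's d ± 0.25) rest on equivalence testing rather than ordinary significance testing. One common approach is two one-sided t-tests (TOST) with the margin expressed in Cohen's d units; the sketch below illustrates that general technique with invented data, and is not the paper's actual analysis code:

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, d_bound=0.25):
    """Two one-sided tests (TOST) for mean equivalence of two samples,
    with the equivalence margin given as a Cohen's d (here +/- 0.25,
    the margin reported in the abstract). Returns the observed Cohen's d
    and the TOST p-value (the larger of the two one-sided p-values)."""
    na, nb = len(a), len(b)
    # Pooled SD, used both for Cohen's d and to convert d to raw units.
    sp = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    delta = d_bound * sp                 # margin in raw score units
    diff = np.mean(a) - np.mean(b)
    se = sp * np.sqrt(1 / na + 1 / nb)
    df = na + nb - 2
    # H0_lower: diff <= -delta  |  H0_upper: diff >= +delta
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)
    p_upper = stats.t.cdf((diff - delta) / se, df)
    return diff / sp, max(p_lower, p_upper)

# Simulated proportion-correct scores for an original and a
# ChatGPT-generated test form (values invented for illustration).
rng = np.random.default_rng(42)
original = rng.normal(0.56, 0.15, 230)
generated = rng.normal(0.56, 0.15, 237)
d, p = tost_equivalence(original, generated)
print(f"observed d = {d:+.3f}, TOST p = {p:.4f}")
```

Equivalence is concluded only when both one-sided tests reject, i.e. when the TOST p-value falls below the chosen alpha; the paper additionally checked that no 95% confidence-interval boundary exceeded a medium effect size (d = 0.50).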
[3]
AI outperforms humans in emotional intelligence tests, study finds
A study by researchers from the University of Geneva and University of Bern reveals that AI systems, including ChatGPT, outperformed humans in emotional intelligence tests and can generate new EI assessments rapidly.
A groundbreaking study conducted by researchers from the University of Geneva (UNIGE) and the University of Bern (UniBE) has revealed that artificial intelligence (AI) systems, including ChatGPT, outperformed humans in emotional intelligence (EI) tests. The study, published in Communications Psychology, tested six Large Language Models (LLMs) on their ability to understand and respond to emotionally charged situations 1.
Source: Tech Xplore
The research team, led by Katja Schlegel from UniBE and Marcello Mortillaro from UNIGE, subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3) to five emotional intelligence tests commonly used in research and corporate settings 2. These tests presented emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions.
The results were striking:

- The LLMs averaged 82% correct answers across the five tests.
- Human participants averaged 56% correct answers.

This significant performance gap suggests that these AI models not only understand emotions but also grasp the concept of emotionally intelligent behavior 3.
In a second phase of the study, the researchers tasked ChatGPT-4 with creating new emotional intelligence tests featuring novel scenarios. These AI-generated tests were then administered to over 400 human participants. Remarkably, the new tests proved to be as reliable, clear, and realistic as the original tests, which had taken years for human experts to develop 1.
Source: Neuroscience News
The study's findings open up new possibilities for AI applications in fields traditionally thought to be exclusively human domains:

- Education
- Coaching
- Conflict management and resolution

However, the researchers emphasize that while AI shows promise in these areas, its use should be supervised by human experts to ensure ethical and appropriate application 2.
This research not only demonstrates the capabilities of current AI systems but also raises important questions about the nature of emotional intelligence and its relationship to artificial intelligence. As LLMs continue to evolve, their ability to understand and reason about human emotions may lead to more nuanced and empathetic AI interactions in various domains 3.
The study's results challenge our understanding of emotional intelligence and its uniqueness to human cognition. As AI systems become more adept at navigating complex emotional landscapes, it will be crucial to consider the ethical implications and potential societal impacts of emotionally intelligent AI in our daily lives and professional environments.