5 Sources
[1]
Could AI understand emotions better than we do?
Is artificial intelligence (AI) capable of suggesting appropriate behaviour in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs -- including ChatGPT -- to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.

Large Language Models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. The ChatGPT generative AI, for example, is based on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behaviour?

Emotionally charged scenarios

To find out, a team from UniBE's Institute of Psychology and UNIGE's Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. "We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions," says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.

For example: One of Michael's colleagues has stolen his idea and is being unfairly congratulated. What would be Michael's most effective reaction? Here, option b) was considered the most appropriate. In parallel, the same five tests were administered to human participants. "In the end, the LLMs achieved significantly higher scores -- 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence," explains Marcello Mortillaro, senior scientist at UNIGE's Swiss Center for Affective Sciences (CISA), who was involved in the research.

New tests in record time

In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by over 400 participants. "They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop," explains Katja Schlegel. "LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions," adds Marcello Mortillaro. These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.
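As a concrete illustration of this kind of testing protocol, the sketch below scores a model on multiple-choice EI items and reports the fraction answered correctly. It is a minimal, hypothetical harness, not the study's actual pipeline: `ask_model` stands in for any chat-completion wrapper, and the answer options shown are invented, since the article does not reproduce them (only that option b) was the keyed answer).

```python
# Hypothetical harness: administer multiple-choice EI items to a model
# and compute percent correct. `ask_model` is a placeholder for any
# chat-completion call; the options below are invented for illustration.

from typing import Callable

def score_model(ask_model: Callable[[str], str], items: list[dict]) -> float:
    """Return the fraction of multiple-choice items answered correctly."""
    correct = 0
    for item in items:
        options = "\n".join(f"{k}) {v}" for k, v in item["options"].items())
        prompt = (
            f"{item['scenario']}\n{options}\n"
            "Reply with the letter of the most effective reaction."
        )
        reply = ask_model(prompt).strip().lower()
        if reply.startswith(item["answer"]):
            correct += 1
    return correct / len(items)

# One invented item in the spirit of the Michael example; the real tests'
# answer options are not reproduced in the article.
items = [{
    "scenario": (
        "One of Michael's colleagues has stolen his idea and is being "
        "unfairly congratulated. What would be Michael's most effective "
        "reaction?"
    ),
    "options": {
        "a": "Confront the colleague in front of everyone",
        "b": "Talk to his supervisor about the situation",
        "c": "Silently resent the colleague",
    },
    "answer": "b",  # per the article, option b) was scored as most appropriate
}]

# accuracy = score_model(my_llm_call, items)  # my_llm_call: any chat wrapper
```

Repeating each item several times and comparing runs would give the kind of consistency check the coverage below describes; the harness above would simply be called once per run.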
[2]
AI Shows Higher Emotional IQ than Humans - Neuroscience News
Summary: A new study tested whether artificial intelligence can demonstrate emotional intelligence by evaluating six generative AIs, including ChatGPT, on standard emotional intelligence (EI) assessments. The AIs achieved an average score of 82%, significantly higher than the 56% scored by human participants. These systems not only excelled at selecting emotionally intelligent responses but were also able to generate new, reliable EI tests in record time. The findings suggest that AI could play a role in emotionally sensitive domains like education, coaching, and conflict resolution, when supervised appropriately.

Author: Antoine Guenot
Source: University of Geneva
Contact: Antoine Guenot - University of Geneva
Image: The image is credited to Neuroscience News

Original Research: Open access. "Large language models are proficient in solving and creating emotional intelligence tests" by Marcello Mortillaro et al., Communications Psychology.

Abstract

Large Language Models (LLMs) demonstrate expertise across diverse domains, yet their capacity for emotional intelligence remains uncertain. This research examined whether LLMs can solve and generate performance-based emotional intelligence tests. Results showed that ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3 outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 81%, compared to the 56% human average reported in the original validation studies. In a second step, ChatGPT-4 generated new test items for each emotional intelligence test. These new versions and the original tests were administered to human participants across five studies (total N = 467). Overall, original and ChatGPT-generated tests demonstrated statistically equivalent test difficulty. Perceived item clarity and realism, item content diversity, internal consistency, correlations with a vocabulary test, and correlations with an external ability emotional intelligence test were not statistically equivalent between original and ChatGPT-generated tests. However, all differences were smaller than Cohen's d ± 0.25, and none of the 95% confidence interval boundaries exceeded a medium effect size (d ± 0.50). Additionally, original and ChatGPT-generated tests were strongly correlated (r = 0.46). These findings suggest that LLMs can generate responses that are consistent with accurate knowledge about human emotions and their regulation.
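The abstract's equivalence claims are specific: every measured difference between original and ChatGPT-generated tests fell below Cohen's d = 0.25, with 95% confidence intervals inside the medium-effect bound of d = 0.50. The short Python sketch below illustrates that style of check on invented placeholder scores; it is not the paper's analysis code, and the bootstrap confidence interval is a stand-in for whatever interval method the authors used.

```python
# Minimal sketch of the equivalence logic in the abstract: compute Cohen's d
# between original and generated test scores and check it against the
# |d| < 0.25 observed bound and the |d| < 0.50 CI bound. The score arrays
# are invented placeholders, not the study's data.

import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
original = rng.normal(0.56, 0.15, 200)   # hypothetical scores, original tests
generated = rng.normal(0.57, 0.15, 200)  # hypothetical scores, generated tests

d = cohens_d(original, generated)

# Bootstrap a 95% CI for d (resampling with replacement) to mirror the
# paper's reported interval check.
boot = [
    cohens_d(rng.choice(original, len(original)),
             rng.choice(generated, len(generated)))
    for _ in range(2000)
]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"d = {d:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("within medium-effect bounds:", -0.5 < lo and hi < 0.5)
```

With these placeholder numbers the interval stays well inside ±0.50, mirroring the pattern the abstract reports for all measured test properties.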
[3]
AI shows better emotional intelligence than humans - Earth.com
Emotions have always been tricky. People often second-guess how others feel, and even well-trained professionals can miss signs of frustration or sadness. Now, a new study suggests that artificial intelligence (AI) and large language models (LLMs) might be better at reading the room than many humans.

The work was led by the University of Geneva (UNIGE) and the University of Bern (UniBE). One of the key investigators was Dr. Katja Schlegel, a lecturer in the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE.

The research team worked with six generative AIs, including ChatGPT, to see how well they could handle emotional intelligence questions. Each question asked for the most effective reaction to a particular scenario in which someone was caught up in negative feelings. A real-world example might be deciding whether to talk to a supervisor about a stolen idea or keep quiet and stay resentful.

"This suggests that these AI systems not only understand emotions, but also grasp what it means to behave with emotional intelligence," noted Dr. Marcello Mortillaro, senior scientist at UNIGE's Swiss Center for Affective Sciences (CISA).

The team discovered that these AIs answered correctly more often than typical human participants, earning an accuracy rate of around 81 percent. Previous studies had placed the human average at 56 percent on the same tests.

Each AI was given prompts describing situations where someone was sad, worried, annoyed, or otherwise emotionally off balance, and chose the best coping strategy from multiple-choice answers. The researchers were surprised by how consistent these performance levels were: even when the same question was posed multiple times, the AIs picked the most emotionally sound course of action.

The next step involved asking ChatGPT-4 to produce brand-new emotional intelligence tests, with fresh scenarios and answer sets, faster than any typical human-led process. Over 400 individuals tried these computer-generated quizzes, and participants rated them as valid and coherent. "They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop," noted Dr. Schlegel. There were minor differences in content variety, but the new questionnaires held up well in test comparisons.

This exploration suggests a potential future where AI-based tutoring and personal development tools include emotional coaching. Guidance counselors, team-building specialists, and leadership trainers might lean on these systems to create group exercises for conflict resolution. A user could receive direct suggestions for handling tense conversations with co-workers or relatives.

This does not mean AI is about to replace human interaction. Many situations still call for a personal touch. However, the data show that computers can be surprisingly efficient at mapping feelings and recommending balanced approaches.

Despite its strong performance on emotional intelligence tests, AI still lacks the ability to truly feel anything. Emotional intelligence in humans includes not just choosing the right response, but sensing tone, body language, and context in real-time interactions, factors AI can't fully interpret or replicate. Even a perfect answer on a test doesn't mean the AI could read the room during a live conversation or adjust based on shifting moods.
There's also the issue of transparency. These models don't tell us how they arrive at their answers. When one gets a question right, we don't know whether it is following emotional logic or just regurgitating patterns it has seen. That makes it hard to build trust in high-stakes areas like mental health care or mediation, where understanding emotions isn't just useful, it's essential.

Social robots and virtual coaches have been around for years, but they used to rely on simpler models that scanned facial expressions or voice tones. Now, with massive text databases and more advanced language processing, AI can respond to emotional cues based on extensive learned patterns.

Some experts say these machines might keep evolving. Others warn that cultural nuances can confuse AI, because emotional norms differ by region and background. Despite that, many see a benefit in using software that captures the universal aspects of stress, sadness, or joy.

Just because AI can pick the right answers doesn't mean it should act on them without supervision. Emotionally intelligent behavior still needs a human filter, especially when the stakes are high. Whether it's advising a student in distress or guiding workplace decisions, people must remain in control of how these tools are applied.

There's also the risk of over-reliance. If users start trusting AI more than their own judgment, or begin offloading sensitive decisions to it entirely, that could dull emotional growth and accountability. Experts warn that while AI can support learning, it should never replace human empathy or relational wisdom.

The study sheds light on how AI systems might one day collaborate with humans in education, training, and mediating disputes. Their knack for spotting the best course of action in emotional dilemmas raises questions about how to use them in coaching, therapy, and everyday life. Time will tell whether these systems keep improving or plateau in how much they can mimic our own self-awareness. What is certain is that they have already shown a knack for matching, and even surpassing, many people in emotional understanding.
[4]
AI outperforms humans in emotional intelligence tests, study finds
Is artificial intelligence (AI) capable of suggesting appropriate behavior in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs -- including ChatGPT -- to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.

Large language models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. The ChatGPT generative AI, for example, is based on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behavior?

Emotionally charged scenarios

To find out, a team from UniBE's Institute of Psychology and UNIGE's Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. "We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions," says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.

For example: One of Michael's colleagues has stolen his idea and is being unfairly congratulated. What would be Michael's most effective reaction? Here, option 2 was considered the most appropriate. In parallel, the same five tests were administered to human participants. "In the end, the LLMs achieved significantly higher scores -- 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence," explains Marcello Mortillaro, senior scientist at UNIGE's Swiss Center for Affective Sciences (CISA), who was involved in the research.

New tests in record time

In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by more than 400 participants. "They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop," explains Schlegel. "LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions," adds Mortillaro. These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.
[5]
A New Study Reveals ChatGPT-4 And Other Advanced AI Models Outperform Humans In Emotional Intelligence, Opening New Paths In Education And Coaching
Artificial intelligence keeps taking the community by surprise, especially large language models, whose advanced capabilities keep surpassing expectations. Tech giants like OpenAI are increasingly focused on shipping more efficient AI models and pushing the technology further to handle mundane tasks on consumers' behalf. While we have been hearing about the vast application of these tools across varied domains, a recent study evaluated how the models perform on emotional intelligence, and the results are not what you might expect.

Companies and researchers are increasingly invested in finding ways artificial intelligence can bring more efficiency to institutions, and OpenAI and many other tech giants are working hard to make their models feel more natural through capabilities like contextual understanding. A recent study by University of Geneva and University of Bern researchers examined AI's empathetic capabilities. The study, published in Communications Psychology, delivered some interesting findings: generative AI models like ChatGPT not only demonstrated emotional intelligence but also outperformed humans on emotional intelligence tests.

The study put six leading large language models through a series of tests: ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Claude 3.5 Haiku, Copilot 365, and DeepSeek V3. It used five emotional intelligence tests commonly employed in academic and professional settings to assess emotional understanding, regulation, and management. The scenarios presented were realistic and emotionally charged, designed to see how the models would respond to the emotional context provided.

The results surprised the researchers: all of the LLMs significantly outperformed the human participants. The researchers went a step further by asking ChatGPT-4 to create new EI test items, which were then validated with human participants; the results remarkably held up, with the AI models demonstrating a high level of contextual understanding. One of the senior researchers, Marcello Mortillaro, had the following to say on the findings:

"LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions."

These findings matter, especially as the technology increasingly exceeds expectations in domains previously thought exclusive to humans. They could have broad implications for augmenting human skills in sensitive fields such as conflict management or coaching.
A study by researchers from the University of Geneva and University of Bern reveals that AI models, including ChatGPT, outperformed humans in emotional intelligence tests and were able to generate new tests rapidly, suggesting potential applications in education, coaching, and conflict management.
A groundbreaking study conducted by researchers from the University of Geneva (UNIGE) and the University of Bern (UniBE) has revealed that artificial intelligence (AI) systems, particularly Large Language Models (LLMs), outperform humans in emotional intelligence tests. The study, published in Communications Psychology, tested six generative AI models, including ChatGPT, on standard emotional intelligence assessments typically designed for humans [1].
The research team, led by Katja Schlegel from UniBE and Marcello Mortillaro from UNIGE, subjected the AI models to five emotional intelligence tests commonly used in research and corporate settings. These tests presented emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions [2].
The results were striking: across the five tests, the LLMs averaged 82% correct answers, compared with 56% for the human participants.
In a further demonstration of AI capabilities, the researchers tasked ChatGPT-4 with creating new emotional intelligence tests. These AI-generated tests were then administered to over 400 human participants [4].
The results were equally impressive: participants rated the AI-generated tests as being as reliable, clear, and realistic as the originals, which had taken years to develop.
The study's findings suggest that AI systems not only understand emotions but also grasp what it means to behave with emotional intelligence. This opens up new possibilities for AI applications in various fields:
Education: AI could potentially assist in developing emotional intelligence curricula and providing personalized coaching to students.
Coaching and Personal Development: AI-based tools could offer emotional intelligence training and guidance for professional and personal growth.
Conflict Management: AI systems might aid in creating scenarios for conflict resolution training or even assist in mediating disputes.
Mental Health Support: While not replacing human therapists, AI could potentially offer initial emotional support or triage in mental health contexts.
Despite the impressive results, researchers and experts caution against over-reliance on AI for emotional tasks:

Genuine feeling: AI still lacks the ability to truly feel anything; it cannot fully sense tone, body language, or shifting moods in live interactions.

Transparency: These models do not reveal how they arrive at their answers, which makes trust difficult in high-stakes areas like mental health care or mediation.

Cultural nuance: Emotional norms differ by region and background, and such differences can confuse AI systems.

Human oversight: Emotionally intelligent behavior still needs a human filter; people must remain in control of how these tools are applied, especially when the stakes are high.
As AI continues to evolve, its role in understanding and responding to human emotions will likely expand, potentially revolutionizing fields that were once thought to be exclusively human domains. However, the integration of AI in these areas will require careful consideration, ethical guidelines, and ongoing research to ensure its responsible and effective use.