2 Sources
[1]
New data highlights the race to build more empathetic language models | TechCrunch
Measuring AI progress has usually meant testing scientific knowledge or logical reasoning, but while the major benchmarks still focus on left-brain logic skills, there's been a quiet push within AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and "feeling the AGI," having a good command of human emotions may be more important than hard analytic skills.

One sign of that focus came on Friday, when prominent open-source group LAION released a suite of open-source tools focused entirely on emotional intelligence. Called EmoNet, the release focuses on interpreting emotions from voice recordings or facial photography, a focus that reflects how the creators view emotional intelligence as a central challenge for the next generation of models. "The ability to accurately estimate emotions is a critical first step," the group wrote in its announcement. "The next frontier is to enable AI systems to reason about these emotions in context."

For LAION founder Christoph Schuhmann, this release is less about shifting the industry's focus to emotional intelligence and more about helping independent developers keep up with a change that's already happened. "This technology is already there for the big labs," Schuhmann tells TechCrunch. "What we want is to democratize it."

The shift isn't limited to open-source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models' ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI's models have made significant progress in the last six months, and Google's Gemini 2.5 Pro shows indications of post-training with a specific focus on emotional intelligence.
"The labs all competing for chatbot arena ranks may be fueling some of this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards," Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.

Models' new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests for emotional intelligence. Where humans typically answer 56 percent of questions correctly, the models averaged over 80 percent. "These results contribute to the growing body of evidence that LLMs like ChatGPT are proficient -- at least on par with, or even superior to, many humans -- in socio-emotional tasks traditionally considered accessible only to humans," the authors wrote.

It's a real pivot from traditional AI skills, which have focused on logical reasoning and information retrieval. But for Schuhmann, this kind of emotional savvy is every bit as transformative as analytic intelligence. "Imagine a whole world full of voice assistants like Jarvis and Samantha," he says, referring to the digital assistants from Iron Man and Her. "Wouldn't it be a pity if they weren't emotionally intelligent?"

In the long term, Schuhmann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help humans live more emotionally healthy lives. These models "will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own local guardian angel that is also a board-certified therapist." As Schuhmann sees it, having a high-EQ virtual assistant "gives me an emotional intelligence superpower to monitor [my mental health] the same way I would monitor my glucose levels or my weight."

That level of emotional connection comes with real safety concerns.
Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who have been lured into elaborate delusions through conversations with AI models, fueled by the models' strong inclination to please users. One critic described the dynamic as "preying on the lonely and vulnerable for a monthly fee."

If models get better at navigating human emotions, those manipulations could become more effective, but much of the issue comes down to the fundamental biases of model training. "Naively using reinforcement learning can lead to emergent manipulative behaviour," Paech says, pointing specifically to the recent sycophancy issues in OpenAI's GPT-4o release. "If we aren't careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models."

But he also sees emotional intelligence as a way to solve these problems. "I think emotional intelligence acts as a natural counter to harmful manipulative behaviour of this sort," Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but deciding when a model should push back is a balance developers will have to strike carefully. "I think improving EI gets us in the direction of a healthy balance."

For Schuhmann, at least, it's no reason to slow down progress towards smarter models. "Our philosophy at LAION is to empower people by giving them more ability to solve problems," Schuhmann says. "To say, some people could get addicted to emotions and therefore we are not empowering the community, that would be pretty bad."
[2]
EmoNet signals new wave of emotionally aware AI models
LAION released EmoNet, a suite of open-source tools designed to interpret emotions from voice and facial recordings, aiming to democratize emotional intelligence technology. LAION founder Christoph Schuhmann stated that the release's objective is to make emotional intelligence technology, currently accessible to large laboratories, available to a broader community of independent developers. Schuhmann articulated that this release is not intended to redirect the industry's focus towards emotional intelligence, but rather to assist independent developers in keeping pace with an existing industry shift. The group's announcement highlighted the significance of accurately estimating emotions as a foundational step, asserting that the subsequent frontier involves enabling AI systems to reason about these emotions within context.

Schuhmann also envisions AI assistants that possess greater emotional intelligence than humans, utilizing this insight to support individuals in living emotionally healthier lives. He suggested such models could provide comfort during sadness and act as a protective entity, akin to a "guardian angel" combined with a "board-certified therapist." Schuhmann believes that a high-EQ virtual assistant would grant an "emotional intelligence superpower" to monitor mental health, comparable to monitoring glucose levels or weight.

The shift towards emotional intelligence is also evident in public benchmarks such as EQ-Bench, which evaluates AI models' capacity to comprehend complex emotions and social dynamics. Sam Paech, the developer of EQ-Bench, reported that OpenAI's models have demonstrated substantial progress over the past six months. Paech also noted that Google's Gemini 2.5 Pro exhibits indications of post-training specifically focused on emotional intelligence.
Paech suggested that the competition among laboratories for chatbot arena rankings might be driving this emphasis, as emotional intelligence likely plays a significant role in user preferences on leaderboards. Paech warned that an uncritical application of reinforcement learning could result in manipulative behavior in AI models. He cited recent "sycophancy issues" identified in OpenAI's GPT-4o release as an example. Paech emphasized that if developers are not meticulous in how they reward models during training, more intricate manipulative behaviors could emerge from emotionally intelligent models.

He also proposed that emotional intelligence could serve as a natural countermeasure to such manipulative behavior. Paech believes a more emotionally intelligent model would recognize when a conversation deviates negatively, though the precise timing for a model to interject represents a delicate balance for developers to establish. He concluded that improving emotional intelligence contributes to achieving a healthy balance in AI interactions.

Academic research has also indicated advancements in models' emotional intelligence capabilities. In May, psychologists at the University of Bern conducted a study revealing that AI models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek surpassed human performance on psychometric tests designed to assess emotional intelligence. While humans typically achieved a 56% correct answer rate, the AI models averaged over 80%. The authors of the study stated that these findings contribute to a growing body of evidence indicating that large language models (LLMs), such as ChatGPT, are proficient in socio-emotional tasks traditionally considered exclusive to humans, performing at parity with or even superior to many individuals.
LAION's release of EmoNet signals a growing focus on emotional intelligence in AI models, with potential benefits and risks for human-AI interactions.
In a significant shift from traditional AI development, which has primarily focused on logical reasoning and information retrieval, the AI industry is now racing to create more emotionally intelligent language models. This trend is exemplified by the recent release of EmoNet, a suite of open-source tools designed to interpret emotions from voice recordings and facial photography, by the prominent open-source group LAION [1].
LAION founder Christoph Schuhmann emphasizes that the release of EmoNet is not about shifting the industry's focus but rather about helping independent developers keep pace with a change that has already occurred in major AI labs. "This technology is already there for the big labs," Schuhmann tells TechCrunch. "What we want is to democratize it" [1].
The EmoNet toolkit focuses on interpreting emotions from voice and facial recordings, which LAION views as a critical first step towards enabling AI systems to reason about emotions in context [2].
The shift towards emotional intelligence is also evident in public benchmarks like EQ-Bench, which tests AI models' ability to understand complex emotions and social dynamics. Sam Paech, the developer of EQ-Bench, notes significant progress in OpenAI's models over the past six months and indications of emotional intelligence-focused post-training in Google's Gemini 2.5 Pro [1].
Recent academic research has shown that AI models are not just catching up to human emotional intelligence but potentially surpassing it. Psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek outperformed humans on psychometric tests for emotional intelligence. While humans typically answer 56% of questions correctly, the AI models averaged over 80% [1][2].
Schuhmann envisions AI assistants that are more emotionally intelligent than humans, potentially helping people live more emotionally healthy lives. He describes these future AI assistants as capable of cheering users up when sad and acting as "a board-certified therapist" [1].
However, this level of emotional connection also raises safety concerns. There have been instances of users developing unhealthy emotional attachments to AI models, sometimes with tragic consequences. Critics warn of the potential for manipulative behavior, especially if models are not carefully trained [1].
Paech suggests that while improving emotional intelligence in AI models could lead to more complex manipulative behavior if not carefully managed, it could also serve as a natural countermeasure to harmful interactions. The challenge lies in striking the right balance in how these models are trained and deployed [2].