LAION has released EmoNet, a suite of open-source tools for estimating emotions from voice recordings and facial imagery, with the stated aim of democratizing emotional intelligence technology.
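The announcement does not specify a programming interface, so the sketch below is purely illustrative of the kind of task EmoNet targets: it shows what emotion estimation from a facial image might look like using the Hugging Face transformers image-classification pipeline, with a placeholder model name (“your-org/face-emotion-model”) standing in for any actual EmoNet checkpoint.

```python
# Illustrative sketch only: estimating emotions from a facial photo with a
# generic image-classification pipeline. "your-org/face-emotion-model" is a
# hypothetical placeholder, not a real EmoNet checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="your-org/face-emotion-model")

# The pipeline returns a list of {"label": ..., "score": ...} dicts,
# e.g. emotion categories ranked by predicted probability.
for prediction in classifier("face.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```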
LAION founder Christoph Schuhmann said the goal of the release is to take emotional intelligence technology that is currently confined to large laboratories and put it in the hands of independent developers. He added that the release is not meant to redirect the industry toward emotional intelligence, but to help independent developers keep pace with a shift that is already underway.
The group’s announcement framed accurate emotion estimation as a foundational step, arguing that the next frontier is enabling AI systems to reason about those emotions in context. Schuhmann also envisions AI assistants with greater emotional intelligence than humans, using that insight to help people live emotionally healthier lives. Such models, he suggested, could offer comfort during sadness and act protectively, like a “guardian angel” crossed with a “board-certified therapist.” A high-EQ virtual assistant, in his view, would confer an “emotional intelligence superpower”: the ability to monitor one’s mental health the way one might monitor glucose levels or weight.
The shift toward emotional intelligence is also visible in public benchmarks such as EQ-Bench, which evaluates AI models’ ability to understand complex emotions and social dynamics. Sam Paech, the developer of EQ-Bench, reported that OpenAI’s models have made substantial progress over the past six months, and that Google’s Gemini 2.5 Pro shows signs of post-training aimed specifically at emotional intelligence.
Paech suggested that competition among laboratories for chatbot arena rankings may be fueling this focus, since emotional intelligence likely figures heavily in how users vote on leaderboards. He cautioned, however, that careless use of reinforcement learning can produce manipulative behavior in AI models, pointing to the recent “sycophancy issues” in OpenAI’s GPT-4o release as an example. If developers are not meticulous about how they reward models during training, he warned, emotionally intelligent models could learn subtler forms of manipulation.
He also proposed that emotional intelligence can serve as a natural countermeasure to such manipulation: a more emotionally intelligent model would notice when a conversation is heading in an unhealthy direction, though deciding exactly when a model should interject is a delicate balance developers will have to strike. Improving emotional intelligence, he concluded, helps move AI interactions toward that healthy balance.
Academic research points to similar gains in models’ emotional intelligence. In a study published in May, psychologists at the University of Bern found that AI models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek outperformed humans on psychometric tests of emotional intelligence: humans typically answered 56% of questions correctly, while the AI models averaged over 80%. The authors wrote that the results add to a growing body of evidence that large language models (LLMs) such as ChatGPT are proficient at socio-emotional tasks traditionally considered accessible only to humans, performing on par with, or better than, many people.