Researchers just ran a massive test to see how well large language models (LLMs) like ChatGPT can understand human emotions — and the results might surprise you.
They pitted LLMs against humans in tasks measuring emotional intelligence (EQ), and in many cases, the AIs not only matched but outperformed people. This included understanding social dynamics, interpreting feelings, and reacting with empathy — especially in text-based scenarios.
But here’s the kicker: while AIs nailed structured EQ tests, they’re still far from replicating the complex, messy, and nuanced emotions that come with real human experiences. Researchers say these results should be seen as “functional” EQ — useful in specific contexts, like therapy bots or digital assistants, but not proof of real empathy.
In short: LLMs might be learning to sound emotionally intelligent, but that doesn’t mean they feel anything. Keep in mind this is all based on tests rather than real-life situations: the AI learned emotional behavior from text, not from lived experience. Make of that what you will.
So, is your chatbot emotionally smarter than you? Maybe in a quiz, but it still can’t cry during the flashback scene in Disney’s “Up.”