Heartificial Intelligence: Exploring Empathy in Language Models
- URL: http://arxiv.org/abs/2508.08271v1
- Date: Wed, 30 Jul 2025 14:09:33 GMT
- Title: Heartificial Intelligence: Exploring Empathy in Language Models
- Authors: Victoria Williams, Benjamin Rosman
- Abstract summary: Small and large language models consistently outperformed humans on cognitive empathy tasks. Despite their cognitive strengths, both small and large language models showed significantly lower affective empathy compared to human participants.
- Score: 8.517406772939292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have become increasingly common, used by millions of people worldwide in both professional and personal contexts. As these models continue to advance, they are frequently serving as virtual assistants and companions. In human interactions, effective communication typically involves two types of empathy: cognitive empathy (understanding others' thoughts and emotions) and affective empathy (emotionally sharing others' feelings). In this study, we investigated both cognitive and affective empathy across several small (SLMs) and large (LLMs) language models using standardized psychological tests. Our results revealed that LLMs consistently outperformed humans - including psychology students - on cognitive empathy tasks. However, despite their cognitive strengths, both small and large language models showed significantly lower affective empathy compared to human participants. These findings highlight rapid advancements in language models' ability to simulate cognitive empathy, suggesting strong potential for providing effective virtual companionship and personalized emotional support. Additionally, their high cognitive yet lower affective empathy allows objective and consistent emotional support without running the risk of emotional fatigue or bias.
Related papers
- Human attribution of empathic behaviour to AI systems [0.3364554138758564]
We examined differences in perceived empathy signals between human-written and large language model (LLM)-generated relationship advice, and the influence of authorship labels. Findings suggest that perceptions of empathic communication are primarily driven by linguistic features rather than authorship beliefs.
arXiv Detail & Related papers (2026-02-19T11:57:06Z) - Reflecting Twice before Speaking with Empathy: Self-Reflective Alternating Inference for Empathy-Aware End-to-End Spoken Dialogue [53.95386201009769]
We introduce EmpathyEval, a descriptive natural-language-based evaluation model for assessing empathetic quality in spoken dialogues. We propose ReEmpathy, an end-to-end Spoken Language Model that enhances empathetic dialogue through a novel Empathetic Self-Reflective Alternating Inference mechanism.
arXiv Detail & Related papers (2026-01-26T09:04:50Z) - PERM: Psychology-grounded Empathetic Reward Modeling for Large Language Models [45.377102925731826]
Large Language Models (LLMs) are increasingly deployed in human-centric applications, yet they often fail to provide substantive emotional support. We propose Psychology-grounded Empathetic Reward Modeling (PERM) to address this limitation.
arXiv Detail & Related papers (2026-01-15T15:56:55Z) - AER-LLM: Ambiguity-aware Emotion Recognition Leveraging Large Language Models [18.482881562645264]
This study is the first to explore the potential of Large Language Models (LLMs) in recognizing ambiguous emotions. We design zero-shot and few-shot prompting and incorporate past dialogue as context information for ambiguous emotion recognition.
arXiv Detail & Related papers (2024-09-26T23:25:21Z) - APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation [71.26755736617478]
Empathetic response generation aims to comprehend the emotions of others.
We develop a framework that combines retrieval augmentation and emotional support strategy integration.
Our framework can enhance the empathy ability of LLMs from both cognitive and affective empathy perspectives.
arXiv Detail & Related papers (2024-07-23T02:23:37Z) - Enablers and Barriers of Empathy in Software Developer and User Interaction: A Mixed Methods Case Study [11.260371501613994]
We studied how empathy is practised between developers and end users.
We identified the nature of awareness required to trigger empathy and enablers of empathy.
We discovered barriers to empathy and a set of potential strategies to overcome these barriers.
arXiv Detail & Related papers (2024-01-17T06:42:21Z) - The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z) - Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench [83.41621219298489]
We evaluate Large Language Models' (LLMs) anthropomorphic capabilities using the emotion appraisal theory from psychology.
We collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study.
We conduct a human evaluation involving more than 1,200 subjects worldwide.
arXiv Detail & Related papers (2023-08-07T15:18:30Z) - Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z) - EMP-EVAL: A Framework for Measuring Empathy in Open Domain Dialogues [0.0]
EMP-EVAL is a simple yet effective automatic empathy evaluation method.
The proposed technique takes into account the influence of emotion alongside cognitive and emotional empathy.
We show that our metrics correlate with human preferences, achieving results comparable to human judgments.
arXiv Detail & Related papers (2023-01-29T18:42:19Z) - Towards Persona-Based Empathetic Conversational Models [58.65492299237112]
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains.
In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy.
We propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding.
arXiv Detail & Related papers (2020-04-26T08:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.