Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided
by Self-presentation Theory
- URL: http://arxiv.org/abs/2312.08702v3
- Date: Tue, 2 Jan 2024 01:41:51 GMT
- Title: Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided
by Self-presentation Theory
- Authors: Linzhuang Sun, Nan Xu, Jingxuan Wei, Bihui Yu, Liping Bu, Yin Luo
- Abstract summary: We have designed an innovative categorical approach that segregates historical dialogues into sensible and rational sentences.
We employ LLaMA2-70b as a rational brain to analyze the profound logical information maintained in conversations.
- Score: 8.594894523359887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Having the ability to empathize is crucial for accurately representing human
behavior during conversations. Although numerous studies aim to improve the
cognitive capability of models by incorporating external knowledge, limited
attention has been paid to the sensible and rational expression of the
conversation itself, which are crucial components of cognitive empathy. Guided
by self-presentation theory in sociology, we have designed an innovative
categorical approach that segregates historical dialogues into sensible and
rational sentences and subsequently elucidates the context through the designed
attention mechanism. However, the rational information within the conversation
is restricted, and the external knowledge used in previous methods suffers from
semantic contradiction and a narrow field of vision. Considering the impressive
performance of LLMs in the domain of intelligent agents, we employ LLaMA2-70b
as a rational brain to analyze the profound logical information maintained in
conversations, which assists the model in assessing the balance of sensibility
and rationality to produce high-quality empathetic responses.
Experimental evaluations demonstrate that our method outperforms other
comparable methods on both automatic and human evaluations.
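A minimal sketch of the pipeline the abstract describes: split the dialogue
history into sensible and rational sentences, prepare a prompt for the
LLaMA2-70b "rational brain", and fuse both views with an attention layer.
This is not the authors' released code; the keyword-based sentence split, the
prompt wording, the SensibilityRationalityFusion module, and all dimensions
are illustrative assumptions, and the actual LLaMA2-70b call is omitted.
```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn

# Assumed heuristic cues standing in for the paper's categorical segregation.
RATIONAL_CUES = ("because", "therefore", "so that", "which means")

def split_dialogue(history: list[str]) -> tuple[list[str], list[str]]:
    """Crudely separate rational (logic-bearing) from sensible sentences."""
    rational = [s for s in history if any(c in s.lower() for c in RATIONAL_CUES)]
    sensible = [s for s in history if s not in rational]
    return sensible, rational

def rational_brain_prompt(history: list[str]) -> str:
    """Prompt that would be sent to LLaMA2-70b (the call itself is omitted)."""
    return (
        "Analyze the logical information in this conversation and summarize "
        "the speaker's reasoning:\n" + "\n".join(history)
    )

class SensibilityRationalityFusion(nn.Module):
    """Cross-attention from sensible-sentence states to rational-analysis states."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sensible_h: torch.Tensor, rational_h: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=sensible_h, key=rational_h, value=rational_h)
        return self.norm(sensible_h + fused)  # residual + norm, feeds the response decoder

if __name__ == "__main__":
    history = [
        "I failed my exam today.",
        "I feel terrible because I studied all week.",
    ]
    sensible, rational = split_dialogue(history)
    print("sensible:", sensible)
    print("rational:", rational)
    print(rational_brain_prompt(history)[:60], "...")
    fusion = SensibilityRationalityFusion()
    out = fusion(torch.randn(1, 4, 256), torch.randn(1, 2, 256))
    print("fused shape:", tuple(out.shape))
```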
Related papers
- Emergence of Hierarchical Emotion Organization in Large Language Models [25.806354070542678]
We find that large language models (LLMs) naturally form hierarchical emotion trees that align with human psychological models. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Our results hint at the potential of using cognitively-grounded theories for developing better model evaluations.
arXiv Detail & Related papers (2025-07-12T15:12:46Z) - Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models [75.85319609088354]
Sentient Agent as a Judge (SAGE) is an evaluation framework for large language models. SAGE instantiates a Sentient Agent that simulates human-like emotional changes and inner thoughts during interaction. SAGE provides a principled, scalable and interpretable tool for tracking progress toward genuinely empathetic and socially adept language agents.
arXiv Detail & Related papers (2025-05-01T19:06:10Z) - Bridging Cognition and Emotion: Empathy-Driven Multimodal Misinformation Detection [56.644686934050576]
Social media has become a major conduit for information dissemination, yet it also facilitates the rapid spread of misinformation.
Traditional misinformation detection methods primarily focus on surface-level features, overlooking the crucial roles of human empathy in the propagation process.
We propose the Dual-Aspect Empathy Framework (DAE), which integrates cognitive and emotional empathy to analyze misinformation from both the creator and reader perspectives.
arXiv Detail & Related papers (2025-04-24T07:48:26Z) - Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework, which enables LLM to continuously optimize its value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - Leveraging LLMs with Iterative Loop Structure for Enhanced Social Intelligence in Video Question Answering [13.775516653315103]
Social intelligence is essential for effective communication and adaptive responses.
Current video-based methods for social intelligence rely on general video recognition or emotion recognition techniques.
We propose the Looped Video Debating framework, which integrates Large Language Models with visual information.
arXiv Detail & Related papers (2025-03-27T06:14:21Z) - Identifying Features that Shape Perceived Consciousness in Large Language Model-based AI: A Quantitative Study of Human Responses [4.369058206183195]
This study quantitatively examines which features of AI-generated text lead humans to perceive subjective consciousness in large language model (LLM)-based AI systems.
Using regression and clustering analyses, we investigated how these features influence participants' perceptions of AI consciousness.
arXiv Detail & Related papers (2025-02-21T10:27:28Z) - The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z) - Cause-Aware Empathetic Response Generation via Chain-of-Thought Fine-Tuning [12.766893968788263]
Empathetic response generation endows agents with the capability to comprehend dialogue contexts and react to expressed emotions.
Previous works predominantly focus on leveraging the speaker's emotional labels, but ignore the importance of emotion cause reasoning.
We propose a cause-aware empathetic generation approach by integrating emotions and causes through a well-designed Chain-of-Thought prompt.
arXiv Detail & Related papers (2024-08-21T13:11:03Z) - APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation [71.26755736617478]
Empathetic response generation is designed to comprehend the emotions of others.
We develop a framework that combines retrieval augmentation and emotional support strategy integration.
Our framework can enhance the empathy ability of LLMs from both cognitive and affective empathy perspectives.
arXiv Detail & Related papers (2024-07-23T02:23:37Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Language-Specific Representation of Emotion-Concept Knowledge Causally
Supports Emotion Inference [44.126681295827794]
This study used a form of artificial intelligence known as large language models (LLMs) to assess whether language-based representations of emotion causally contribute to the AI's ability to generate inferences about the emotional meaning of novel situations.
Our findings provide a proof-in-concept that even a LLM can learn about emotions in the absence of sensory-motor representations and highlight the contribution of language-derived emotion-concept knowledge for emotion inference.
arXiv Detail & Related papers (2023-02-19T14:21:33Z) - DialogueCRN: Contextual Reasoning Networks for Emotion Recognition in
Conversations [0.0]
We propose novel Contextual Reasoning Networks (DialogueCRN) to fully understand the conversational context from a cognitive perspective.
Inspired by the Cognitive Theory of Emotion, we design multi-turn reasoning modules to extract and integrate emotional clues.
The reasoning module iteratively performs an intuitive retrieving process and a conscious reasoning process, which imitates human unique cognitive thinking.
arXiv Detail & Related papers (2021-06-03T16:47:38Z) - CARE: Commonsense-Aware Emotional Response Generation with Latent
Concepts [42.106573635463846]
We propose CARE, a novel model for commonsense-aware emotional response generation.
We first propose a framework to learn and construct commonsense-aware emotional latent concepts of the response.
We then propose three methods to collaboratively incorporate the latent concepts into response generation.
arXiv Detail & Related papers (2020-12-15T15:47:30Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.