Sibyl: Sensible Empathetic Dialogue Generation with Visionary Commonsense Knowledge
- URL: http://arxiv.org/abs/2311.15316v2
- Date: Thu, 30 May 2024 06:18:20 GMT
- Title: Sibyl: Sensible Empathetic Dialogue Generation with Visionary Commonsense Knowledge
- Authors: Lanrui Wang, Jiangnan Li, Chenxu Yang, Zheng Lin, Hongyin Tang, Huan Liu, Xiaolei Huang, Yanan Cao, Jingang Wang, Weiping Wang
- Abstract summary: We present an innovative framework named Sensible Empathetic Dialogue Generation with Visionary Commonsense Knowledge (Sibyl)
Designed to concentrate on the imminent dialogue future, this paradigm directs LLMs toward the implicit requirements of the conversation.
Experimental results demonstrate that incorporating our paradigm for acquiring commonsense knowledge into LLMs comprehensively enhances the quality of their responses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been a heightened interest in building chatbots based on Large Language Models (LLMs) to emulate human-like qualities in dialogues, including expressing empathy and offering emotional support. Despite having access to commonsense knowledge to better understand the psychological aspects and causality of dialogue context, even these powerful LLMs struggle to achieve the goals of empathy and emotional support. As current approaches do not adequately anticipate dialogue future, they may mislead language models to ignore complex dialogue goals of empathy and emotional support, resulting in unsupportive responses lacking empathy. To address this issue, we present an innovative framework named Sensible Empathetic Dialogue Generation with Visionary Commonsense Knowledge (Sibyl). Designed to concentrate on the imminent dialogue future, this paradigm directs LLMs toward the implicit requirements of the conversation, aiming to provide more sensible responses. Experimental results demonstrate that incorporating our paradigm for acquiring commonsense knowledge into LLMs comprehensively enhances the quality of their responses.
Related papers
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Data Augmentation of Multi-turn Psychological Dialogue via Knowledge-driven Progressive Thought Prompting
Large language models (LLMs) have simplified the implementation of multi-turn dialogues.
It remains challenging to deliver satisfactory performance in low-resource domains, such as psychological dialogue.
We propose a knowledge-driven progressive thought prompting method to guide LLM to generate psychology-related dialogue.
arXiv Detail & Related papers (2024-06-24T12:02:56Z)
- An Iterative Associative Memory Model for Empathetic Response Generation [22.68709119989059]
Empathetic response generation aims to comprehend the cognitive and emotional states in dialogue utterances.
We propose an Iterative Associative Memory Model (IAMM) for empathetic response generation.
arXiv Detail & Related papers (2024-02-28T00:49:06Z)
- Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue [73.69510478736483]
Large language models (LLMs) can generate fluent, coherent, and diverse responses.
However, they lack a crucial ability: communication skills.
This article aims to empower LLMs with communication skills through inner monologues.
Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.
arXiv Detail & Related papers (2023-11-13T16:19:42Z)
- SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations [19.11368665202549]
When large language models are applied in the field of psychological counseling, they often rush to provide universal advice.
We constructed a multi-turn empathetic conversation dataset of more than 2 million samples.
Experiments have shown that the empathy ability of LLMs can be significantly enhanced when fine-tuned on multi-turn dialogue history.
arXiv Detail & Related papers (2023-11-01T03:49:52Z)
- Affect Recognition in Conversations Using Large Language Models [9.689990547610664]
Affect recognition plays a pivotal role in human communication.
This study investigates the capacity of large language models (LLMs) to recognise human affect in conversations.
arXiv Detail & Related papers (2023-09-22T14:11:23Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration [72.04629217161656]
This work focuses on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues.
To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme.
arXiv Detail & Related papers (2023-05-23T02:49:35Z)
- Knowledge Bridging for Empathetic Dialogue Generation [52.39868458154947]
A lack of external knowledge makes it difficult for empathetic dialogue systems to perceive implicit emotions and learn emotional interactions from limited dialogue history.
We propose to leverage external knowledge, including commonsense knowledge and emotional lexical knowledge, to explicitly understand and express emotions in empathetic dialogue generation.
arXiv Detail & Related papers (2020-09-21T09:21:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.