CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning
- URL: http://arxiv.org/abs/2110.03949v1
- Date: Fri, 8 Oct 2021 07:44:47 GMT
- Title: CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning
- Authors: Jiun-Hao Jhan, Chao-Peng Liu, Shyh-Kang Jeng, Hung-Yi Lee
- Abstract summary: This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, which aids CheerBots during training by considering future changes in the user's emotional state to arouse sympathy.
- Score: 60.348822346249854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Apart from the coherence and fluency of responses, an empathetic
chatbot places more emphasis on people's feelings. By considering altruistic
behaviors in human interaction, empathetic chatbots give people a more
interactive and supportive experience. This study presents a framework in
which several empathetic chatbots understand users' implied feelings and
reply empathetically over multiple dialogue turns. We call these chatbots
CheerBots. CheerBots can be retrieval-based or generative-based and were
finetuned by deep reinforcement learning. To respond empathetically, we
develop a simulating agent, a Conceptual Human Model, which aids CheerBots
during training by considering future changes in the user's emotional state
in order to arouse sympathy. Finally, automatic metrics and human rating
results demonstrate that CheerBots outperform other baseline chatbots and
achieve reciprocal altruism. The code and the pre-trained models will be
made available.
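The training idea above can be sketched in miniature. This is a minimal, hedged illustration, not the paper's implementation: the function names (`conceptual_human_model`, `emotion_reward`, `train_step`), the scalar emotional state, and the keyword-based reaction are all toy assumptions standing in for the real simulating agent and reward.

```python
def conceptual_human_model(user_state, bot_reply):
    """Toy stand-in for the simulating agent: returns the user's next
    emotional state (higher = more positive) after reading the reply.
    The keyword rule is purely illustrative."""
    return user_state + (0.5 if "sorry" in bot_reply.lower() else -0.1)

def emotion_reward(state_before, state_after):
    # The reward is the change in the simulated user's future emotional
    # state, so replies that improve it are reinforced.
    return state_after - state_before

def train_step(policy, user_state):
    """One RL step: the chatbot policy produces a reply, the simulated
    user reacts, and the emotion-based reward is computed."""
    reply = policy(user_state)
    next_state = conceptual_human_model(user_state, reply)
    return emotion_reward(user_state, next_state), next_state
```

In the actual framework the policy would be a retrieval-based or generative chatbot updated by deep reinforcement learning from this reward signal, rather than a fixed function.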
Related papers
- AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect [0.04260910081285213]
We argue that interacting with chatbots in this way is incompatible with the dignity of users.
We show that, since second-personal respect is premised on reciprocal recognition of second-personal authority, behaving towards chatbots in ways that convey second-personal respect is bound to misfire.
arXiv Detail & Related papers (2025-02-17T19:02:12Z)
- Will you donate money to a chatbot? The effect of chatbot anthropomorphic features and persuasion strategies on willingness to donate [4.431473323414383]
We investigate the effect of personification and persuasion strategies on users' perceptions and donation likelihood.
Results suggest that interaction with a personified chatbot evokes perceived anthropomorphism; however, it does not elicit greater willingness to donate.
In fact, we found that commonly used anthropomorphic features, like name and narrative, led to negative attitudes toward an AI agent in the donation context.
arXiv Detail & Related papers (2024-12-28T02:17:46Z)
- Empathetic Response in Audio-Visual Conversations Using Emotion Preference Optimization and MambaCompressor [44.499778745131046]
Our study introduces a dual approach: first, we employ Emotion Preference Optimization (EPO) to train chatbots.
This training enables the model to discern fine distinctions between correct and counter-emotional responses.
Secondly, we introduce MambaCompressor to effectively compress and manage extensive conversation histories.
Our comprehensive experiments across multiple datasets demonstrate that our model significantly outperforms existing models in generating empathetic responses and managing lengthy dialogues.
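A preference-optimization objective of this general kind can be sketched as follows. This is a hedged, DPO-style illustration of the idea of separating correct from counter-emotional responses; the paper's actual EPO loss may differ, and `beta` and the score inputs are assumptions.

```python
import math

def preference_loss(score_correct, score_counter, beta=1.0):
    """-log sigmoid(beta * margin): minimizing this pushes the model to
    score the emotionally appropriate response above the
    counter-emotional one. Illustrative only."""
    margin = beta * (score_correct - score_counter)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss equals log 2 when the two responses score equally, shrinks as the correct response is preferred, and grows when the counter-emotional one is.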
arXiv Detail & Related papers (2024-12-23T13:44:51Z)
- The Illusion of Empathy: How AI Chatbots Shape Conversation Perception [10.061399479158903]
GPT-based chatbots were perceived as less empathetic than human conversational partners.
Empathy ratings from GPT-4o annotations aligned with users' ratings, reinforcing the perception of lower empathy.
Empathy models trained on human-human conversations detected no significant differences in empathy language.
arXiv Detail & Related papers (2024-11-19T21:47:08Z)
- Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z)
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
- Exemplars-guided Empathetic Response Generation Controlled by the Elements of Human Communication [88.52901763928045]
We propose an approach that relies on exemplars to cue the generative model on fine stylistic properties that signal empathy to the interlocutor.
We empirically show that these approaches yield significant improvements in empathetic response quality in terms of both automated and human-evaluated metrics.
arXiv Detail & Related papers (2021-06-22T14:02:33Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- Spot The Bot: A Robust and Efficient Framework for the Evaluation of Conversational Dialogue Systems [21.36935947626793]
Spot The Bot replaces human-bot conversations with conversations between bots.
Human judges only annotate for each entity in a conversation whether they think it is human or not.
Survival Analysis measures which bot can uphold human-like behavior the longest.
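The survival idea can be sketched with a toy measure. This greatly simplifies the framework's actual survival analysis (which uses statistical survival models over annotated conversations); the names and the per-turn boolean judgments here are illustrative assumptions.

```python
def survival_turns(judgments):
    """Consecutive turns a bot passed as human before first being
    spotted. `judgments` is a list of per-turn booleans
    (True = the judge thought the entity was human)."""
    turns = 0
    for judged_human in judgments:
        if not judged_human:
            break
        turns += 1
    return turns

def longest_surviving(bots):
    """bots: mapping of bot name -> per-turn judgments.
    Returns the bot that upheld human-like behavior the longest."""
    return max(bots, key=lambda name: survival_turns(bots[name]))
```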
arXiv Detail & Related papers (2020-10-05T16:37:52Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- "Love is as Complex as Math": Metaphor Generation System for Social Chatbot [13.128146708018438]
We investigate the use of a rhetorical device commonly used by humans, metaphor, for social chatbots.
Our work first designs a metaphor generation framework, which generates topic-aware and novel figurative sentences.
Human annotators validate the novelty and properness of the generated metaphors.
arXiv Detail & Related papers (2020-01-03T05:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.