The Illusion of Empathy: How AI Chatbots Shape Conversation Perception
- URL: http://arxiv.org/abs/2411.12877v1
- Date: Tue, 19 Nov 2024 21:47:08 GMT
- Title: The Illusion of Empathy: How AI Chatbots Shape Conversation Perception
- Authors: Tingting Liu, Salvatore Giorgi, Ankit Aich, Allison Lahnala, Brenda Curtis, Lyle Ungar, João Sedoc
- Abstract summary: GPT-based chatbots were perceived as less empathetic than human conversational partners.
Empathy ratings from GPT-4o annotations aligned with users' ratings, reinforcing the perception of lower empathy.
Three out of five empathy models trained on human-human conversations detected no significant differences in empathy language between chatbots and humans.
- Score: 10.061399479158903
- Abstract: As AI chatbots become more human-like by incorporating empathy, understanding user-centered perceptions of chatbot empathy and its impact on conversation quality remains essential yet under-explored. This study examines how chatbot identity and perceived empathy influence users' overall conversation experience. Analyzing 155 conversations from two datasets, we found that while GPT-based chatbots were rated significantly higher in conversational quality, they were consistently perceived as less empathetic than human conversational partners. Empathy ratings from GPT-4o annotations aligned with users' ratings, reinforcing the perception of lower empathy in chatbots. In contrast, 3 out of 5 empathy models trained on human-human conversations detected no significant differences in empathy language between chatbots and humans. Our findings underscore the critical role of perceived empathy in shaping conversation quality, revealing that achieving high-quality human-AI interactions requires more than simply embedding empathetic language; it necessitates addressing the nuanced ways users interpret and experience empathy in conversations with chatbots.
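To make the contrast the abstract reports concrete, the hypothetical sketch below compares per-conversation empathy ratings for chatbot and human partners with a Mann-Whitney U test. The data, the 1-5 rating scale, and the choice of test are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# A minimal, hypothetical sketch of the paper's core comparison: per-conversation
# user empathy ratings for chatbot vs. human partners, tested for a significant
# difference. The data, rating scale, and test choice are illustrative
# assumptions, not the authors' pipeline.
from scipy.stats import mannwhitneyu

# Illustrative user ratings (1-5 scale), one value per conversation.
chatbot_empathy = [2, 3, 2, 3, 2, 3, 2]
human_empathy = [4, 3, 4, 5, 4, 3, 4]

stat, p = mannwhitneyu(chatbot_empathy, human_empathy, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.4f}")  # p < 0.05 would indicate a perceived-empathy gap
```

The same comparison could be repeated with GPT-4o-annotated ratings in place of user ratings to check whether the two rating sources agree, mirroring the alignment the abstract describes.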
Related papers
- Exploring ChatGPT's Empathic Abilities [0.138120109831448]
This study investigates the extent to which ChatGPT, based on GPT-3.5, can exhibit empathetic responses and emotional expressions.
In 91.7% of cases, ChatGPT correctly identified emotions and produced appropriate answers.
In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases.
arXiv Detail & Related papers (2023-08-07T12:23:07Z)
- Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z)
- A Deep Learning Approach to Integrate Human-Level Understanding in a Chatbot [0.4632366780742501]
Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply within a fraction of a second.
We performed sentiment analysis, emotion detection, intent classification, and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence; a minimal sketch of such a pipeline follows this entry.
arXiv Detail & Related papers (2021-12-31T22:26:41Z)
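As a loose illustration of the multi-task NLU front end described above (not the paper's implementation), the Hugging Face `pipeline` API can assemble sentiment analysis and named-entity recognition in a few lines; the default checkpoints the library downloads are an assumption here, not the models used in the paper.

```python
# A loose illustration of a multi-task NLU front end for a chatbot, assuming the
# Hugging Face transformers library; the default checkpoints are an assumption,
# not the paper's models.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

utterance = "I'm really upset that my order from Acme never arrived."
print(sentiment(utterance))  # e.g. [{'label': 'NEGATIVE', 'score': ...}]
print(ner(utterance))        # e.g. [{'entity_group': 'ORG', 'word': 'Acme', ...}]
```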
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on the pretrained T5 transformer language model.
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation; a generation sketch with an off-the-shelf T5 follows this entry.
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
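EmpBot's fine-tuning data, prompt format, and sentiment objective are not given here; the following is only a minimal sketch of conditioning a stock T5 on a dialogue turn with the transformers library. The `t5-small` checkpoint and the prompt format are assumptions.

```python
# A minimal sketch of seq2seq response generation with a stock T5 checkpoint,
# assuming the Hugging Face transformers library. EmpBot's actual fine-tuning
# setup is not reproduced here.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # checkpoint is an assumption
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical prompt format: an instruction plus the user's turn as plain text.
context = "respond empathetically: I failed my driving test again today."
inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```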
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative and were fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by anticipating changes in the user's future emotional state in order to arouse sympathy; a reward-shaping sketch of this idea follows the entry.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
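The details of CheerBots' RL setup are not given here; the sketch below only illustrates the general idea of rewarding a candidate reply by the predicted improvement in a simulated user's emotional state. The valence model, user simulator, and candidates are all hypothetical stand-ins, not the paper's Conceptual Human Model.

```python
# A hypothetical reward-shaping sketch: score each candidate reply by how much a
# simulated user's emotional state is predicted to improve after hearing it.
# Every function here is a stand-in, not CheerBots' actual components.

def predicted_valence(user_state: str) -> float:
    """Stand-in for a learned model of the user's emotional valence in [-1, 1]."""
    lexicon = {"sad": -0.8, "upset": -0.6, "okay": 0.1, "relieved": 0.6, "happy": 0.8}
    return lexicon.get(user_state, 0.0)

def simulate_user(state: str, bot_reply: str) -> str:
    """Stand-in user simulator: empathetic wording nudges the state upward."""
    empathetic = any(w in bot_reply.lower() for w in ("sorry", "understand", "here for you"))
    return "relieved" if empathetic else state

def reward(state: str, bot_reply: str) -> float:
    """Reward = predicted change in user valence caused by the reply."""
    return predicted_valence(simulate_user(state, bot_reply)) - predicted_valence(state)

candidates = ["That's unfortunate.", "I'm so sorry, I understand how hard that is."]
state = "sad"
best = max(candidates, key=lambda r: reward(state, r))
print(best, reward(state, best))  # the empathetic reply earns the higher reward
```

In an actual RL loop this scalar would feed a policy-gradient update of the generator; here it only ranks candidates.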
- Exemplars-guided Empathetic Response Generation Controlled by the Elements of Human Communication [88.52901763928045]
We propose an approach that relies on exemplars to cue the generative model on fine stylistic properties that signal empathy to the interlocutor.
We empirically show that these approaches yield significant improvements in empathetic response quality in terms of both automated and human-evaluated metrics; a retrieval sketch follows this entry.
arXiv Detail & Related papers (2021-06-22T14:02:33Z)
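The paper's exemplar selection and conditioning mechanism are not specified here; as a rough illustration of the retrieval half of the idea, TF-IDF similarity can pick the empathetic exemplar closest to the user's utterance, which could then cue the generator, e.g. as a prompt prefix. The exemplar pool and similarity measure are invented for this sketch.

```python
# A rough sketch of exemplar retrieval, assuming scikit-learn; the exemplar pool
# and TF-IDF cosine similarity are illustrative choices, not the paper's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

exemplars = [
    "I'm so sorry you're going through this; losing a pet is heartbreaking.",
    "That sounds exhausting. It makes sense that you feel worn out.",
    "Congratulations! You worked hard for this and deserve it.",
]
user_utterance = "My dog passed away last night and I can't stop crying."

vectorizer = TfidfVectorizer().fit(exemplars + [user_utterance])
sims = cosine_similarity(
    vectorizer.transform([user_utterance]), vectorizer.transform(exemplars)
)[0]
best_exemplar = exemplars[sims.argmax()]
# The retrieved exemplar would guide the generator's style and wording.
print(best_exemplar)
```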
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics, demonstrating flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- Towards Persona-Based Empathetic Conversational Models [58.65492299237112]
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains.
In psychology, persona has been shown to be highly correlated with personality, which in turn influences empathy.
We propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding.
arXiv Detail & Related papers (2020-04-26T08:51:01Z)