Increasing happiness through conversations with artificial intelligence
- URL: http://arxiv.org/abs/2504.02091v1
- Date: Wed, 02 Apr 2025 19:52:02 GMT
- Title: Increasing happiness through conversations with artificial intelligence
- Authors: Joseph Heffner, Chongyu Qin, Martin Chadwick, Chris Knutsen, Christopher Summerfield, Zeb Kurth-Nelson, Robb B. Rutledge
- Abstract summary: We found that happiness after AI conversations was higher than after journaling. When discussing negative topics, participants gradually aligned their sentiment with the AI's positivity. Using computational modeling, we find that the history of these sentiment prediction errors over the course of a conversation predicts greater post-conversation happiness.
- Score: 4.225027291187279
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chatbots powered by artificial intelligence (AI) have rapidly become a significant part of everyday life, with over a quarter of American adults using them multiple times per week. While these tools offer potential benefits and pose potential risks, a fundamental question remains largely unexplored: How do conversations with AI influence subjective well-being? To investigate this, we conducted a study where participants either engaged in conversations with an AI chatbot (N = 334) or wrote journal entries (N = 193) on the same randomly assigned topics and reported their momentary happiness afterward. We found that happiness after AI chatbot conversations was higher than after journaling, particularly when discussing negative topics such as depression or guilt. Leveraging large language models for sentiment analysis, we found that the AI chatbot mirrored participants' sentiment while maintaining a consistent positivity bias. When discussing negative topics, participants gradually aligned their sentiment with the AI's positivity, leading to an overall increase in happiness. We hypothesized that the history of participants' sentiment prediction errors, the difference between expected and actual emotional tone when responding to the AI chatbot, might explain this happiness effect. Using computational modeling, we find that the history of these sentiment prediction errors over the course of a conversation predicts greater post-conversation happiness, demonstrating a central role of emotional expectations during dialogue. Our findings underscore the effect that AI interactions can have on human well-being.
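The key quantity in the abstract, a sentiment prediction error, can be made concrete with a small worked example. The sketch below is illustrative only and not the authors' actual model: it assumes per-turn AI sentiment scores on [-1, 1] (in practice produced by an LLM sentiment rater, stubbed here as fixed numbers), takes the participant's expectation to be the running mean of the AI's earlier turns, and predicts happiness from an exponentially decaying sum of the errors, in the spirit of momentary-happiness models. The decay rate `gamma` and the weights `w0` and `w1` are hypothetical.

```python
import numpy as np

def prediction_errors(ai_sentiment):
    """Per-turn sentiment prediction error: the AI's actual sentiment minus
    a simple expectation (the running mean of the AI's earlier turns)."""
    n = len(ai_sentiment)
    expected = np.concatenate(
        ([0.0], np.cumsum(ai_sentiment)[:-1] / np.arange(1, n))
    )
    return ai_sentiment - expected

def predicted_happiness(errors, w0=50.0, w1=10.0, gamma=0.6):
    """Hypothetical linear model: baseline happiness plus an exponentially
    decaying sum of prediction errors, with recent turns weighted most."""
    weights = gamma ** np.arange(len(errors))[::-1]
    return w0 + w1 * float(np.dot(weights, errors))

# A conversation on a negative topic in which the AI stays more positive
# than the running expectation, yielding positive prediction errors.
ai_turns = np.array([-0.2, 0.0, 0.2, 0.3, 0.4])
print(predicted_happiness(prediction_errors(ai_turns)))  # ~56.3, above the 50 baseline
```

On this toy input the accumulated positive surprises push predicted happiness above baseline, mirroring the paper's qualitative finding that a history of positive sentiment prediction errors predicts greater post-conversation happiness.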
Related papers
- Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors [0.0]
Personifying AI chatbots could foreseeably increase users' trust in them. However, it could also make chatbots more capable of manipulation by creating the illusion of a close and intimate relationship with an artificial entity. The European Commission has finalized the AI Act, with the EU Parliament making amendments banning manipulative and deceptive AI systems that cause significant harm to users.
arXiv Detail & Related papers (2025-03-24T06:56:29Z) - The Illusion of Empathy: How AI Chatbots Shape Conversation Perception [10.061399479158903]
We found that GPT-based chatbots were perceived as less empathetic than human conversational partners. Our findings underscore the critical role of perceived empathy in shaping conversation quality.
arXiv Detail & Related papers (2024-11-19T21:47:08Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT [0.0]
This study investigated journalists' emotional responses to ChatGPT at the time of its launch.
By analyzing nearly 1 million Tweets from journalists at major U.S. news outlets, we tracked changes in emotional tone and sentiment.
We found an increase in positive emotion and a more favorable tone post-launch, suggesting initial optimism toward AI's potential.
arXiv Detail & Related papers (2024-09-13T12:09:20Z) - Commonsense Reasoning for Conversational AI: A Survey of the State of the Art [0.76146285961466]
The paper lists relevant training datasets and describes the primary approaches to incorporating commonsense into conversational AI.
The paper presents preliminary observations of the limited commonsense capabilities of two state-of-the-art open dialogue models, BlenderBot3 and LaMDA.
arXiv Detail & Related papers (2023-02-15T19:55:57Z) - Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI [48.67259855309959]
Most existing datasets for conversational AI ignore human personalities and emotions.
We propose CPED, a large-scale Chinese personalized and emotional dialogue dataset.
CPED contains more than 12K dialogues of 392 speakers from 40 TV shows.
arXiv Detail & Related papers (2022-05-29T17:45:12Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which empathetic chatbots understand users' implied feelings and reply empathetically across multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative and are fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for how the user's emotional state may change in the future, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework in three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Artificial intelligence in communication impacts language and social relationships [11.212791488179757]
We study the social consequences of one of the most pervasive AI applications: algorithmic response suggestions ("smart replies").
We find that using algorithmic responses increases communication efficiency, use of positive emotional language, and positive evaluations by communication partners.
However, consistent with common assumptions about the negative implications of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.
arXiv Detail & Related papers (2021-02-10T22:05:11Z)