A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts
- URL: http://arxiv.org/abs/2509.19515v3
- Date: Mon, 13 Oct 2025 18:34:32 GMT
- Title: A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts
- Authors: Rose E. Guingrich, Michael S. A. Graziano
- Abstract summary: Concerns that companion chatbots may harm or replace real human relationships have been raised. This study examined the impact of human-AI interaction on human-human social outcomes.
- Score: 0.061386715480643554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many Large Language Model (LLM) chatbots are designed and used for companionship, and people have reported forming friendships, mentorships, and romantic partnerships with them. Concerns that companion chatbots may harm or replace real human relationships have been raised, but whether and how these social consequences occur remains unclear. In the present longitudinal study ($N = 183$), participants were randomly assigned to a chatbot condition (text chat with a companion chatbot) or to a control condition (text-based word games) for 10 minutes a day for 21 days. Participants also completed four surveys during the 21 days and engaged in audio recorded interviews on day 1 and 21. Overall, social health and relationships were not significantly impacted by companion chatbot interactions across 21 days of use. However, a detailed analysis showed a different story. People who had a higher desire to socially connect also tended to anthropomorphize the chatbot more, attributing humanlike properties to it; and those who anthropomorphized the chatbot more also reported that talking to the chatbot had a greater impact on their social interactions and relationships with family and friends. Via a mediation analysis, our results suggest a key mechanism at work: the impact of human-AI interaction on human-human social outcomes is mediated by the extent to which people anthropomorphize the AI agent, which is in turn motivated by a desire to socially connect. In a world where the desire to socially connect is on the rise, this finding may be cause for concern.
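The abstract's key mechanism rests on a mediation analysis: desire to connect predicts anthropomorphism, which in turn predicts reported social impact. A minimal sketch of how an indirect effect of this kind is typically estimated (ordinary least squares paths plus a bootstrap confidence interval) is shown below on synthetic data. The variable names and effect sizes are illustrative assumptions, not the study's actual data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (NOT the study's data):
# x = desire to socially connect, m = anthropomorphism, y = reported social impact.
n = 183
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # a path: X -> M
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # b path: M -> Y, plus direct effect c'

def ols_slope(pred, resp):
    """Coefficient on the last predictor in an OLS fit of resp on [1, pred]."""
    X = np.column_stack([np.ones(len(resp)), pred])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return beta[-1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)                          # X -> M
    b = ols_slope(np.column_stack([x, m]), y)    # M -> Y, controlling for X
    return a * b                                  # mediated (indirect) effect

# Percentile bootstrap CI for the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI excluding zero is the usual evidence that the effect of X on Y runs through the mediator M, which is the shape of the claim made in the abstract.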
Related papers
- Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good [0.9831489366502302]
We suggest sociology can advance understanding of human-chatbot interaction. We offer four sociological theories to enhance extant work in this field. We discuss the value of applying sociological theories for advancing theorizing about human-chatbot interaction.
arXiv Detail & Related papers (2025-07-07T14:12:03Z) - Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots [9.230015338626659]
We examine how three anthropomorphic cues (human-like identity, emotional expression, and non-verbal expression) influence human empathy toward chatbots. We also explore people's own interpretations of their prosocial behaviors toward chatbots.
arXiv Detail & Related papers (2025-06-25T18:16:14Z) - The Human Robot Social Interaction (HSRI) Dataset: Benchmarking Foundational Models' Social Reasoning [49.32390524168273]
Our work aims to advance the social reasoning of embodied artificial intelligence (AI) agents in real-world social interactions. We introduce a large-scale real-world Human Robot Social Interaction (HSRI) dataset to benchmark the capabilities of language models (LMs) and foundational models (FMs). Our dataset consists of 400 real-world human social robot interaction videos and over 10K annotations, detailing the robot's social errors, competencies, rationale, and corrective actions.
arXiv Detail & Related papers (2025-04-07T06:27:02Z) - Will you donate money to a chatbot? The effect of chatbot anthropomorphic features and persuasion strategies on willingness to donate [4.431473323414383]
We investigate the effect of personification and persuasion strategies on users' perceptions and donation likelihood. Results suggest that interaction with a personified chatbot evokes perceived anthropomorphism; however, it does not elicit greater willingness to donate. In fact, we found that commonly used anthropomorphic features, like name and narrative, led to negative attitudes toward an AI agent in the donation context.
arXiv Detail & Related papers (2024-12-28T02:17:46Z) - Empathetic Response in Audio-Visual Conversations Using Emotion Preference Optimization and MambaCompressor [44.499778745131046]
Our study introduces a dual approach: firstly, we employ Emotional Preference Optimization (EPO) to train chatbots. This training enables the model to discern fine distinctions between correct and counter-emotional responses. Secondly, we introduce MambaCompressor to effectively compress and manage extensive conversation histories. Our comprehensive experiments across multiple datasets demonstrate that our model significantly outperforms existing models in generating empathetic responses and managing lengthy dialogues.
arXiv Detail & Related papers (2024-12-23T13:44:51Z) - Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines [0.0]
We studied people who regularly used companion chatbots and people who did not use them. Contrary to expectations, companion users indicated that these relationships were beneficial to their social health. We found the opposite: perceiving companion chatbots as more conscious and humanlike correlated with more positive opinions and more pronounced social health benefits.
arXiv Detail & Related papers (2023-11-17T15:53:59Z) - PLACES: Prompting Language Models for Social Conversation Synthesis [103.94325597273316]
We use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset using prompting.
We perform several thorough evaluations of our synthetic conversations compared to human-collected conversations.
arXiv Detail & Related papers (2023-02-07T05:48:16Z) - Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, that aids CheerBots during training by accounting for anticipated changes in the user's emotional state, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z) - "Love is as Complex as Math": Metaphor Generation System for Social Chatbot [13.128146708018438]
We investigate the use of metaphor, a rhetorical device commonly used by humans, in social chatbots.
Our work first designs a metaphor generation framework, which generates topic-aware and novel figurative sentences.
Human annotators validate the novelty and properness of the generated metaphors.
arXiv Detail & Related papers (2020-01-03T05:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.