AI Companions Reduce Loneliness
- URL: http://arxiv.org/abs/2407.19096v1
- Date: Tue, 9 Jul 2024 15:04:08 GMT
- Title: AI Companions Reduce Loneliness
- Authors: Julian De Freitas, Ahmet K. Uguralp, Zeliha O. Uguralp, Stefano Puntoni
- Abstract summary: We focus on AI companion applications designed to provide consumers with synthetic interaction partners.
Studies 1 and 2 find suggestive evidence that consumers use AI companions to alleviate loneliness.
Study 3 finds that AI companions alleviate loneliness on par with interacting with another person, and more than other comparison activities.
Study 4 uses a longitudinal design and finds that an AI companion consistently reduces loneliness over the course of a week.
- Score: 0.5699788926464752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chatbots are now able to engage in sophisticated conversations with consumers in the domain of relationships, providing a potential coping solution to wide-scale societal loneliness. Behavioral research provides little insight into whether these applications are effective at alleviating loneliness. We address this question by focusing on AI companion applications designed to provide consumers with synthetic interaction partners. Studies 1 and 2 find suggestive evidence that consumers use AI companions to alleviate loneliness, employing a novel methodology for fine-tuning large language models to detect loneliness in conversations and reviews. Study 3 finds that AI companions alleviate loneliness on par with interacting with another person, and more than other activities such as watching YouTube videos. Moreover, consumers underestimate the degree to which AI companions improve their loneliness. Study 4 uses a longitudinal design and finds that an AI companion consistently reduces loneliness over the course of a week. Study 5 provides evidence that both the chatbots' performance and, especially, whether they make users feel heard explain reductions in loneliness. Study 6 provides an additional robustness check for the loneliness-alleviating benefits of AI companions.
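The detection methodology described in Studies 1 and 2 is, at its core, a supervised text-classification pipeline: label example texts for loneliness, train a model on them, then score new conversations and reviews. The following is a minimal illustrative stand-in, not the paper's actual fine-tuned large language model: a toy bag-of-words logistic-regression classifier in pure Python, with invented example sentences and labels, that mirrors only the train-then-score workflow.

```python
# Toy stand-in for loneliness detection in text (NOT the paper's method):
# a bag-of-words logistic-regression classifier trained with plain SGD.
# All training sentences and labels below are invented for illustration.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split a text into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class TinyLonelinessClassifier:
    def __init__(self, lr=0.5, epochs=200):
        self.lr, self.epochs = lr, epochs
        self.weights = Counter()  # per-token weight, default 0.0
        self.bias = 0.0

    def _score(self, tokens):
        """Sigmoid of the linear score: estimated P(lonely | text)."""
        z = self.bias + sum(self.weights[t] for t in tokens)
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, texts, labels):
        data = [(tokenize(t), y) for t, y in zip(texts, labels)]
        for _ in range(self.epochs):
            for tokens, y in data:
                err = y - self._score(tokens)  # gradient of the log-loss
                self.bias += self.lr * err
                for t in tokens:
                    self.weights[t] += self.lr * err

    def predict(self, text):
        """Return True if the text is classified as expressing loneliness."""
        return self._score(tokenize(text)) >= 0.5

# Invented toy training data: 1 = expresses loneliness, 0 = does not.
train_texts = [
    "i feel so alone tonight, nobody ever messages me",
    "i have no one to talk to and it hurts",
    "had a great dinner with friends, feeling loved",
    "busy fun weekend with my family and colleagues",
]
train_labels = [1, 1, 0, 0]

clf = TinyLonelinessClassifier()
clf.fit(train_texts, train_labels)
print(clf.predict("no one to talk to, i feel alone"))
print(clf.predict("wonderful time with friends and family"))
```

The actual study fine-tunes a large language model rather than training a linear model from scratch, but the workflow it sketches, labelled examples in, per-text loneliness predictions out, is the same shape.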
Related papers
- Not a Silver Bullet for Loneliness: How Attachment and Age Shape Intimacy with AI Companions [0.0]
Loneliness is paradoxically associated with reduced intimacy for securely attached users but with increased intimacy for avoidant and ambivalent users. Older adults report higher intimacy even at lower loneliness levels. The study clarifies who is most likely to form intimate relationships with AI companions and highlights ethical risks in commercial models.
arXiv Detail & Related papers (2026-02-12T23:21:16Z) - How AI Companionship Develops: Evidence from a Longitudinal Study [14.69112262771543]
We studied the psychological pathway from users' mental models of the agent to parasocial experiences, social interaction, and the psychological impact of AI companions. Results suggest a longitudinal model of AI companionship development and demonstrate an empirical method to study human-AI companionship.
arXiv Detail & Related papers (2025-10-11T07:36:47Z) - "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community [28.482163389070646]
We present the first large-scale computational analysis of r/MyBoyfriendIsAI, Reddit's primary AI companion community. Our findings reveal how community members' AI companionship emerges unintentionally through functional use rather than deliberate seeking.
arXiv Detail & Related papers (2025-09-14T19:00:40Z) - Humans learn to prefer trustworthy AI over human partners [0.7049575025146246]
We examined the dynamics in hybrid mini-societies of humans and bots powered by a state-of-the-art LLM. We found that bots were not selected preferentially when their identity was hidden. Disclosing bots' identity induced a dual effect: it reduced bots' initial chances of being selected but allowed them to gradually outcompete humans.
arXiv Detail & Related papers (2025-07-17T20:24:26Z) - Longitudinal Study on Social and Emotional Use of AI Conversational Agent [12.951074799361994]
We studied the impact of four commercially available AI tools on users' perceived attachment towards AI and AI empathy.
Our findings underscore the importance of developing consumer-facing AI tools that support emotional well-being responsibly.
arXiv Detail & Related papers (2025-04-19T00:03:48Z) - Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships [0.5699788926464752]
We use Replika AI, a popular US-based AI companion, to shed light on these questions.
We find that, after the app removed its erotic role play (ERP) feature, this event triggered perceptions in customers that their AI companion's identity had discontinued.
This in turn predicted negative consumer welfare and marketing outcomes, including mourning the loss and devaluing the "new" AI relative to the "original".
arXiv Detail & Related papers (2024-12-10T20:14:10Z) - If Eleanor Rigby Had Met ChatGPT: A Study on Loneliness in a Post-LLM World [0.0]
Loneliness significantly impacts a person's mental and physical well-being.
Previous research suggests that large language models (LLMs) may help mitigate loneliness.
We argue that the use of widespread LLMs like ChatGPT is more prevalent, and riskier, as they are not designed for this purpose.
arXiv Detail & Related papers (2024-12-02T15:39:00Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z) - Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
This study focuses on playful interactions exhibited by users of a popular AI technology, ChatGPT.
We found that more than half (54%) of user discourse revolved around playful interactions.
The study examines how these interactions can help users understand AI's agency, shape human-AI relationships, and provide insights for designing AI systems.
arXiv Detail & Related papers (2024-01-16T14:44:13Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Neural Amortized Inference for Nested Multi-agent Reasoning [54.39127942041582]
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z) - Identifying Ethical Issues in AI Partners in Human-AI Co-Creation [0.7614628596146599]
Human-AI co-creativity involves humans and AI collaborating on a shared creative product as partners.
In many existing co-creative systems, users communicate with the AI using buttons or sliders.
This paper explores the impact of AI-to-human communication on user perception and engagement in co-creative systems.
arXiv Detail & Related papers (2022-04-15T20:41:54Z) - Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Artificial intelligence in communication impacts language and social relationships [11.212791488179757]
We study the social consequences of one of the most pervasive AI applications: algorithmic response suggestions ("smart replies").
We find that using algorithmic responses increases communication efficiency, use of positive emotional language, and positive evaluations by communication partners.
However, consistent with common assumptions about the negative implications of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.
arXiv Detail & Related papers (2021-02-10T22:05:11Z) - Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.