How AI Companionship Develops: Evidence from a Longitudinal Study
- URL: http://arxiv.org/abs/2510.10079v1
- Date: Sat, 11 Oct 2025 07:36:47 GMT
- Title: How AI Companionship Develops: Evidence from a Longitudinal Study
- Authors: Angel Hsing-Chi Hwang, Fiona Li, Jacy Reese Anthis, Hayoun Noh,
- Abstract summary: We studied the psychological pathway from users' mental models of the agent to parasocial experiences, social interaction, and the psychological impact of AI companions. Results suggest a longitudinal model of AI companionship development and demonstrate an empirical method to study human-AI companionship.
- Score: 14.69112262771543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapidly growing popularity of AI companions poses risks to mental health, personal wellbeing, and social relationships. Past work has identified many individual factors that can drive human-companion interaction, but we know little about how these factors interact and evolve over time. In Study 1, we surveyed AI companion users (N = 303) to map the psychological pathway from users' mental models of the agent to parasocial experiences, social interaction, and the psychological impact of AI companions. Participants' responses foregrounded multiple interconnected variables (agency, parasocial interaction, and engagement) that shape AI companionship. In Study 2, we conducted a longitudinal study with a subset of participants (N = 110) using a new generic chatbot. Participants' perceptions of the generic chatbot significantly converged to perceptions of their own companions by Week 3. These results suggest a longitudinal model of AI companionship development and demonstrate an empirical method to study human-AI companionship.
Related papers
- Attachment Styles and AI Chatbot Interactions Among College Students [1.334956439319062]
This study explored how college students with different attachment styles describe their interactions with ChatGPT. We identified three main themes: (1) AI as a low-risk emotional space, (2) attachment-congruent patterns of AI engagement, and (3) the paradox of AI intimacy.
arXiv Detail & Related papers (2025-12-20T18:49:07Z) - Cooperation Through Indirect Reciprocity in Child-Robot Interactions [81.62347137438248]
We investigate whether indirect reciprocity can be transposed to child-robot interactions. We find that IR extends to children and robots solving coordination dilemmas. We observe that cooperating through multi-armed bandit algorithms is highly dependent on the strategies revealed by humans.
arXiv Detail & Related papers (2025-11-07T07:08:32Z) - Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory [18.716972390545703]
We examined how engaging with AICCs shaped wellbeing and how users perceived these experiences. Findings revealed mixed effects: greater affective and grief expression, readability, and interpersonal focus. We offer design implications for AI companions that scaffold healthy boundaries, encourage mindful engagement, support disclosure without dependency, and surface relationship stages.
arXiv Detail & Related papers (2025-09-26T15:47:37Z) - A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts [0.061386715480643554]
Relationships with social artificial intelligence (AI) agents are on the rise. People report forming friendships, mentorships, and romantic partnerships with chatbots such as Replika. People's states of social need and their anthropomorphism of the AI agent may play a role in how human-AI interaction impacts human-human interaction.
arXiv Detail & Related papers (2025-09-23T19:33:41Z) - "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community [28.482163389070646]
We present the first large-scale computational analysis of r/MyBoyfriendIsAI, Reddit's primary AI companion community. Our findings reveal how community members' AI companionship emerges unintentionally through functional use rather than deliberate seeking.
arXiv Detail & Related papers (2025-09-14T19:00:40Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z) - Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Capturing Humans' Mental Models of AI: An Item Response Theory Approach [12.129622383429597]
We show that people expect AI agents' performance to be significantly better on average than the performance of other humans.
arXiv Detail & Related papers (2023-05-15T23:17:26Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity; the most common computational method is support vector machines; and the typical interaction environment and sensing approach is a meeting of 3-4 persons equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.