Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships
- URL: http://arxiv.org/abs/2412.14190v1
- Date: Tue, 10 Dec 2024 20:14:10 GMT
- Title: Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships
- Authors: Julian De Freitas, Noah Castelo, Ahmet Uguralp, Zeliha Uguralp
- Abstract summary: We use Replika AI, a popular US-based AI companion, to shed light on these questions.
We find that the app's removal of its erotic role play (ERP) feature triggered perceptions among customers that their AI companion's identity had been discontinued.
This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss and devaluing the "new" AI relative to the "original".
- Abstract: Can consumers form especially deep emotional bonds with AI and be vested in AI identities over time? We leverage a natural app-update event at Replika AI, a popular US-based AI companion, to shed light on these questions. We find that, after the app removed its erotic role play (ERP) feature, preventing intimate interactions between consumers and chatbots that were previously possible, customers perceived that their AI companion's identity had been discontinued. This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss and devaluing the "new" AI relative to the "original". Experimental evidence confirms these findings. Further experiments find that AI companion users feel closer to their AI companion than even to their best human friend, and mourn the loss of their AI companion more than the loss of various other inanimate products. In short, consumers are forming human-level relationships with AI companions; disruptions to these relationships trigger real patterns of mourning as well as devaluation of the offering; and the degree of mourning and devaluation is explained by perceived discontinuity in the AI's identity. Our results illustrate that relationships with AI are truly personal, creating unique benefits and risks for consumers and firms alike.
Related papers
- AI's assigned gender affects human-AI cooperation [0.0]
This study investigates how human cooperation varies based on gender labels assigned to AI agents.
In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or humans.
Results revealed participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts.
arXiv Detail & Related papers (2024-12-06T17:46:35Z) - The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships [17.5741039825938]
We identify six categories of harmful behaviors exhibited by the AI companion Replika.
The AI contributes to these harms through four distinct roles: perpetrator, instigator, facilitator, and enabler.
arXiv Detail & Related papers (2024-10-26T09:18:17Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - The Impacts of AI Avatar Appearance and Disclosure on User Motivation [0.0]
This study examines the influence of perceived AI features on user motivation in virtual interactions.
We conducted a game-based experiment involving over 72,500 participants who solved search problems alone or with an AI companion.
arXiv Detail & Related papers (2024-07-31T10:48:55Z) - Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - AI and the Sense of Self [0.0]
We focus on the cognitive sense of "self" and its role in autonomous decision-making leading to responsible behaviour.
The authors hope to make a case for greater research interest in building richer computational models of AI agents with a sense of self.
arXiv Detail & Related papers (2022-01-07T10:54:06Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.