Chatting with Confidants or Corporations? Privacy Management with AI Companions
- URL: http://arxiv.org/abs/2601.10754v1
- Date: Tue, 13 Jan 2026 15:15:37 GMT
- Title: Chatting with Confidants or Corporations? Privacy Management with AI Companions
- Authors: Hsuen-Chi Chiu, Jeremy Foote
- Abstract summary: We conducted in-depth interviews with fifteen users of companion AI platforms such as Replika and Character.AI. Our findings reveal that users blend interpersonal habits with institutional awareness. Many feel uncertain or powerless regarding platform-level data control.
- Score: 2.1485350418225244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI chatbots designed as emotional companions blur the boundaries between interpersonal intimacy and institutional software, creating a complex, multi-dimensional privacy environment. Drawing on Communication Privacy Management theory and Masur's horizontal (user-AI) and vertical (user-platform) privacy framework, we conducted in-depth interviews with fifteen users of companion AI platforms such as Replika and Character.AI. Our findings reveal that users blend interpersonal habits with institutional awareness: while the non-judgmental, always-available nature of chatbots fosters emotional safety and encourages self-disclosure, users remain mindful of institutional risks and actively manage privacy through layered strategies and selective sharing. Despite this, many feel uncertain or powerless regarding platform-level data control. Anthropomorphic design further blurs privacy boundaries, sometimes leading to unintentional oversharing and privacy turbulence. These results extend privacy theory by highlighting the unique interplay of emotional and institutional privacy management in human-AI companionship.
Related papers
- Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency [8.378494376906334]
This work investigates privacy in human-AI romantic relationships through an interview study (N=17). We found that these relationships took varied forms, from one-to-one to one-to-many, and were shaped by multiple actors, including creators, platforms, and moderators. As intimacy deepened, these boundaries became more permeable, though some participants voiced concerns such as conversation exposure and sought to preserve anonymity.
arXiv Detail & Related papers (2026-01-23T15:23:37Z) - PrivacyBench: A Conversational Benchmark for Evaluating Privacy in Personalized AI [8.799432439533211]
AI agents rely on access to a user's digital footprint, which often includes sensitive data from private emails, chats, and purchase histories. This access creates a fundamental societal and privacy risk. We introduce PrivacyBench, a benchmark with socially grounded datasets containing embedded secrets.
arXiv Detail & Related papers (2025-12-31T13:16:45Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars in context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure [40.47039082007319]
Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users. We propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.
arXiv Detail & Related papers (2024-09-26T08:45:15Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - User Privacy Harms and Risks in Conversational AI: A Proposed Framework [1.8416014644193066]
This study identifies 9 privacy harms and 9 privacy risks in text-based interactions.
The aim is to offer developers, policymakers, and researchers a tool for responsible and secure implementation of conversational AI.
arXiv Detail & Related papers (2024-02-15T05:21:58Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z) - Differentially Private Multi-Agent Planning for Logistic-like Problems [70.3758644421664]
This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
arXiv Detail & Related papers (2020-08-16T03:43:09Z) - More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z) - Dis-Empowerment Online: An Investigation of Privacy-Sharing Perceptions & Method Preferences [6.09170287691728]
We find that perception of privacy empowerment differs from that of sharing across dimensions of meaningfulness, competence and choice.
We find similarities and differences in privacy method preference between the US, UK and Germany.
By mapping the perception of privacy dis-empowerment into patterns of privacy behavior online, this paper provides an important foundation for future research.
arXiv Detail & Related papers (2020-03-19T19:17:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.