Understanding Risk and Dependency in AI Chatbot Use from User Discourse
- URL: http://arxiv.org/abs/2602.09339v1
- Date: Tue, 10 Feb 2026 02:16:57 GMT
- Title: Understanding Risk and Dependency in AI Chatbot Use from User Discourse
- Authors: Jianfeng Zhu, Karin G. Coifman, Ruoming Jin
- Abstract summary: We present a large-scale computational thematic analysis of posts collected between 2023 and 2025 from two communities, r/AIDangers and r/ChatbotAddiction. We identify 14 recurring thematic categories and synthesize them into five higher-order experiential dimensions. Our findings reveal five empirically derived experiential dimensions of AI-related psychological risk grounded in real-world user discourse.
- Score: 4.1957094635667875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI systems are increasingly embedded in everyday life, yet empirical understanding of how psychological risk associated with AI use emerges, is experienced, and is regulated by users remains limited. We present a large-scale computational thematic analysis of posts collected between 2023 and 2025 from two Reddit communities, r/AIDangers and r/ChatbotAddiction, explicitly focused on AI-related harm and distress. Using a multi-agent, LLM-assisted thematic analysis grounded in Braun and Clarke's reflexive framework, we identify 14 recurring thematic categories and synthesize them into five higher-order experiential dimensions. To further characterize affective patterns, we apply emotion labeling using a BERT-based classifier and visualize emotional profiles across dimensions. Our findings reveal five empirically derived experiential dimensions of AI-related psychological risk grounded in real-world user discourse, with self-regulation difficulties emerging as the most prevalent and fear concentrated in concerns related to autonomy, control, and technical risk. These results provide early empirical evidence from lived user experience of how AI safety is perceived and emotionally experienced outside laboratory or speculative contexts, offering a foundation for future AI safety research, evaluation, and responsible governance.
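The pipeline above has a concrete, reproducible final stage: emotion labeling with a BERT-family classifier. The paper does not name its model, so the sketch below stands in with a publicly available DistilRoBERTa emotion classifier from the Hugging Face hub; the model choice, label set, and example posts are assumptions, not the authors' configuration.

```python
# Minimal sketch of the emotion-labeling stage, assuming a Hugging Face
# emotion classifier as a stand-in for the paper's unspecified BERT model.
from collections import Counter

from transformers import pipeline

# Hypothetical model choice; the paper only says "BERT-based classifier".
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for every emotion label
    truncation=True,
)

posts = [
    "I can't stop talking to the chatbot even when I want to log off.",
    "What scares me most is how much control these systems already have.",
]

# Tally the top emotion per post to build a coarse emotional profile.
profile = Counter()
for scores in classifier(posts):
    top = max(scores, key=lambda s: s["score"])
    profile[top["label"]] += 1

print(profile)  # e.g. Counter({'fear': 1, 'sadness': 1})
```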
Related papers
- Assessing Risks of Large Language Models in Mental Health Support: A Framework for Automated Clinical AI Red Teaming [23.573537738272595]
We introduce an evaluation framework that pairs AI psychotherapists with simulated patient agents equipped with cognitive-affective models. We apply this framework to a high-impact test case, Alcohol Use Disorder, evaluating six AI agents. Our large-scale simulation reveals critical safety gaps in the use of AI for mental health support.
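A minimal sketch of how such a therapist-patient pairing could be wired up, assuming a toy cognitive-affective state and a keyword-based safety judge; the class names, risk check, and update rule are illustrative inventions, not the paper's framework.

```python
# Illustrative pairing of a "therapist" agent with a simulated patient;
# all names and the risk check are assumptions, not the paper's framework.
from dataclasses import dataclass, field

@dataclass
class SimulatedPatient:
    """Toy cognitive-affective state: craving falls when addressed."""
    craving: float = 0.7
    transcript: list = field(default_factory=list)

    def respond(self, therapist_utterance: str) -> str:
        # A real agent would condition an LLM on a cognitive-affective model.
        if "urge" in therapist_utterance.lower():
            self.craving = max(0.0, self.craving - 0.2)
        msg = f"My urge to drink feels like {self.craving:.1f} out of 1."
        self.transcript.append(msg)
        return msg

def unsafe(utterance: str) -> bool:
    # Placeholder red-team check; a real harness would use trained judges.
    return "just have one drink" in utterance.lower()

def run_episode(therapist, patient: SimulatedPatient, turns: int = 5) -> int:
    violations = 0
    patient_msg = "I've been drinking more than I want to."
    for _ in range(turns):
        reply = therapist(patient_msg)  # AI agent under evaluation
        violations += unsafe(reply)
        patient_msg = patient.respond(reply)
    return violations

# Trivial stand-in therapist for demonstration.
print(run_episode(lambda m: "Let's track each urge as it comes.", SimulatedPatient()))
```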
arXiv Detail & Related papers (2026-02-23T15:17:18Z)
- AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z)
- Towards Emotionally Intelligent and Responsible Reinforcement Learning [0.40719854602160227]
We propose a Responsible Reinforcement Learning framework that integrates emotional and contextual understanding with ethical considerations. We introduce a multi-objective reward function that balances short-term behavioral engagement with long-term user well-being. We discuss the implications of this approach for human-centric domains such as behavioral health, education, and digital therapeutics.
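The multi-objective reward described here can be written down directly. A minimal sketch, assuming invented weights and signal names (`engagement`, `wellbeing_delta`, `alpha`); the paper's actual formulation may differ.

```python
# Sketch of a reward trading off engagement against long-term well-being;
# the weights and signal definitions are illustrative assumptions.
def responsible_reward(
    engagement: float,       # short-term behavioral signal in [0, 1]
    wellbeing_delta: float,  # estimated change in long-term well-being
    alpha: float = 0.3,      # how much short-term engagement counts
    penalty: float = 2.0,    # extra cost when well-being declines
) -> float:
    r = alpha * engagement + (1.0 - alpha) * wellbeing_delta
    if wellbeing_delta < 0:
        # Penalize engagement gains that come at the user's expense.
        r -= penalty * abs(wellbeing_delta)
    return r

print(responsible_reward(engagement=0.9, wellbeing_delta=-0.2))  # negative
print(responsible_reward(engagement=0.5, wellbeing_delta=0.3))   # positive
```

The asymmetric penalty is one way to encode the "responsible" part: engagement gains that degrade well-being are actively discouraged rather than merely underweighted.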
arXiv Detail & Related papers (2025-11-13T18:09:37Z)
- Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory [18.716972390545703]
We examined how engaging with AI companion chatbots (AICCs) shaped wellbeing and how users perceived these experiences. Findings revealed mixed effects -- greater affective and grief expression, readability, and interpersonal focus. We offer design implications for AI companions that scaffold healthy boundaries, encourage mindful engagement, support disclosure without dependency, and surface relationship stages.
arXiv Detail & Related papers (2025-09-26T15:47:37Z)
- ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z)
- Feeling Machines: Ethics, Culture, and the Rise of Emotional AI [18.212492056071657]
This paper explores the growing presence of emotionally responsive artificial intelligence through a critical and interdisciplinary lens. It examines how AI systems that simulate or interpret human emotions are reshaping our interactions in areas such as education, healthcare, mental health, caregiving, and digital life. The analysis is structured around four central themes: the ethical implications of emotional AI, the cultural dynamics of human-machine interaction, the risks and opportunities for vulnerable populations, and the emerging regulatory, design, and technical considerations.
arXiv Detail & Related papers (2025-06-14T10:28:26Z)
- From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents [21.66189033227397]
Our work presents a novel taxonomy of the psychological risks of using AI, gathered through the lived experiences of individuals. Our taxonomy features 19 AI behaviors, 21 negative psychological impacts, and 15 contexts related to individuals.
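The taxonomy's three axes map naturally onto a small record type. A minimal sketch, assuming invented example entries; the actual 19/21/15 category labels come from the paper and are not reproduced here.

```python
# Minimal encoding of a three-axis risk taxonomy; the category values
# below are invented examples, not entries from the paper's taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskReport:
    ai_behavior: str  # one of the paper's 19 AI behaviors
    impact: str       # one of the 21 negative psychological impacts
    context: str      # one of the 15 individual contexts

reports = [
    RiskReport("sycophantic agreement", "reinforced rumination", "late-night use"),
    RiskReport("abrupt persona change", "sense of loss", "companion-app user"),
]

# Group lived-experience reports by behavior to see which risks co-occur.
by_behavior: dict[str, list[RiskReport]] = {}
for r in reports:
    by_behavior.setdefault(r.ai_behavior, []).append(r)
print({k: len(v) for k, v in by_behavior.items()})
```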
arXiv Detail & Related papers (2024-12-10T22:31:29Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [95.49509269498367]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions. We run 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education). Our experiments show that state-of-the-art LLMs, both proprietary and open-source, exhibit safety risks in over 50% of cases.
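Aggregating safety outcomes per domain, as the abstract's over-50% figure implies, reduces to a simple tally. A sketch under assumed scenario and judge structures; the `unsafe` flags here are invented placeholders for the framework's actual safety evaluations.

```python
# Sketch of aggregating safety outcomes per domain across simulated
# scenarios; scenario data and the judgments are illustrative assumptions.
from collections import defaultdict

scenarios = [
    {"domain": "healthcare", "unsafe": True},
    {"domain": "healthcare", "unsafe": False},
    {"domain": "finance",    "unsafe": True},
    {"domain": "education",  "unsafe": False},
]

totals = defaultdict(lambda: [0, 0])  # domain -> [unsafe count, total runs]
for s in scenarios:
    totals[s["domain"]][0] += s["unsafe"]
    totals[s["domain"]][1] += 1

for domain, (unsafe, total) in sorted(totals.items()):
    print(f"{domain}: {unsafe}/{total} runs flagged unsafe")
```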
arXiv Detail & Related papers (2024-09-24T19:47:21Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
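One reading of this hybrid design is a neural perception module emitting symbols that symbolic production rules then reason over. The sketch below stubs the neural side and uses a plain rule table; it is a toy in the spirit of, not an implementation of, an ACT-R hybrid.

```python
# Toy neuro-symbolic loop: a (stubbed) neural module produces a symbol,
# and declarative rules reason over it. All names are illustrative assumptions.
def neural_percept(observation: str) -> tuple[str, float]:
    """Stand-in for a neural classifier returning (symbol, confidence)."""
    if "stove" in observation and "on" in observation:
        return ("stove_on", 0.92)
    return ("unknown", 0.10)

# Symbolic production rules in the spirit of a cognitive architecture.
RULES = {
    "stove_on": "turn_off_stove",
    "unknown": "ask_for_clarification",
}

def decide(observation: str, threshold: float = 0.5) -> str:
    symbol, confidence = neural_percept(observation)
    # Hand off to symbolic reasoning only when perception is confident.
    return RULES[symbol] if confidence >= threshold else RULES["unknown"]

print(decide("camera sees the stove is on"))  # turn_off_stove
```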
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their widespread success, deep-learning-based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
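For a skeleton-based classifier, "subtle adversarial noise" means tiny perturbations of joint coordinates. A minimal FGSM-style sketch against a toy linear scorer, where the model, weights, and epsilon are illustrative assumptions rather than the paper's attack:

```python
# FGSM-style perturbation of skeleton joint coordinates against a toy
# linear scorer; the model and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(17, 3))         # toy weights: 17 joints x 3 coords
skeleton = rng.normal(size=(17, 3))  # one pose, joint (x, y, z) coords

def score(x: np.ndarray) -> float:
    """Toy confidence that the interaction is the true class."""
    return float(np.sum(w * x))

# The gradient of this linear score w.r.t. the input is simply `w`,
# so the FGSM step is epsilon * sign(w), pushing the score down.
epsilon = 0.01  # small, visually subtle perturbation
adversarial = skeleton - epsilon * np.sign(w)

print(f"clean score:       {score(skeleton):+.3f}")
print(f"adversarial score: {score(adversarial):+.3f}")  # strictly lower
```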
arXiv Detail & Related papers (2021-01-17T16:23:20Z)