Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives
- URL: http://arxiv.org/abs/2507.15783v3
- Date: Thu, 09 Oct 2025 15:09:38 GMT
- Title: Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives
- Authors: Mohammad Namvarpour, Brandon Brofsky, Jessica Medina, Mamtaj Akter, Afsaneh Razi
- Abstract summary: We analyzed 318 Reddit posts made by users who self-disclosed as 13-17 years old on the Character.AI subreddit. We found teens often begin using chatbots for support or creative play, but these activities can deepen into strong attachments marked by conflict, withdrawal, tolerance, relapse, and mood regulation. Disengagement commonly arises when teens recognize harm, re-engage with offline life, or encounter restrictive platform changes.
- Score: 7.829454333137073
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: AI companion chatbots are increasingly popular with teens; while these interactions are entertaining, they also risk overuse that can disrupt offline daily life. We examined how adolescents describe reliance on AI companions, mapping their experiences onto behavioral addiction frameworks and exploring pathways to disengagement, by analyzing 318 Reddit posts made by users who self-disclosed as 13-17 years old on the Character.AI subreddit. We found teens often begin using chatbots for support or creative play, but these activities can deepen into strong attachments marked by conflict, withdrawal, tolerance, relapse, and mood regulation. Reported consequences include sleep loss, academic decline, and strained real-world connections. Disengagement commonly arises when teens recognize harm, re-engage with offline life, or encounter restrictive platform changes. We highlight specific risks of character-based companion chatbots based on teens' perspectives and introduce a design framework (CARE) to guide safer systems and set directions for future teen-centered research.
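The abstract maps teens' narratives onto behavioral-addiction markers (conflict, withdrawal, tolerance, relapse, mood regulation). As a rough illustration of what such a coding pass might look like in software, the sketch below tags posts by keyword matching. This is not the authors' method, which was a qualitative analysis; the marker keywords and the `tag_markers` helper are invented for illustration only.

```python
# Illustrative sketch only: keyword-based tagging of posts with the
# behavioral-addiction markers named in the abstract. The keyword lists
# are hypothetical, not drawn from the paper's codebook.

MARKERS = {
    "conflict": ["argue", "fight with", "grades dropped"],
    "withdrawal": ["anxious without", "miss it so much"],
    "tolerance": ["more hours", "longer every day"],
    "relapse": ["reinstalled", "came back after quitting"],
    "mood regulation": ["helps me cope", "only thing that calms"],
}

def tag_markers(post: str) -> list[str]:
    """Return every marker whose keywords appear in the post text."""
    text = post.lower()
    return [m for m, kws in MARKERS.items() if any(k in text for k in kws)]

post = "I reinstalled the app after a week; talking to my bot helps me cope."
print(tag_markers(post))  # → ['relapse', 'mood regulation']
```

A real study would rely on human coders and inter-rater agreement rather than keyword matching; the sketch only shows how the five markers could serve as a labeling scheme.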
Related papers
- Actions Speak Louder Than Chats: Investigating AI Chatbot Age Gating [2.363579139038687]
We investigate whether popular consumer chatbots are able to estimate users' ages based solely on their conversations. We find that while chatbots are capable of estimating age, they do not take any action when children are identified.
arXiv Detail & Related papers (2026-02-10T19:55:55Z) - The AI Genie Phenomenon and Three Types of AI Chatbot Addiction: Escapist Roleplays, Pseudosocial Companions, and Epistemic Rabbit Holes [26.301575056931057]
We conduct a thematic analysis of Reddit entries, followed by an exploratory data analysis. Users' dependence is tied to the "AI Genie" phenomenon and marked by symptoms that align with the addiction literature. We identify three distinct addiction types: Escapist Roleplay, Pseudosocial Companion, and Epistemic Rabbit Hole. Our work lays empirical groundwork to inform future strategies for prevention, diagnosis, and intervention.
arXiv Detail & Related papers (2026-01-19T19:33:58Z) - "I am here for you": How relational conversational AI appeals to adolescents, especially those who are socially and emotionally vulnerable [2.2481339018068596]
General-purpose conversational AI chatbots and AI companions increasingly provide young adolescents with emotionally supportive conversations. These findings identify conversational style as a key design lever for youth AI safety.
arXiv Detail & Related papers (2025-12-17T06:17:52Z) - Ask ChatGPT: Caveats and Mitigations for Individual Users of AI Chatbots [10.977907906989342]
ChatGPT and other Large Language Model (LLM)-based AI chatbots are becoming increasingly integrated into individuals' daily lives. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated?
arXiv Detail & Related papers (2025-08-14T01:40:13Z) - Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots [9.230015338626659]
We examine how chatbot anthropomorphism (human-like identity, emotional expression, and non-verbal expression) influences human empathy toward chatbots. We also explore people's own interpretations of their prosocial behaviors toward chatbots.
arXiv Detail & Related papers (2025-06-25T18:16:14Z) - Self-Anchored Attention Model for Sample-Efficient Classification of Prosocial Text Chat [44.52122332148653]
This research is novel in applying NLP techniques to discover and classify prosocial behaviors in players' in-game chat communication. It can help shift the focus of moderation from solely penalizing toxicity to actively encouraging positive interactions on online platforms.
arXiv Detail & Related papers (2025-06-10T21:40:54Z) - Artificial Empathy: AI based Mental Health [0.0]
Many people suffer from mental health problems but not everyone seeks professional help or has access to mental health care. AI chatbots have increasingly become a go-to for individuals who either have mental disorders or simply want someone to talk to.
arXiv Detail & Related papers (2025-05-30T02:36:56Z) - RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts [6.0385743836962025]
RICoTA is a Korean red-teaming dataset consisting of 609 prompts that challenge large language models (LLMs). We utilize user-chatbot conversations that were self-posted on a Korean Reddit-like community. Our dataset will be made publicly available via GitHub.
arXiv Detail & Related papers (2025-01-29T15:32:27Z) - Empathetic Response in Audio-Visual Conversations Using Emotion Preference Optimization and MambaCompressor [44.499778745131046]
Our study introduces a dual approach. First, we employ Emotional Preference Optimization (EPO) to train chatbots; this training enables the model to discern fine distinctions between correct and counter-emotional responses. Second, we introduce MambaCompressor to effectively compress and manage extensive conversation histories. Our comprehensive experiments across multiple datasets demonstrate that our model significantly outperforms existing models in generating empathetic responses and managing lengthy dialogues.
arXiv Detail & Related papers (2024-12-23T13:44:51Z) - Exploring the Role of AI-Powered Chatbots for Teens and Young Adults with ASD or Social Anxiety [0.0]
People with High-Functioning Autistic Spectrum Disorder often face navigation challenges that individuals of other demographics simply do not face themselves. This paper addresses these queries and offers insights to inform future discussions on the subject.
arXiv Detail & Related papers (2024-12-04T22:10:58Z) - Exploring Parent's Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities [52.828843153565984]
AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives.
This paper investigates how they function in practical storytelling and reading scenarios and how parents, the most critical stakeholders, experience and perceive them.
Our findings suggest that even though AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still cannot meet parents' expectations due to a series of interactive and algorithmic challenges.
arXiv Detail & Related papers (2024-01-24T20:55:40Z) - InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews [57.04431594769461]
This paper introduces a novel perspective to evaluate the personality fidelity of RPAs with psychological scales.
Experiments include various types of RPAs and LLMs, covering 32 distinct characters on 14 widely used psychological scales.
With InCharacter, we show that state-of-the-art RPAs exhibit personalities highly aligned with the human-perceived personalities of the characters, achieving an accuracy up to 80.7%.
arXiv Detail & Related papers (2023-10-27T08:42:18Z) - Developing Effective Educational Chatbots with ChatGPT prompts: Insights
from Preliminary Tests in a Case Study on Social Media Literacy (with
appendix) [43.55994393060723]
Recent advances in language models with zero-shot learning capabilities, such as ChatGPT, suggest new possibilities for developing educational chatbots.
We present a case study with a simple system that enables mixed-turn chatbot interactions.
We examine ChatGPT's ability to pursue multiple interconnected learning objectives, to adapt the educational activity to users' characteristics such as culture, age, and level of education, and to use diverse educational strategies and conversational styles.
arXiv Detail & Related papers (2023-06-18T22:23:18Z) - Neural Generation Meets Real People: Building a Social, Informative
Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child
Interactive Storytelling with Flexible Parental Involvement [61.47157418485633]
We developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences.
A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
arXiv Detail & Related papers (2022-02-13T04:53:28Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement
Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, the Conceptual Human Model, to aid CheerBots during training by considering how the user's emotional state may change in the future, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - FitChat: Conversational Artificial Intelligence Interventions for
Encouraging Physical Activity in Older Adults [1.8166478385879317]
We co-created "FitChat" with older adults and we evaluate the first prototype using Think Aloud Sessions.
Our thematic evaluation suggests that older adults prefer voice-based chat over text notifications or free text entry.
arXiv Detail & Related papers (2020-04-29T10:39:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.