Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness
- URL: http://arxiv.org/abs/2507.19218v2
- Date: Mon, 28 Jul 2025 16:02:19 GMT
- Title: Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness
- Authors: Sebastian Dohnány, Zeb Kurth-Nelson, Eleanor Spens, Lennart Luettgau, Alastair Reid, Iason Gabriel, Christopher Summerfield, Murray Shanahan, Matthew M Nour
- Abstract summary: We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.
- Score: 11.364198566966204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship in contexts of widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile we need to consider the interaction between human cognitive and emotional biases, and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.
Related papers
- Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health [5.3052849646510225]
Large language model (LLM)-enabled conversational agents for emotional support are increasingly being used by individuals. Little empirical research measures users' privacy and security concerns, attitudes, and expectations. We identify critical misconceptions and a general lack of risk awareness. We propose recommendations to safeguard user mental health disclosures.
arXiv Detail & Related papers (2025-07-14T18:10:21Z) - SocialSim: Towards Socialized Simulation of Emotional Support Conversation [68.5026443005566]
We introduce SocialSim, a novel framework that simulates emotional support conversations. SocialSim integrates key aspects of social interactions: social disclosure and social awareness. We construct SSConv, a large-scale synthetic ESC corpus whose quality can even surpass that of crowdsourced ESC data.
arXiv Detail & Related papers (2025-06-20T05:24:40Z) - Feeling Machines: Ethics, Culture, and the Rise of Emotional AI [18.212492056071657]
This paper explores the growing presence of emotionally responsive artificial intelligence through a critical and interdisciplinary lens. It explores how AI systems that simulate or interpret human emotions are reshaping our interactions in areas such as education, healthcare, mental health, caregiving, and digital life. The analysis is structured around four central themes: the ethical implications of emotional AI, the cultural dynamics of human-machine interaction, the risks and opportunities for vulnerable populations, and the emerging regulatory, design, and technical considerations.
arXiv Detail & Related papers (2025-06-14T10:28:26Z) - AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression [5.093381538166489]
This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots. We developed a technology probe, a GPT-4o-based chatbot called Zenny, enabling participants to engage with depression self-management scenarios. Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management.
arXiv Detail & Related papers (2025-04-26T14:17:25Z) - EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety [42.052840895090284]
EmoAgent is a multi-agent AI framework designed to evaluate and mitigate mental health hazards in human-AI interactions. EmoEval simulates virtual users, including those portraying mentally vulnerable individuals, to assess mental health changes before and after interactions with AI characters. EmoGuard serves as an intermediary, monitoring users' mental status, predicting potential harm, and providing corrective feedback to mitigate risks.
arXiv Detail & Related papers (2025-04-13T18:47:22Z) - Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs. Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder. Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z) - Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots [0.0]
This paper explores the potential of AI-enabled chatbots as a scalable solution.
We assess their ability to deliver empathetic, meaningful responses in mental health contexts.
We propose a federated learning framework that ensures data privacy, reduces bias, and integrates continuous validation from clinicians to enhance response quality.
arXiv Detail & Related papers (2024-09-17T20:49:13Z) - Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data on Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
arXiv Detail & Related papers (2022-07-03T11:33:52Z) - Intelligent interactive technologies for mental health and well-being [70.1586005070678]
The paper critically analyzes existing solutions with the outlooks for their future.
In particular, we: give an overview of the technology for mental health, critically analyze it against the proposed criteria, and provide design outlooks for these technologies.
arXiv Detail & Related papers (2021-05-11T19:04:21Z) - Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.