AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression
- URL: http://arxiv.org/abs/2504.18932v1
- Date: Sat, 26 Apr 2025 14:17:25 GMT
- Title: AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression
- Authors: Dong Whi Yoo, Jiayue Melissa Shi, Violeta J. Rodriguez, Koustuv Saha,
- Abstract summary: This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots. We developed a technology probe, a GPT-4o based chatbot called Zenny, enabling participants to engage with depression self-management scenarios. Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management.
- Score: 5.093381538166489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in LLMs enable chatbots to interact with individuals on a range of queries, including sensitive mental health contexts. Despite uncertainties about their effectiveness and reliability, the development of LLMs in these areas is growing, potentially leading to harms. To better identify and mitigate these harms, it is critical to understand how the values of people with lived experiences relate to the harms. In this study, we developed a technology probe, a GPT-4o based chatbot called Zenny, enabling participants to engage with depression self-management scenarios informed by previous research. We used Zenny to interview 17 individuals with lived experiences of depression. Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management. This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots, aiming to enhance self-management support while minimizing risks.
Related papers
- "It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool [1.223779595809275]
Large language models (LLMs) offer new capabilities in conversational fluency, empathy simulation, and availability. This study explores how users engage with LLMs as mental health tools by analyzing over 10,000 TikTok comments. Results show that nearly 20% of comments reflect personal use, with these users expressing overwhelmingly positive attitudes.
arXiv Detail & Related papers (2025-04-14T17:37:32Z)
- EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety [47.57801326804086]
EmoAgent is a multi-agent AI framework designed to evaluate and mitigate mental health hazards in human-AI interactions. EmoEval simulates virtual users, including those portraying mentally vulnerable individuals, to assess mental health changes before and after interactions with AI characters. EmoGuard serves as an intermediary, monitoring users' mental status, predicting potential harm, and providing corrective feedback to mitigate risks.
arXiv Detail & Related papers (2025-04-13T18:47:22Z)
- Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework, which enables LLMs to continuously optimize their value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z)
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs.
Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder.
Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots [0.0]
This paper explores the potential of AI-enabled chatbots as a scalable solution.
We assess their ability to deliver empathetic, meaningful responses in mental health contexts.
We propose a federated learning framework that ensures data privacy, reduces bias, and integrates continuous validation from clinicians to enhance response quality.
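The federated learning idea behind such privacy-preserving frameworks can be illustrated with the standard federated averaging step: clients train locally and share only model parameters, which a server combines. A minimal sketch follows; it is a generic illustration of federated averaging, not the framework proposed in the paper.

```python
# Minimal sketch of federated averaging: the server combines per-client
# parameter vectors weighted by each client's local dataset size, so raw
# user data never leaves the client. Illustrative only.

def fed_avg(client_params, client_sizes):
    """client_params: list of parameter vectors (lists of floats), one per
    client. client_sizes: number of local samples at each client.
    Returns the size-weighted average parameter vector."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[i] * n for params, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]
```

In practice each round alternates local training on private data with one such aggregation step on the server.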
arXiv Detail & Related papers (2024-09-17T20:49:13Z)
- PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
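One of the psychometric metrics named above, reliability, is commonly estimated with Cronbach's alpha (internal consistency across scale items). A minimal sketch follows; the formula is standard psychometrics, and the example scores are hypothetical, not data from the paper.

```python
# Minimal sketch of Cronbach's alpha, a standard internal-consistency
# reliability metric: alpha = k/(k-1) * (1 - sum of item variances /
# variance of total scores). Illustrative; not the paper's implementation.

def cronbach_alpha(items):
    """items: list of per-item score lists, one score per respondent,
    all of equal length. Returns Cronbach's alpha."""
    k = len(items)                     # number of scale items
    n = len(items[0])                  # number of respondents

    def var(xs):                       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))
```

Identical item responses yield alpha of 1.0; uncorrelated items drive it toward 0.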
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support [35.61580610996628]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
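The EmotionPrompt technique amounts to appending an emotional stimulus sentence to an ordinary task prompt. A minimal sketch of that composition step follows; the stimulus wording here is illustrative and may not match the exact stimuli evaluated in the paper.

```python
# Minimal sketch of EmotionPrompt-style prompting: append an emotional
# stimulus sentence to the task prompt before sending it to an LLM.
# Stimulus phrasings below are illustrative examples, not the paper's
# verified stimulus set.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "Believe in your abilities and strive for excellence.",
]

def with_emotion_prompt(task_prompt, stimulus_index=0):
    """Return the task prompt with an emotional stimulus appended."""
    return f"{task_prompt} {EMOTIONAL_STIMULI[stimulus_index]}"
```

The augmented prompt is then passed to the model unchanged; only the suffix differs from the baseline prompt.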
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- LLM-empowered Chatbots for Psychiatrist and Patient Simulation: Application and Evaluation [18.98839299694749]
This work focuses on exploring the potential of ChatGPT in powering chatbots for psychiatrist and patient simulation.
We collaborate with psychiatrists to identify objectives and iteratively develop the dialogue system to closely align with real-world scenarios.
In the evaluation experiments, we recruit real psychiatrists and patients to engage in diagnostic conversations with the chatbots, collecting their ratings for assessment.
arXiv Detail & Related papers (2023-05-23T02:25:01Z)
- Chatbots for Mental Health Support: Exploring the Impact of Emohaa on Reducing Mental Distress in China [50.12173157902495]
The study included 134 participants, split into three groups: Emohaa (CBT-based), Emohaa (Full) and control.
Emohaa is a conversational agent that provides cognitive support through CBT-based exercises and guided conversations.
It also supports users emotionally by enabling them to vent about the emotional problems they wish to discuss.
arXiv Detail & Related papers (2022-09-21T08:23:40Z)
- Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data on Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
arXiv Detail & Related papers (2022-07-03T11:33:52Z)
- Mental Health Assessment for the Chatbots [39.081479891611664]
We argue that a chatbot should have a healthy mental tendency in order to avoid negative psychological impact on its users.
We establish several mental health assessment dimensions for chatbots and introduce the questionnaire-based mental health assessment methods.
arXiv Detail & Related papers (2022-01-14T10:38:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.