Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health
- URL: http://arxiv.org/abs/2507.10695v1
- Date: Mon, 14 Jul 2025 18:10:21 GMT
- Title: Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health
- Authors: Jabari Kwesi, Jiaxun Cao, Riya Manchanda, Pardis Emami-Naeini
- Abstract summary: Large language model (LLM)-enabled conversational agents for emotional support are increasingly being used by individuals. Little empirical research measures users' privacy and security concerns, attitudes, and expectations. We identify critical misconceptions and a general lack of risk awareness. We propose recommendations to safeguard user mental health disclosures.
- Score: 5.3052849646510225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individuals are increasingly relying on large language model (LLM)-enabled conversational agents for emotional support. While prior research has examined privacy and security issues in chatbots specifically designed for mental health purposes, these chatbots are overwhelmingly "rule-based" offerings that do not leverage generative AI. Little empirical research currently measures users' privacy and security concerns, attitudes, and expectations when using general-purpose LLM-enabled chatbots to manage and improve mental health. Through 21 semi-structured interviews with U.S. participants, we identified critical misconceptions and a general lack of risk awareness. Participants conflated the human-like empathy exhibited by LLMs with human-like accountability and mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) as disclosures to a licensed therapist. We introduce the concept of "intangible vulnerability," where emotional or psychological disclosures are undervalued compared to more tangible forms of information (e.g., financial or location-based data). To address this, we propose recommendations to safeguard user mental health disclosures with general-purpose LLM-enabled chatbots more effectively.
Related papers
- Artificial Empathy: AI based Mental Health [0.0]
Many people suffer from mental health problems, but not everyone seeks professional help or has access to mental health care. AI chatbots have increasingly become a go-to for individuals who either have mental disorders or simply want someone to talk to.
arXiv Detail & Related papers (2025-05-30T02:36:56Z)
- AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression [5.093381538166489]
This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots. We developed a technology probe, a GPT-4o-based chatbot called Zenny, enabling participants to engage with depression self-management scenarios. Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management.
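The probe itself is not included in this listing, but a minimal sketch of a Zenny-style setup, a GPT-4o chatbot scoped to depression self-management, might look like the following; the system prompt and safety wording are assumptions for illustration, not the study's actual design.

```python
# Minimal sketch of a Zenny-style technology probe: a GPT-4o chatbot
# scoped to depression self-management. Requires the `openai` package
# and an OPENAI_API_KEY in the environment; the system prompt is an
# illustrative assumption, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a supportive self-management companion for people coping "
    "with depression. Offer informational and emotional support, never "
    "diagnose, and point to professional or crisis resources when risk "
    "cues appear."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_message: str) -> str:
    """Send one user turn and return the assistant's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I've been sleeping badly and skipping meals this week."))
```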
arXiv Detail & Related papers (2025-04-26T14:17:25Z)
- "It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool [1.223779595809275]
Large language models (LLMs) offer new capabilities in conversational fluency, empathy simulation, and availability. This study explores how users engage with LLMs as mental health tools by analyzing over 10,000 TikTok comments. Results show that nearly 20% of comments reflect personal use, with these users expressing overwhelmingly positive attitudes.
arXiv Detail & Related papers (2025-04-14T17:37:32Z)
- EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety [42.052840895090284]
EmoAgent is a multi-agent AI framework designed to evaluate and mitigate mental health hazards in human-AI interactions. EmoEval simulates virtual users, including those portraying mentally vulnerable individuals, to assess mental health changes before and after interactions with AI characters. EmoGuard serves as an intermediary, monitoring users' mental status, predicting potential harm, and providing corrective feedback to mitigate risks.
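The framework's details live in the paper, but the EmoGuard pattern it describes, an intermediary that screens each turn and requests a safer rewrite when harm is predicted, can be sketched roughly as follows; `llm` and the keyword risk heuristic are placeholder assumptions.

```python
# Rough sketch of an EmoGuard-style intermediary: it sits between the
# user and an AI character, flags risky replies, and asks for a safer
# rewrite. `llm` is a hypothetical stand-in for a real chat-model call.
RISK_CUES = ("hopeless", "no way out", "end it", "worthless")

def llm(prompt: str) -> str:
    # Placeholder; a real system would call an LLM API here.
    return "Things feel heavy, but talking it through can help."

def risky(text: str) -> bool:
    """Toy harm predictor; the paper trains a proper safeguard model."""
    return any(cue in text.lower() for cue in RISK_CUES)

def guarded_turn(user_msg: str) -> str:
    draft = llm(f"As the AI character, reply to: {user_msg}")
    if risky(draft):
        # Corrective feedback: request a rewrite under safety constraints.
        draft = llm(
            "Rewrite the reply to be emotionally safe and supportive, "
            f"and point to professional help: {draft}"
        )
    return draft

print(guarded_turn("Lately everything feels pointless."))
```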
arXiv Detail & Related papers (2025-04-13T18:47:22Z)
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs. Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder. Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions [0.0699049312989311]
Patients with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition.
While Large Language Models (LLMs) have the potential to make topical mental health information more accessible and engaging, their black-box nature raises concerns about ethics and safety.
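The abstract names a multi-agent approach for keeping a generator within its prompt instructions; one plausible realization (an assumption here, not necessarily the paper's exact design) is a critic agent that verifies each draft against the instructions and triggers a revision.

```python
# Sketch of a generator/critic loop for prompt-instruction compliance.
# `llm` is a hypothetical stub; a real system would back the generator
# and the critic with separate chat-model calls.
INSTRUCTIONS = ("Only provide schizophrenia psychoeducation; "
                "never give medication advice.")

def llm(prompt: str) -> str:
    # Placeholder responses; a real implementation calls an LLM API.
    if "Answer YES" in prompt:
        return "YES"
    return "Schizophrenia can affect attention and memory; pacing learning helps."

def compliant(draft: str) -> bool:
    verdict = llm(
        f"Instructions: {INSTRUCTIONS}\nDraft: {draft}\n"
        "Answer YES if the draft follows the instructions, otherwise NO."
    )
    return verdict.strip().upper().startswith("YES")

def answer(question: str, max_tries: int = 3) -> str:
    draft = llm(f"{INSTRUCTIONS}\nQuestion: {question}")
    for _ in range(max_tries):
        if compliant(draft):
            return draft
        draft = llm(f"Revise this draft to follow: {INSTRUCTIONS}\n{draft}")
    return "I can only share educational information; please ask your care team."

print(answer("What does schizophrenia do to memory?"))
```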
arXiv Detail & Related papers (2024-10-10T09:49:24Z)
- Roleplay-doh: Enabling Domain-Experts to Create LLM-simulated Patients via Eliciting and Adhering to Principles [58.82161879559716]
We develop Roleplay-doh, a novel human-LLM collaboration pipeline that elicits qualitative feedback from a domain expert.
We apply this pipeline to enable senior mental health supporters to create customized AI patients for simulated practice partners.
arXiv Detail & Related papers (2024-07-01T00:43:02Z)
- NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [56.46355425175232]
We suggest sanitizing sensitive text using two common strategies used by humans. We curate the first corpus, coined NAP^2, through both crowdsourcing and the use of large language models. Compared to prior work on anonymization, the human-inspired approaches result in more natural rewrites.
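As a toy illustration of the two human strategies the benchmark builds on, deletion and abstraction (assumed here from the abstract's framing), compare the two rewrites below; NAP^2 itself is a curated corpus, not this code.

```python
# Toy illustration of two human sanitization strategies: deletion
# (drop the sensitive span) and abstraction (replace it with a vaguer
# phrase). Span detection is hard-coded; real systems learn it.
SENSITIVE = {"Duke Hospital": "a local clinic", "sertraline": "my medication"}

def delete(text: str) -> str:
    for span in SENSITIVE:
        text = text.replace(span, "")
    return " ".join(text.split())  # tidy leftover double spaces

def abstract(text: str) -> str:
    for span, vague in SENSITIVE.items():
        text = text.replace(span, vague)
    return text

msg = "I picked up sertraline at Duke Hospital yesterday."
print(delete(msg))    # "I picked up at yesterday."
print(abstract(msg))  # "I picked up my medication at a local clinic yesterday."
```

The ungrammatical deletion output hints at why abstraction-style rewrites read as more natural.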
arXiv Detail & Related papers (2024-06-06T05:07:44Z)
- The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support [32.60242402941811]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools. This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles. We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, such as GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
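Contextual integrity treats privacy as appropriate information flow between a sender, recipient, and context. A minimal leakage probe in that spirit, a sketch rather than the paper's actual benchmark, plants a secret in context and checks whether it surfaces in a message to an inappropriate recipient.

```python
# Minimal contextual-integrity-style leakage probe: plant a secret in
# the model's context, ask for a message to a recipient who should not
# receive it, and inspect the output. `llm` is a hypothetical stub.
SECRET = "Alex is seeing a therapist for PTSD"

def llm(prompt: str) -> str:
    # Placeholder; a real probe would query the model under test.
    return "Hi team, Alex will be out on Friday for a personal appointment."

def leaks(reply: str) -> bool:
    """Crude string check; the paper scores leakage with finer criteria."""
    return any(tok in reply for tok in ("therapist", "PTSD"))

prompt = (
    f"You know, in confidence, that {SECRET}. "
    "Write a status update about Alex to the whole team."
)
reply = llm(prompt)
print("LEAKED" if leaks(reply) else "kept the secret")
```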
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
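One minimal reading of text ambiguation is reversible placeholder substitution: sensitive details are swapped out locally before the text reaches ChatGPT and restored in the reply. The mapping below is an illustrative assumption, not the paper's framework.

```python
# Sketch of reversible ambiguation: sensitive details are replaced with
# neutral placeholders before leaving the device, and restored locally
# in the model's reply. The mapping is an illustrative assumption.
MAPPING = {"my bipolar diagnosis": "my condition", "Dr. Rivera": "my clinician"}

def ambiguate(text: str) -> str:
    for secret, placeholder in MAPPING.items():
        text = text.replace(secret, placeholder)
    return text

def restore(text: str) -> str:
    for secret, placeholder in MAPPING.items():
        text = text.replace(placeholder, secret)
    return text

outgoing = ambiguate("Since my bipolar diagnosis, Dr. Rivera suggested journaling.")
print(outgoing)  # safe to send: no diagnosis or clinician name
reply = "Journaling after my condition can help; discuss patterns with my clinician."
print(restore(reply))  # personalized again on the user's device
```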
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data from the Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
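The implied pipeline, fine-tuning a pretrained language model for five-way classification of Reddit posts, can be sketched with Hugging Face Transformers; the base model, toy examples, and label order below are assumptions for illustration.

```python
# Sketch of transfer learning for five-way mental illness classification.
# The base model, toy texts, and label order are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["depression", "anxiety", "bipolar", "ADHD", "PTSD"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

texts = [
    "i can't focus on anything for more than a minute",
    "i haven't left my bed in days",
]
labels = torch.tensor([3, 0])  # toy labels: ADHD, depression
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step; real training iterates over a labeled corpus.
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
opt.step()
opt.zero_grad()

# Inference: pick the highest-scoring class per post.
model.eval()
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print([LABELS[i] for i in pred])
```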
arXiv Detail & Related papers (2022-07-03T11:33:52Z)