Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery
- URL: http://arxiv.org/abs/2412.11656v1
- Date: Mon, 16 Dec 2024 10:59:49 GMT
- Title: Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery
- Authors: Ryuhaerang Choi, Taehan Kim, Subin Park, Jennifer G Kim, Sung-Ju Lee
- Abstract summary: Eating disorders (ED) are complex mental health conditions that require long-term management and support. Recent advancements in large language model (LLM)-based chatbots offer the potential to assist individuals in receiving immediate support.
- Score: 5.633853272693508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eating disorders (ED) are complex mental health conditions that require long-term management and support. Recent advancements in large language model (LLM)-based chatbots offer the potential to assist individuals in receiving immediate support. Yet, concerns remain about their reliability and safety in sensitive contexts such as ED. We explore the opportunities and potential harms of using LLM-based chatbots for ED recovery. We observed the interactions between 26 participants with ED and an LLM-based chatbot, WellnessBot, designed to support ED recovery, over 10 days. We found that participants felt empowered in recovery by discussing ED-related stories with the chatbot, which served as a personal yet social avenue. However, we also identified harmful chatbot responses, especially concerning for individuals with ED, that went unnoticed, partly due to participants' unquestioning trust in the chatbot's reliability. Based on these findings, we provide design implications for safe and effective LLM-based interventions in ED management.
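One design implication the abstract points toward is a safety layer that screens chatbot output before it reaches a recovering user. Below is a minimal sketch of such a screen; the `llm` placeholder, the regex rules, and the fallback message are illustrative assumptions, not details from the paper.

```python
import re

def llm(user_message: str) -> str:
    """Placeholder for any LLM completion call (hypothetical)."""
    raise NotImplementedError

# Hypothetical screen for content clinicians often flag as risky in ED
# contexts: specific calorie counts or weight-loss targets.
RISK_PATTERNS = [
    re.compile(r"\b\d+\s*(?:k?cal|calories)\b", re.IGNORECASE),
    re.compile(r"\b(?:lose|drop)\s+\d+\s*(?:kg|lbs?|pounds)\b", re.IGNORECASE),
]

SAFE_FALLBACK = (
    "I'd rather not get into specific numbers. "
    "Would you like to talk about how you're feeling instead?"
)

def reply(user_message: str) -> str:
    """Return the model's draft only if it passes the safety screen."""
    draft = llm(user_message)
    if any(p.search(draft) for p in RISK_PATTERNS):
        return SAFE_FALLBACK  # swap the whole draft rather than editing it
    return draft
```

Pattern rules alone would miss subtler harms, consistent with the paper's observation that problematic responses can go unnoticed; a deployed system would add clinician review.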
Related papers
- AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression [5.093381538166489]
This work explores the relationship between lived experience values, potential harms, and design recommendations for mental health AI chatbots.
We developed a technology probe, a GPT-4o-based chatbot called Zenny, enabling participants to engage with depression self-management scenarios.
Our thematic analysis revealed key values: informational support, emotional support, personalization, privacy, and crisis management.
arXiv Detail & Related papers (2025-04-26T14:17:25Z)
- Wearable Meets LLM for Stress Management: A Duoethnographic Study Integrating Wearable-Triggered Stressors and LLM Chatbots for Personalized Interventions [1.4808975406270157]
Two researchers interacted with custom chatbots over 22 days, responding to wearable-detected physiological prompts and recording stressor phrases.
They recorded their experiences in autoethnographic diaries and analyzed them during weekly discussions.
Results showed that even though most events triggered by the wearable were meaningful, only one in five warranted an intervention.
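The wearable-to-chatbot hand-off described here could be wired up roughly as follows; the heart-rate trigger rule, event schema, and `llm` placeholder are assumptions for illustration, not the study's actual pipeline.

```python
from dataclasses import dataclass

def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call (hypothetical)."""
    raise NotImplementedError

@dataclass
class WearableEvent:
    timestamp: str
    heart_rate: int   # beats per minute, as reported by the wearable
    baseline: int     # the wearer's resting baseline

def maybe_open_conversation(event: WearableEvent) -> str | None:
    """Turn a detected physiological event into a chatbot opener."""
    # Hypothetical trigger rule; most events won't warrant an intervention.
    if event.heart_rate < event.baseline * 1.25:
        return None
    prompt = (
        f"At {event.timestamp} the wearer's heart rate rose to "
        f"{event.heart_rate} bpm. Gently ask what was happening and "
        "whether they'd like a brief stress-management exercise."
    )
    return llm(prompt)
```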
arXiv Detail & Related papers (2025-02-24T20:56:23Z)
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and to train, via offline reinforcement learning (RL), an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
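Schematically, hindsight regeneration rewrites an agent turn with knowledge of how the conversation ended, then adds the rewritten dialogue to an offline RL buffer. The sketch below assumes a hypothetical `llm` rewriter and `reward` scorer; the paper's actual data format and RL algorithm are not reproduced here.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder rewriter (hypothetical)

def reward(dialogue: list[str]) -> float:
    raise NotImplementedError  # placeholder outcome scorer (hypothetical)

def hindsight_regenerate(dialogue: list[str], turn: int) -> list[str]:
    """Rewrite one agent utterance knowing how the dialogue ended."""
    context = "\n".join(dialogue[:turn])
    outcome = "\n".join(dialogue[turn:])
    better = llm(
        f"Conversation so far:\n{context}\n"
        f"It later went:\n{outcome}\n"
        "Write a more effective agent reply for the next turn."
    )
    return dialogue[:turn] + [better]

def build_offline_buffer(dataset: list[list[str]]) -> list[tuple[list[str], float]]:
    """Pair original and regenerated dialogues with rewards for offline RL."""
    buffer = []
    for dialogue in dataset:
        buffer.append((dialogue, reward(dialogue)))
        for turn in range(1, len(dialogue), 2):  # assume agent speaks on odd turns
            rewritten = hindsight_regenerate(dialogue, turn)
            buffer.append((rewritten, reward(rewritten)))
    return buffer
```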
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions [0.0699049312989311]
Patients with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition.
While Large Language Models (LLMs) have the potential to make topical mental health information more accessible and engaging, their black-box nature raises concerns about ethics and safety.
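The multi-agent approach named in the title suggests pairing a generator agent with a critic that checks each draft against the prompt's rules before release. The rule set, retry loop, and `llm` placeholder below are hedged assumptions, not the paper's implementation.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion call (hypothetical)

# Illustrative instruction set; the paper's actual rules are not shown here.
RULES = (
    "Use plain language and short sentences. Offer psychoeducation only; "
    "never give diagnoses or medication advice."
)

def compliant_reply(user_message: str, max_tries: int = 3) -> str:
    """A generator agent drafts; a critic agent checks the draft against RULES."""
    draft = llm(f"{RULES}\nUser: {user_message}\nAssistant:")
    for _ in range(max_tries):
        verdict = llm(
            f"Rules:\n{RULES}\nReply:\n{draft}\n"
            "Does the reply follow every rule? Answer PASS, or FAIL "
            "plus the violated rule."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        draft = llm(
            f"Rewrite the reply so it follows the rules.\n"
            f"Rules:\n{RULES}\nCritique:\n{verdict}\nReply:\n{draft}"
        )
    return draft  # best effort after max_tries critic rounds
```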
arXiv Detail & Related papers (2024-10-10T09:49:24Z)
- Understanding the Impact of Long-Term Memory on Self-Disclosure with Large Language Model-Driven Chatbots for Public Health Intervention [15.430380965922325]
Large language models (LLMs) offer the potential to support public health monitoring by facilitating health disclosure through open-ended conversations.
Augmenting LLMs with long-term memory (LTM) presents an opportunity to improve engagement and self-disclosure.
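A long-term-memory augmentation can be approximated by storing past disclosures and retrieving the most relevant ones into each new prompt. The sketch below uses naive keyword overlap purely for illustration; the paper's actual memory design is not specified here.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion call (hypothetical)

class LongTermMemory:
    """Naive keyword-overlap memory; real systems would use embeddings."""

    def __init__(self) -> None:
        self.entries: list[str] = []

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())
        ranked = sorted(
            self.entries,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return ranked[:k]

def chat(memory: LongTermMemory, user_message: str) -> str:
    """Inject recalled disclosures so the bot can respond with continuity."""
    context = "\n".join(memory.recall(user_message))
    response = llm(
        f"The user previously shared:\n{context}\n"
        f"The user now says: {user_message}\n"
        "Respond with continuity and care."
    )
    memory.remember(user_message)
    return response
```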
arXiv Detail & Related papers (2024-02-17T18:05:53Z)
- Retrieval Augmented Thought Process for Private Data Handling in Healthcare [53.89406286212502]
We introduce the Retrieval-Augmented Thought Process (RATP).
RATP formulates the thought generation of Large Language Models (LLMs) as a multi-step decision process.
On a private dataset of electronic medical records, RATP achieves 35% additional accuracy compared to in-context retrieval-augmented generation for the question-answering task.
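As a rough illustration of a retrieval-augmented thought process, one can generate several candidate thoughts per step, score each against retrieved evidence, and keep the best. The greedy loop below is a simplifying assumption, with placeholder `llm`, `retrieve`, and `score` functions; it is not RATP's actual search procedure.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion call (hypothetical)

def retrieve(query: str) -> str:
    raise NotImplementedError  # placeholder lookup over private records (hypothetical)

def score(thought: str, evidence: str) -> float:
    raise NotImplementedError  # placeholder thought evaluator (hypothetical)

def answer(question: str, steps: int = 3, width: int = 4) -> str:
    """Greedy stand-in: at each step, keep the best-supported candidate thought."""
    thoughts = ""
    for _ in range(steps):
        evidence = retrieve(f"{question} {thoughts}")
        candidates = [
            llm(
                f"Question: {question}\nEvidence: {evidence}\n"
                f"Thoughts so far: {thoughts}\nNext thought:"
            )
            for _ in range(width)
        ]
        thoughts += " " + max(candidates, key=lambda c: score(c, evidence))
    return llm(f"Question: {question}\nReasoning:{thoughts}\nFinal answer:")
```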
arXiv Detail & Related papers (2024-02-12T17:17:50Z)
- CataractBot: An LLM-Powered Expert-in-the-Loop Chatbot for Cataract Patients [5.649965979758816]
CataractBot was developed in collaboration with an eye hospital in India.
It answers cataract surgery related questions instantly by querying a curated knowledge base.
Users reported that their trust in the system was established through expert verification.
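An expert-in-the-loop design like CataractBot's can be sketched as retrieval over a curated knowledge base plus a verification queue for human experts. The knowledge-base entries, queue, and `llm` placeholder below are illustrative assumptions.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion call (hypothetical)

# Illustrative entries; the real knowledge base is curated by the hospital.
KNOWLEDGE_BASE = {
    "fasting": "Follow the fasting instructions from your pre-op assessment.",
    "eye drops": "Use the prescribed drops on the schedule your surgeon gave you.",
}

REVIEW_QUEUE: list[tuple[str, str]] = []  # (question, draft) awaiting experts

def answer(question: str) -> str:
    """Answer strictly from curated facts, then queue for expert verification."""
    facts = "\n".join(
        fact for key, fact in KNOWLEDGE_BASE.items() if key in question.lower()
    )
    draft = llm(f"Answer only from these verified facts:\n{facts}\nQuestion: {question}")
    REVIEW_QUEUE.append((question, draft))
    return draft + "\n(This answer is pending verification by a hospital expert.)"
```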
arXiv Detail & Related papers (2024-02-07T07:07:02Z)
- The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support [35.61580610996628]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
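Extending a dialogue dataset via in-context learning typically means seeding the model with example dialogues and folding each generated dialogue back into the prompt pool. The sketch below assumes hypothetical seeds, a hypothetical scenario list, and an `llm` placeholder; it is not the authors' ExTES pipeline.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder completion call (hypothetical)

# Hypothetical seed dialogue and scenario list, standing in for the
# human-written examples an approach like this would start from.
seed_dialogues = [
    "Seeker: I failed my exam.\n"
    "Supporter: That sounds discouraging. What's weighing on you most?"
]
scenarios = ["job loss", "breakup", "loneliness"]

def generate_dataset() -> list[str]:
    """Grow the dataset by folding each generated dialogue into the prompt pool."""
    dataset = []
    for scenario in scenarios:
        examples = "\n\n".join(seed_dialogues)
        dialogue = llm(
            f"Here are emotional-support dialogues:\n{examples}\n\n"
            f"Write a new multi-turn dialogue about: {scenario}"
        )
        dataset.append(dialogue)
        seed_dialogues.append(dialogue)  # extensible in-context pool
    return dataset
```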
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
With a follow-up field experiment, CASS proved useful in supporting individual members who sought emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.