The Typing Cure: Experiences with Large Language Model Chatbots for
Mental Health Support
- URL: http://arxiv.org/abs/2401.14362v2
- Date: Wed, 6 Mar 2024 20:41:53 GMT
- Title: The Typing Cure: Experiences with Large Language Model Chatbots for
Mental Health Support
- Authors: Inhwa Song, Sachin R. Pendse, Neha Kumar, Munmun De Choudhury
- Abstract summary: People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
- Score: 35.61580610996628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People experiencing severe distress increasingly use Large Language Model
(LLM) chatbots as mental health support tools. Discussions on social media have
described how engagements were lifesaving for some, but evidence suggests that
general-purpose LLM chatbots also have notable risks that could endanger the
welfare of users if not designed responsibly. In this study, we investigate the
lived experiences of people who have used LLM chatbots for mental health
support. We build on interviews with 21 individuals from globally diverse
backgrounds to analyze how users create unique support roles for their
chatbots, fill in gaps in everyday care, and navigate associated cultural
limitations when seeking support from chatbots. We ground our analysis in
psychotherapy literature around effective support, and introduce the concept of
therapeutic alignment, or aligning AI with therapeutic values for mental health
contexts. Our study offers recommendations for how designers can approach the
ethical and effective use of LLM chatbots and other AI mental health support
tools in mental health care.
Related papers
- Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions [0.0699049312989311]
Patients with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition.
While Large Language Models (LLMs) have the potential to make topical mental health information more accessible and engaging, their black-box nature raises concerns about ethics and safety.
arXiv Detail & Related papers (2024-10-10T09:49:24Z)
- LLM Roleplay: Simulating Human-Chatbot Interaction [52.03241266241294]
We propose a goal-oriented, persona-based method to automatically generate diverse multi-turn dialogues simulating human-chatbot interaction.
Our method can simulate human-chatbot dialogues with a high indistinguishability rate.
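The goal-oriented, persona-based simulation described above can be sketched as a loop that alternates simulated-user and chatbot turns. The `reply` function below is a stand-in for an actual LLM call, and the persona and goal strings are illustrative placeholders, not the paper's method.

```python
# Minimal sketch of goal-oriented, persona-based dialogue simulation.
# `reply` is a placeholder for an LLM call; a real system would prompt a
# model with the persona description and the conversation goal.

def reply(persona, goal, history):
    # Placeholder generation: returns a labeled stub utterance.
    return f"[{persona}] turn {len(history) + 1} toward goal: {goal}"

def simulate(persona, goal, turns=3):
    """Generate a multi-turn user/chatbot dialogue transcript."""
    history = []
    for _ in range(turns):
        history.append(("user", reply(persona, goal, history)))
        history.append(("chatbot", f"response to: {history[-1][1]}"))
    return history

dialogue = simulate(persona="anxious student", goal="plan a study schedule")
print(len(dialogue))  # 6 utterances: 3 user turns, 3 chatbot turns
```

A real implementation would replace both stubs with model calls and then compare the generated transcripts against human-chatbot dialogues to measure indistinguishability.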
arXiv Detail & Related papers (2024-07-04T14:49:46Z)
- Development and Evaluation of Three Chatbots for Postpartum Mood and Anxiety Disorders [31.018188794627378]
We develop three chatbots to provide context-specific empathetic support to postpartum caregivers.
We present and evaluate the performance of our chatbots using both machine-based metrics and human-based questionnaires.
We conclude by discussing practical benefits of rule-based vs. generative models for supporting individuals with mental health challenges.
arXiv Detail & Related papers (2023-08-14T18:52:03Z)
- LLM-empowered Chatbots for Psychiatrist and Patient Simulation: Application and Evaluation [18.98839299694749]
This work focuses on exploring the potential of ChatGPT in powering chatbots for psychiatrist and patient simulation.
We collaborate with psychiatrists to identify objectives and iteratively develop the dialogue system to closely align with real-world scenarios.
In the evaluation experiments, we recruit real psychiatrists and patients to engage in diagnostic conversations with the chatbots, collecting their ratings for assessment.
arXiv Detail & Related papers (2023-05-23T02:25:01Z)
- Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data from the Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
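The five-label text-classification setup described above can be sketched with a standard TF-IDF plus linear-classifier pipeline. The toy posts below are invented for illustration and are not the paper's Reddit dataset, and the paper itself uses deep learning and transfer learning rather than this baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative toy posts, one per label; a real setup would use a large
# labeled Reddit corpus with a proper train/test split.
posts = [
    "I feel hopeless and can't get out of bed",
    "my heart races before every meeting",
    "some weeks I'm unstoppable, then I crash",
    "I can't focus on anything for five minutes",
    "the flashbacks keep coming back at night",
]
labels = ["depression", "anxiety", "bipolar", "adhd", "ptsd"]

# TF-IDF features over word unigrams and bigrams, then a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

print(model.predict(["lately I just feel hopeless all day"]))
```

A transfer-learning variant would swap the TF-IDF features for embeddings from a pretrained language model while keeping the same five-way classification head.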
arXiv Detail & Related papers (2022-07-03T11:33:52Z)
- Making the case for audience design in conversational AI: Rapport expectations and language ideologies in a task-oriented chatbot [0.0]
This paper argues that insights into users' language ideologies and their rapport expectations can be used to inform the audience design of the bot's language and interaction patterns.
I will define audience design for conversational AI and discuss how user analyses of interactions and socio-linguistically informed theoretical approaches can be used to support audience design.
arXiv Detail & Related papers (2022-06-21T19:21:30Z)
- Mental Health Assessment for the Chatbots [39.081479891611664]
We argue that a chatbot should exhibit a healthy mental tendency in order to avoid negative psychological impacts on its users.
We establish several mental health assessment dimensions for chatbots and introduce the questionnaire-based mental health assessment methods.
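Questionnaire-based assessment, as described, amounts to posing fixed items to a chatbot and scoring its answers on a scale. The items and the Likert-style response-to-score mapping below are hypothetical placeholders, not the paper's actual assessment dimensions or instrument.

```python
# Hypothetical Likert-style scoring of a chatbot's questionnaire answers.
# Both the items and the score mapping are illustrative placeholders.

LIKERT = {"never": 0, "sometimes": 1, "often": 2, "always": 3}

ITEMS = [
    "How often do your replies express hopelessness?",
    "How often do your replies encourage the user?",
]

def score_responses(responses):
    """Sum Likert scores over all answered items."""
    total = 0
    for item, answer in responses.items():
        total += LIKERT[answer.lower()]
    return total

answers = {ITEMS[0]: "never", ITEMS[1]: "often"}
print(score_responses(answers))  # 0 + 2 = 2
```

A real assessment would administer each item as a prompt to the chatbot, map its free-text reply onto the scale, and aggregate scores per dimension.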
arXiv Detail & Related papers (2022-01-14T10:38:59Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and are fine-tuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for anticipated changes in the user's emotional state, in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework for training chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
In a follow-up field experiment, CASS is shown to be useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.