Development and Evaluation of Three Chatbots for Postpartum Mood and
Anxiety Disorders
- URL: http://arxiv.org/abs/2308.07407v1
- Date: Mon, 14 Aug 2023 18:52:03 GMT
- Title: Development and Evaluation of Three Chatbots for Postpartum Mood and
Anxiety Disorders
- Authors: Xuewen Yao, Miriam Mikhelson, S. Craig Watkins, Eunsol Choi, Edison
Thomaz, Kaya de Barbaro
- Abstract summary: We develop three chatbots to provide context-specific empathetic support to postpartum caregivers.
We present and evaluate the performance of our chatbots using both machine-based metrics and human-based questionnaires.
We conclude by discussing practical benefits of rule-based vs. generative models for supporting individuals with mental health challenges.
- Score: 31.018188794627378
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In collaboration with Postpartum Support International (PSI), a non-profit
organization dedicated to supporting caregivers with postpartum mood and
anxiety disorders, we developed three chatbots to provide context-specific
empathetic support to postpartum caregivers, leveraging both rule-based and
generative models. We present and evaluate the performance of our chatbots
using both machine-based metrics and human-based questionnaires. Overall, our
rule-based model achieves the best performance, with outputs that are close to
ground truth reference and contain the highest levels of empathy. Human users
prefer the rule-based chatbot over the generative chatbot for its
context-specific and human-like replies. Our generative chatbot also produced
empathetic responses and was described by human users as engaging. However,
limitations in the training dataset often result in confusing or nonsensical
responses. We conclude by discussing practical benefits of rule-based vs.
generative models for supporting individuals with mental health challenges. In
light of the recent surge of ChatGPT and BARD, we also discuss the
possibilities and pitfalls of large language models for digital mental
healthcare.
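The abstract does not name the specific machine-based metrics used, but a reference-overlap measure such as BLEU is a common choice for judging how close chatbot outputs are to ground-truth replies. The sketch below is purely illustrative and assumes NLTK is available; the reference and candidate strings are invented examples, not data from the paper.

```python
# Minimal sketch of a reference-based "machine metric" for chatbot outputs.
# BLEU with smoothing is used here only as an illustration; the paper's exact
# metrics are not specified in the abstract.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "it sounds like you are doing everything you can for your baby".split()
candidate = "it sounds like you are doing all you can for your baby".split()

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # higher means closer to the ground-truth reply
```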
Related papers
- The Typing Cure: Experiences with Large Language Model Chatbots for
Mental Health Support [35.61580610996628]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z) - From Words and Exercises to Wellness: Farsi Chatbot for Self-Attachment Technique [1.7592522344393486]
We develop a voice-capable bot in Farsi to guide users through the Self-Attachment Technique (SAT).
We collect a dataset of over 6,000 utterances and develop a novel sentiment-analysis module that classifies user sentiment into 12 classes, with accuracy above 92%.
Our platform was engaging to most users (75%); 72% felt better after the interactions, and 74% were satisfied with the SAT Teacher's performance.
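As an illustration only, a 12-class utterance sentiment classifier of the kind described could be set up as sketched below; the checkpoint (bert-base-multilingual-cased), the example utterance, and the untrained classification head are assumptions, since the abstract does not specify the authors' actual module or label set.

```python
# Sketch of a 12-class sentiment classifier for user utterances.
# The classification head below is freshly initialized and would still need
# fine-tuning on labeled utterances (e.g., the authors' 6,000-utterance dataset).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-multilingual-cased"  # placeholder; a Farsi-specific encoder could be swapped in
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=12)

inputs = tokenizer("امروز احساس بهتری دارم", return_tensors="pt")  # "I feel better today"
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))  # index into the 12 sentiment classes
print(predicted_class)
```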
arXiv Detail & Related papers (2023-10-13T19:09:31Z) - LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation [18.98839299694749]
This work focuses on exploring the potential of ChatGPT in powering chatbots for psychiatrist and patient simulation.
We collaborate with psychiatrists to identify objectives and iteratively develop the dialogue system to closely align with real-world scenarios.
In the evaluation experiments, we recruit real psychiatrists and patients to engage in diagnostic conversations with the chatbots, collecting their ratings for assessment.
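A minimal sketch of how ChatGPT might be prompted to play the psychiatrist role is given below; the system prompt, model name, and example turn are illustrative assumptions rather than the authors' actual dialogue system.

```python
# Sketch of a ChatGPT-powered "psychiatrist" turn via the OpenAI API (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are simulating a psychiatrist conducting a diagnostic interview. "
                    "Ask one empathetic, clinically relevant question at a time."},
        {"role": "user", "content": "I haven't been sleeping well since the baby arrived."},
    ],
)
print(response.choices[0].message.content)
```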
arXiv Detail & Related papers (2023-05-23T02:25:01Z) - ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text
Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
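For illustration, generating a reply with a vanilla pretrained T5 checkpoint might look like the sketch below; the t5-small checkpoint, the prompt format, and the example input are assumptions and do not reproduce the authors' fine-tuned, sentiment-aware model.

```python
# Sketch of T5-based response generation with Hugging Face Transformers.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical prompt format; EmpBot's actual input encoding is not given in the abstract.
context = "respond empathetically: I have been feeling overwhelmed since my baby was born."
inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```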
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - CheerBots: Chatbots toward Empathy and Emotionusing Reinforcement
Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - An Evaluation of Generative Pre-Training Model-based Therapy Chatbot for
Caregivers [5.2116528363639985]
Generative-based approaches, such as the OpenAI GPT models, could allow for more dynamic conversations in therapy contexts.
We built a chatbot using the GPT-2 model and fine-tuned it with 306 therapy session transcripts between family caregivers of individuals with dementia and therapists conducting Problem Solving Therapy.
Results showed that the fine-tuned model created more non-word outputs than the pre-trained model.
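A minimal fine-tuning setup of this kind, assuming the Hugging Face transformers and datasets libraries, might look as follows; the example transcript, hyperparameters, and output directory are placeholders, as the abstract does not give the authors' training configuration.

```python
# Sketch of causal-LM fine-tuning of GPT-2 on dialogue-style transcripts.
from datasets import Dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical transcript line; the paper's 306 session transcripts are not public here.
transcripts = ["Caregiver: I feel exhausted. Therapist: Let's break the problem into steps."]
dataset = Dataset.from_dict({"text": transcripts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-therapy-sketch",
                           num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```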
arXiv Detail & Related papers (2021-07-28T01:01:08Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics, demonstrating its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)