LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation
- URL: http://arxiv.org/abs/2305.13614v1
- Date: Tue, 23 May 2023 02:25:01 GMT
- Title: LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation
- Authors: Siyuan Chen, Mengyue Wu, Kenny Q. Zhu, Kunyao Lan, Zhiling Zhang,
Lyuchun Cui
- Abstract summary: This work focuses on exploring the potential of ChatGPT in powering chatbots for psychiatrist and patient simulation.
We collaborate with psychiatrists to identify objectives and iteratively develop the dialogue system to closely align with real-world scenarios.
In the evaluation experiments, we recruit real psychiatrists and patients to engage in diagnostic conversations with the chatbots, collecting their ratings for assessment.
- Score: 18.98839299694749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Empowering chatbots in the field of mental health is receiving an
increasing amount of attention, yet the development and evaluation of chatbots
in psychiatric outpatient scenarios remain underexplored. In this work, we focus
on exploring the potential of ChatGPT in powering chatbots for psychiatrist and
patient simulation. We collaborate with psychiatrists to identify objectives
and iteratively develop the dialogue system to closely align with real-world
scenarios. In the evaluation experiments, we recruit real psychiatrists and
patients to engage in diagnostic conversations with the chatbots, collecting
their ratings for assessment. Our findings demonstrate the feasibility of using
ChatGPT-powered chatbots in psychiatric scenarios and explore the impact of
prompt designs on chatbot behavior and user experience.
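The abstract describes configuring ChatGPT with role-specific prompts so that one chatbot plays the psychiatrist and another plays the patient. A minimal sketch of such a two-agent diagnostic conversation is given below, assuming the openai Python package (v1+ chat-completions interface); the system prompts, model name, opening line, and turn limit are illustrative placeholders rather than the prompt designs evaluated in the paper.

```python
# Minimal two-chatbot simulation sketch: a "psychiatrist" agent interviews a "patient"
# agent, each configured only by a system prompt. Assumes openai>=1.0 is installed and
# OPENAI_API_KEY is set; prompts and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name, not the model used in the paper

PSYCHIATRIST_PROMPT = (
    "You are a psychiatrist in an outpatient clinic. Ask one focused question per turn "
    "to assess the patient's symptoms, and keep a professional, empathetic tone."
)
PATIENT_PROMPT = (
    "You are a patient experiencing low mood, poor sleep, and loss of interest. "
    "Answer the psychiatrist's questions briefly and naturally."
)


def next_turn(system_prompt: str, history: list[dict]) -> str:
    """Generate the next utterance for the agent defined by `system_prompt`."""
    messages = [{"role": "system", "content": system_prompt}] + history
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content


def simulate(turns: int = 3) -> list[tuple[str, str]]:
    # Each agent sees the dialogue from its own perspective: its own utterances are
    # "assistant" turns and the other agent's utterances are "user" turns.
    opening = "Hello doctor, I haven't been feeling like myself lately."
    doc_view = [{"role": "user", "content": opening}]
    pat_view = [{"role": "assistant", "content": opening}]
    transcript: list[tuple[str, str]] = []
    for _ in range(turns):
        question = next_turn(PSYCHIATRIST_PROMPT, doc_view)
        doc_view.append({"role": "assistant", "content": question})
        pat_view.append({"role": "user", "content": question})

        answer = next_turn(PATIENT_PROMPT, pat_view)
        pat_view.append({"role": "assistant", "content": answer})
        doc_view.append({"role": "user", "content": answer})

        transcript.append((question, answer))
    return transcript


if __name__ == "__main__":
    for q, a in simulate():
        print(f"Psychiatrist: {q}\nPatient: {a}\n")
```

In the paper's evaluation, real psychiatrists and patients take one side of such a conversation and rate the chatbot on the other side; the loop above only illustrates how role-specific prompts shape each chatbot's behavior.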
Related papers
- Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions [0.0699049312989311]
Patients with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition.
While Large Language Models (LLMs) have the potential to make topical mental health information more accessible and engaging, their black-box nature raises concerns about ethics and safety.
arXiv Detail & Related papers (2024-10-10T09:49:24Z)
- LLM Roleplay: Simulating Human-Chatbot Interaction [52.03241266241294]
We propose a goal-oriented, persona-based method to automatically generate diverse multi-turn dialogues simulating human-chatbot interaction.
Our method can simulate human-chatbot dialogues with a high indistinguishability rate.
arXiv Detail & Related papers (2024-07-04T14:49:46Z)
- Enhancing Depression-Diagnosis-Oriented Chat with Psychological State Tracking [27.96718892323191]
Depression-diagnosis-oriented chat aims to guide patients in self-expression to collect key symptoms for depression detection.
Recent work focuses on combining task-oriented dialogue and chitchat to simulate the interview-based depression diagnosis.
No explicit framework has been explored to guide the dialogue, which results in unproductive exchanges.
arXiv Detail & Related papers (2024-03-12T07:17:01Z)
- The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support [35.61580610996628]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z)
- Development and Evaluation of Three Chatbots for Postpartum Mood and Anxiety Disorders [31.018188794627378]
We develop three chatbots to provide context-specific empathetic support to postpartum caregivers.
We present and evaluate the performance of our chatbots using both machine-based metrics and human-based questionnaires.
We conclude by discussing practical benefits of rule-based vs. generative models for supporting individuals with mental health challenges.
arXiv Detail & Related papers (2023-08-14T18:52:03Z)
- Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z)
- Mental Health Assessment for the Chatbots [39.081479891611664]
We argue that a chatbot should exhibit a healthy mental tendency in order to avoid negative psychological impact on its users.
We establish several mental health assessment dimensions for chatbots and introduce the questionnaire-based mental health assessment methods.
arXiv Detail & Related papers (2022-01-14T10:38:59Z)
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a transformer pretrained language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation. (A minimal, generic T5 generation sketch appears after this list.)
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and are fine-tuned with deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by taking into account anticipated changes in the user's emotional state so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics, demonstrating its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
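As referenced in the EmpBot entry above, the sketch below shows how a public T5 checkpoint can be loaded with Hugging Face transformers and prompted for a reply. The checkpoint name, task prefix, and decoding settings are assumptions for illustration; a generic T5 model without EmpBot's sentiment-aware fine-tuning will not produce comparably empathetic responses.

```python
# Minimal T5 generation sketch (not EmpBot's actual training or inference pipeline).
# Assumes transformers, a PyTorch backend, and sentencepiece are installed; "t5-small"
# is a generic public checkpoint and the task prefix is an illustrative placeholder.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 is text-to-text: frame the dialogue context as a single input string.
context = "respond empathetically: I have been feeling exhausted and anxious all week."
input_ids = tokenizer(context, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```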
This list is automatically generated from the titles and abstracts of the papers on this site.