Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors
- URL: http://arxiv.org/abs/2403.15482v1
- Date: Thu, 21 Mar 2024 04:23:56 GMT
- Title: Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors
- Authors: Alicja Chaszczewicz, Raj Sanjay Shah, Ryan Louie, Bruce A Arnow, Robert Kraut, Diyi Yang
- Abstract summary: Existing mechanisms for providing feedback largely rely on human supervision.
Our work aims to leverage large language models to provide contextualized and multi-level feedback to empower peer counselors.
- Score: 43.42054421125617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Realistic practice and tailored feedback are key processes for training peer counselors with clinical skills. However, existing mechanisms for providing feedback largely rely on human supervision. Peer counselors often lack ways to receive detailed feedback from experienced mentors, making it difficult for them to support the large number of people with mental health issues who use peer counseling. Our work aims to leverage large language models to provide contextualized and multi-level feedback to empower peer counselors, especially novices, at scale. To achieve this, we co-design a multi-level feedback taxonomy with a group of senior psychotherapy supervisors, and then construct a publicly available dataset with comprehensive feedback annotations for 400 emotional support conversations. We further design a self-improvement method on top of large language models to enhance the automatic generation of feedback. Via qualitative and quantitative evaluation with domain experts, we demonstrate that our method minimizes the risk of potentially harmful and low-quality feedback generation, which is desirable in such high-stakes scenarios.
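The self-improvement method is only named in the abstract; as a rough illustration of one plausible reading, the sketch below runs a generate-critique-refine loop over an LLM. The `complete` stub, prompt wording, and feedback levels are illustrative assumptions, not the paper's actual taxonomy or pipeline.

```python
# Illustrative sketch only: a generate-critique-refine loop for multi-level
# feedback. The `complete` stub, prompts, and feedback levels below are
# assumptions, not the paper's taxonomy or pipeline.

FEEDBACK_LEVELS = ["reflections", "questions", "suggestions", "empathy"]

def complete(prompt: str) -> str:
    """Stand-in for any chat-completion API call."""
    raise NotImplementedError("plug in an LLM client here")

def generate_multilevel_feedback(conversation: str) -> dict[str, str]:
    feedback = {}
    for level in FEEDBACK_LEVELS:
        draft = complete(
            f"Conversation:\n{conversation}\n\n"
            f"Give the peer counselor feedback on their use of {level}."
        )
        # Self-improvement step: the model critiques its own draft, with an
        # emphasis on catching harmful or low-quality feedback ...
        critique = complete(
            "Critique this feedback for specificity, accuracy, and any risk "
            f"of harm if given to a novice counselor:\n{draft}"
        )
        # ... then rewrites the draft to address the critique.
        feedback[level] = complete(
            f"Rewrite the feedback to address the critique.\n"
            f"Feedback:\n{draft}\n\nCritique:\n{critique}"
        )
    return feedback
```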
Related papers
- Cactus: Towards Psychological Counseling Conversations using Cognitive Behavioral Theory [24.937025825501998]
We create a multi-turn dialogue dataset that emulates real-life interactions using the goal-oriented and structured approach of Cognitive Behavioral Therapy (CBT).
We benchmark against established psychological criteria used to evaluate real counseling sessions, ensuring alignment with expert evaluations.
Experimental results demonstrate that Camel, a model trained with Cactus, outperforms other models in counseling skills, highlighting its effectiveness and potential as a counseling agent.
arXiv Detail & Related papers (2024-07-03T13:41:31Z)
- Optimizing Psychological Counseling with Instruction-Tuned Large Language Models [9.19192059750618]
This paper explores the application of large language models (LLMs) in psychological counseling.
We present a method for instruction tuning LLMs with specialized prompts to enhance their performance in providing empathetic, relevant, and supportive responses; a minimal data-formatting sketch follows this entry.
arXiv Detail & Related papers (2024-06-19T15:13:07Z)
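As a hedged sketch of what instruction tuning with specialized counseling prompts can involve, the snippet below formats supervised examples into an instruction/response JSONL file of the kind most fine-tuning APIs accept. The system prompt, field names, and example are illustrative assumptions, not the paper's setup.

```python
import json

# Illustrative only: the system prompt, field names, and example below are
# assumptions, not the paper's actual instruction-tuning data.
SYSTEM = (
    "You are a psychological counselor. Respond with empathy, stay relevant "
    "to the client's concern, and offer supportive next steps."
)

examples = [
    {"client": "I've been so anxious about work lately.",
     "counselor": "That sounds exhausting. What about work feels most "
                  "overwhelming right now?"},
]

# Write chat-style supervised examples, one JSON record per line.
with open("counseling_sft.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": ex["client"]},
                {"role": "assistant", "content": ex["counselor"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```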
- CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling [27.193022503592342]
We propose CPsyCoun, a report-based multi-turn dialogue reconstruction and evaluation framework for Chinese psychological counseling.
To fully exploit psychological counseling reports, a two-phase approach is devised to construct high-quality dialogues; a rough sketch of such a pipeline follows this entry.
A comprehensive evaluation benchmark is developed for the effective automatic evaluation of multi-turn psychological consultations.
arXiv Detail & Related papers (2024-05-26T05:18:00Z)
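The two-phase approach is only named above; one plausible reading, sketched below with hypothetical prompts and a stubbed LLM call, is to first distill a counseling report into a session outline and then expand that outline into a multi-turn dialogue. This is not CPsyCoun's actual method.

```python
# Hypothetical two-phase sketch: report -> outline -> multi-turn dialogue.
# The prompts and `complete` stub are assumptions, not CPsyCoun's method.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def reconstruct_dialogue(report: str) -> str:
    # Phase 1: extract a structured session outline from the report.
    outline = complete(
        "Summarize this counseling report as a turn-by-turn session outline "
        f"(topics, client concerns, counselor techniques):\n{report}"
    )
    # Phase 2: expand the outline into a natural multi-turn dialogue.
    return complete(
        "Write a realistic multi-turn counselor-client dialogue that follows "
        f"this outline:\n{outline}"
    )
```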
- K-ESConv: Knowledge Injection for Emotional Support Dialogue Systems via Prompt Learning [83.19215082550163]
We propose K-ESConv, a novel prompt-learning-based knowledge injection method for emotional support dialogue systems.
We evaluate our model on the emotional support dataset ESConv, where the model retrieves and incorporates knowledge from an external professional emotional Q&A forum (a generic retrieve-then-inject sketch follows this entry).
arXiv Detail & Related papers (2023-12-16T08:10:10Z)
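Knowledge injection is summarized in one line above; the sketch below shows the general retrieve-then-inject pattern over forum Q&A pairs using embedding similarity. The `embed` stub and prompt template are assumptions, not K-ESConv's architecture.

```python
# General retrieve-then-inject sketch, not K-ESConv's actual architecture.
# Assumes `embed` maps text to a vector; any sentence-embedding model works.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in a sentence-embedding model here")

def retrieve(query: str, qa_pairs: list[tuple[str, str]], k: int = 3):
    """Return the k forum Q&A pairs most similar to the dialogue context
    (dot-product similarity; normalize the vectors for cosine)."""
    q = embed(query)
    scored = sorted(
        qa_pairs,
        key=lambda qa: float(np.dot(q, embed(qa[0]))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(context: str, qa_pairs: list[tuple[str, str]]) -> str:
    """Inject the retrieved professional knowledge into the prompt."""
    knowledge = "\n".join(
        f"Q: {q}\nA: {a}" for q, a in retrieve(context, qa_pairs)
    )
    return (
        f"Relevant professional Q&A:\n{knowledge}\n\n"
        f"Dialogue so far:\n{context}\n\nSupportive response:"
    )
```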
- Enhancing Psychological Counseling with Large Language Model: A Multifaceted Decision-Support System for Non-Professionals [31.01304974679576]
This paper introduces a novel model built on large language models to assist non-professionals in providing psychological interventions in online user discourse.
A comprehensive study involving ten professional psychological counselors of varying expertise was conducted to evaluate the system.
The findings affirm that the system can analyze patients' issues with relative accuracy and offer professional-level strategy recommendations.
arXiv Detail & Related papers (2023-08-29T10:20:53Z)
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate [57.71597869337909]
We build a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models.
Our analysis shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments; a minimal debate-loop sketch follows this entry.
arXiv Detail & Related papers (2023-08-14T15:13:04Z)
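Multi-agent debate here means several referee agents with different personas discussing a response before scoring it. The minimal sketch below conveys the pattern; the personas, round count, prompts, and `complete` stub are assumptions, not ChatEval's exact protocol.

```python
# Minimal multi-agent debate sketch; personas, rounds, and prompts are
# illustrative assumptions rather than ChatEval's exact protocol.

PERSONAS = ["strict critic", "empathetic reader", "fact checker"]

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def debate_and_score(question: str, answer: str, rounds: int = 2) -> str:
    transcript = ""
    for _ in range(rounds):
        for persona in PERSONAS:
            # Each referee sees the running discussion and adds a turn.
            turn = complete(
                f"You are a {persona} on a referee panel.\n"
                f"Question: {question}\nAnswer under review: {answer}\n"
                f"Discussion so far:\n{transcript}\nGive your assessment."
            )
            transcript += f"\n[{persona}] {turn}"
    # After the debate, aggregate the discussion into a final score.
    return complete(
        f"Given this panel discussion, output a 1-10 score:\n{transcript}"
    )
```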
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time; a generic feedback-logging sketch follows this entry.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
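As an illustration of how thumbs-up/down user ratings on extracted answers can become a training signal, the sketch below logs interactions and converts them into reward-weighted examples. This is a generic bandit-style reweighting scheme, not the paper's exact algorithm.

```python
# Generic sketch of logging user ratings on extracted answers as a training
# signal; a simple reward-weighting scheme, not the paper's exact algorithm.
import json

def log_interaction(path, question, context, span, rating):
    """Append one interaction; rating is +1 (helpful) or -1 (not helpful)."""
    with open(path, "a") as f:
        f.write(json.dumps({
            "question": question, "context": context,
            "span": span, "reward": rating,
        }) + "\n")

def to_weighted_examples(path):
    """Turn logged feedback into (example, weight) pairs: a trainer can scale
    each example's span log-likelihood loss by its weight, reinforcing
    positively rated spans and down-weighting negatively rated ones."""
    with open(path) as f:
        return [(r, float(r["reward"])) for r in map(json.loads, f)]
```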
- Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback [40.065280357381035]
CARE is an interactive AI-based tool to empower peer counselors through automatic suggestion generation.
During the practical training stage, CARE helps diagnose which specific counseling strategies are most suitable in the given context.
CARE especially helps novice counselors respond better in challenging situations; a rough diagnose-then-suggest sketch follows this entry.
arXiv Detail & Related papers (2023-05-15T19:48:59Z)
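The diagnose-then-suggest pattern described above can be sketched as two chained LLM calls. The strategy labels, prompts, and `complete` stub below are assumptions for illustration, not CARE's actual pipeline.

```python
# Rough diagnose-then-suggest sketch; strategy labels and prompts are
# assumptions, not CARE's actual pipeline.

STRATEGIES = ["reflection", "open question", "validation", "self-disclosure"]

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def suggest_response(conversation: str) -> tuple[str, str]:
    # Step 1: diagnose which counseling strategy fits the current context.
    strategy = complete(
        f"Conversation:\n{conversation}\n\n"
        f"Which of these strategies fits best next: {', '.join(STRATEGIES)}?"
    )
    # Step 2: generate a candidate response applying the chosen strategy.
    suggestion = complete(
        f"Conversation:\n{conversation}\n\n"
        f"Write a counselor response using the strategy: {strategy}"
    )
    return strategy, suggestion
```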
- Perspectives on Incorporating Expert Feedback into Model Updates [46.99664744930785]
We devise a taxonomy to match expert feedback types with practitioner updates.
A practitioner may receive feedback from an expert at the observation- or domain-level.
We review existing work from ML and human-computer interaction to describe this feedback-update taxonomy.
arXiv Detail & Related papers (2022-05-13T21:46:55Z)
- Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of agents learning from trainers' facial expressions by interpreting them as evaluative feedback.
With a purpose-designed CNN-RNN model, our analysis shows that instructing trainers to use facial expressions, combined with competition, can improve the accuracy of estimating positive and negative feedback.
Our results from a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible (a minimal model sketch follows).
arXiv Detail & Related papers (2020-01-23T17:50:57Z)
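A minimal PyTorch sketch of a CNN-RNN that maps a sequence of face frames to a positive/negative feedback estimate; the layer sizes and input shape are assumptions, not the paper's model details.

```python
# Minimal CNN-RNN sketch for estimating evaluative feedback from face-frame
# sequences. Layer sizes and input shape are assumptions, not the paper's.
import torch
import torch.nn as nn

class FacialFeedbackNet(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # CNN encodes each frame independently into a feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # RNN aggregates per-frame features over time.
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # positive vs. negative feedback

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, height, width) grayscale face crops
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, last = self.rnn(feats)
        return self.head(last[-1])

model = FacialFeedbackNet()
logits = model(torch.randn(2, 8, 1, 48, 48))  # 2 clips, 8 frames each
```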