A Dual-Prompting for Interpretable Mental Health Language Models
- URL: http://arxiv.org/abs/2402.14854v1
- Date: Tue, 20 Feb 2024 06:18:02 GMT
- Title: A Dual-Prompting for Interpretable Mental Health Language Models
- Authors: Hyolim Jeon, Dongje Yoo, Daeun Lee, Sejung Son, Seungbae Kim, Jinyoung Han
- Abstract summary: The CLPsych 2024 Shared Task aims to enhance the interpretability of Large Language Models (LLMs)
We propose a dual-prompting approach: (i) knowledge-aware evidence extraction by leveraging the expert identity and a suicide dictionary with a mental health-specific LLM; and (ii) evidence summarization by employing an LLM-based consistency evaluator.
- Score: 11.33857985668663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the increasing demand for AI-based mental health monitoring tools,
their practical utility for clinicians is limited by the lack of
interpretability. The CLPsych 2024 Shared Task (Chim et al., 2024) aims to
enhance the interpretability of Large Language Models (LLMs), particularly in
mental health analysis, by providing evidence of suicidality through linguistic
content. We propose a dual-prompting approach: (i) Knowledge-aware evidence
extraction by leveraging the expert identity and a suicide dictionary with a
mental health-specific LLM; and (ii) Evidence summarization by employing an
LLM-based consistency evaluator. Comprehensive experiments demonstrate the
effectiveness of combining domain-specific information, revealing performance
improvements and the approach's potential to aid clinicians in assessing mental
state progression.
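As a rough illustration of the dual-prompting pipeline the abstract describes, the Python sketch below strings the two stages together. Everything here is an assumption for illustration: the `llm` callable stands in for any completion backend, the tiny `SUICIDE_LEXICON` stands in for the paper's suicide dictionary, and the prompt wording and 1-to-5 consistency scale are invented, not taken from the paper.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # placeholder for any text-completion backend

# Hypothetical stand-in for the suicide dictionary used in the paper;
# the real lexicon is not reproduced here.
SUICIDE_LEXICON = ["hopeless", "burden", "can't go on", "end it all"]

def extract_evidence(llm: LLM, post: str) -> str:
    """Stage (i): knowledge-aware evidence extraction, combining an
    expert persona with dictionary cue words found in the post."""
    cues = [w for w in SUICIDE_LEXICON if w in post.lower()]
    prompt = (
        "You are a clinical psychologist. Quote the spans of the post "
        f"that indicate suicidal risk. Dictionary cues found: {cues}.\n\n"
        f"Post: {post}"
    )
    return llm(prompt)

def summarize_with_consistency(llm: LLM, evidence: str, n: int = 3) -> str:
    """Stage (ii): sample several summaries and keep the one an
    LLM-based evaluator rates as most consistent with the evidence."""
    candidates: List[str] = [
        llm(f"Summarize this evidence of suicidal risk:\n{evidence}")
        for _ in range(n)
    ]
    scores = []
    for cand in candidates:
        verdict = llm(
            "On a scale of 1 to 5, how consistent is this summary with "
            f"the evidence?\nEvidence: {evidence}\nSummary: {cand}\n"
            "Answer with a single number."
        )
        digits = [ch for ch in verdict if ch.isdigit()]
        scores.append(int(digits[0]) if digits else 0)
    return candidates[scores.index(max(scores))]
```

Any backend can be dropped in as `llm`; the consistency check simply keeps whichever candidate summary the evaluator scores highest against the extracted evidence.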
Related papers
- SouLLMate: An Application Enhancing Diverse Mental Health Support with Adaptive LLMs, Prompt Engineering, and RAG Techniques [9.146311285410631]
Mental health issues significantly impact individuals' daily lives, yet many do not receive the help they need even with available online resources.
This study aims to provide diverse, accessible, stigma-free, personalized, and real-time mental health support through cutting-edge AI technologies.
arXiv Detail & Related papers (2024-10-17T22:04:32Z)
- CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy [67.23830698947637]
We propose a new benchmark, CBT-BENCH, for the systematic evaluation of cognitive behavioral therapy (CBT) assistance.
CBT-BENCH includes three levels of tasks: I: basic CBT knowledge acquisition, tested with multiple-choice questions; II: cognitive model understanding, tested with cognitive distortion classification, primary core belief classification, and fine-grained core belief classification; III: therapeutic response generation, tested by generating responses to patient speech in CBT therapy sessions.
Experimental results indicate that while LLMs perform well in reciting CBT knowledge, they fall short in complex real-world scenarios.
arXiv Detail & Related papers (2024-10-17T04:52:57Z)
- SouLLMate: An Adaptive LLM-Driven System for Advanced Mental Health Support and Assessment, Based on a Systematic Application Survey [9.146311285410631]
Mental health issues significantly impact individuals' daily lives, yet many do not receive the help they need even with available online resources.
This study aims to provide accessible, stigma-free, personalized, and real-time mental health support through cutting-edge AI technologies.
arXiv Detail & Related papers (2024-10-06T17:11:29Z)
- LLM Questionnaire Completion for Automatic Psychiatric Assessment [49.1574468325115]
We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
arXiv Detail & Related papers (2024-06-09T09:03:11Z)
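A minimal sketch of this idea, assuming a generic `llm` callable: each questionnaire item is answered from the transcript and coded as a 0-3 feature. Only the first two standard PHQ-8 item stems are shown; the prompt wording and the scoring heuristic are illustrative, not the paper's.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # placeholder for any text-completion backend

# First two of the eight standard PHQ-8 item stems (items 3-8 omitted
# here for brevity); each item is scored 0-3.
PHQ8_ITEMS = [
    "Little interest or pleasure in doing things",
    "Feeling down, depressed, or hopeless",
]

def questionnaire_features(llm: LLM, transcript: str,
                           items: List[str] = PHQ8_ITEMS) -> List[int]:
    """Fill each item from the interview; the coded answers form a
    numeric feature vector for a downstream predictor."""
    features = []
    for item in items:
        reply = llm(
            "Based only on this interview, how often does the patient "
            f"report: '{item}'? Answer with one number from 0 (not at "
            f"all) to 3 (nearly every day).\n\nInterview: {transcript}"
        )
        digits = [ch for ch in reply if ch.isdigit()]
        features.append(min(int(digits[0]), 3) if digits else 0)
    return features
```

The resulting vectors can then be fed to any off-the-shelf classifier or regressor to predict the standardized PHQ-8 and PCL-C measures, which matches the pipeline the summary outlines.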
arXiv Detail & Related papers (2024-06-09T09:03:11Z) - Automating PTSD Diagnostics in Clinical Interviews: Leveraging Large Language Models for Trauma Assessments [7.219693607724636]
We aim to tackle this shortage by integrating a customized large language model (LLM) into the workflow.
We collect 411 clinician-administered diagnostic interviews and devise a novel approach to obtain high-quality data.
We build a comprehensive framework to automate PTSD diagnostic assessments based on interview contents.
arXiv Detail & Related papers (2024-05-18T05:04:18Z) - Large Language Models in Mental Health Care: a Scoping Review [28.635427491110484]
The integration of large language models (LLMs) in mental health care is an emerging field.
There is a need to systematically review the application outcomes and delineate the advantages and limitations in clinical settings.
This review aims to provide a comprehensive overview of the use of LLMs in mental health care, assessing their efficacy, challenges, and potential for future applications.
arXiv Detail & Related papers (2024-01-01T17:35:52Z)
- Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting [82.64015366154884]
We study the task of cognitive distortion detection and propose the Diagnosis of Thought (DoT) prompting.
DoT performs diagnosis on the patient's speech via three stages: subjectivity assessment to separate the facts and the thoughts; contrastive reasoning to elicit the reasoning processes supporting and contradicting the thoughts; and schema analysis to summarize the cognition schemas.
Experiments demonstrate that DoT obtains significant improvements over ChatGPT for cognitive distortion detection, while generating high-quality rationales approved by human experts.
arXiv Detail & Related papers (2023-10-11T02:47:21Z)
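The three DoT stages named above map naturally onto three chained prompts. The sketch below is a guess at that structure; the prompt wording is invented here, not quoted from the paper.

```python
from typing import Callable, Dict

LLM = Callable[[str], str]  # placeholder for any text-completion backend

def diagnosis_of_thought(llm: LLM, speech: str) -> Dict[str, str]:
    """Chain the three DoT stages: subjectivity assessment,
    contrastive reasoning, then schema analysis."""
    subjectivity = llm(
        "Separate the objective facts from the subjective thoughts in "
        f"this patient statement:\n{speech}"
    )
    contrastive = llm(
        "List the reasoning that supports the thoughts, then the "
        "reasoning that contradicts them.\n"
        f"Statement: {speech}\nFacts vs. thoughts: {subjectivity}"
    )
    schema = llm(
        "From the analysis below, summarize the underlying cognition "
        "schema and name any cognitive distortion it suggests.\n"
        f"{subjectivity}\n{contrastive}"
    )
    return {
        "subjectivity_assessment": subjectivity,
        "contrastive_reasoning": contrastive,
        "schema_analysis": schema,
    }
```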
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
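The title's self-reflection idea is commonly implemented as a generate-critique-revise loop. The sketch below shows that generic pattern under an assumed `llm` callable; it should not be read as the paper's exact algorithm.

```python
from typing import Callable

LLM = Callable[[str], str]  # placeholder for any text-completion backend

def answer_with_reflection(llm: LLM, question: str, rounds: int = 2) -> str:
    """Generic generate-critique-revise loop: draft an answer, ask the
    model to flag unsupported claims, and revise until it replies OK."""
    answer = llm(f"Answer this medical question:\n{question}")
    for _ in range(rounds):
        critique = llm(
            "Review the answer for factual errors or claims not "
            "supported by medical knowledge. Reply 'OK' if there are "
            f"none.\nQuestion: {question}\nAnswer: {answer}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model found nothing to fix
        answer = llm(
            "Revise the answer to fix the issues listed.\n"
            f"Question: {question}\nAnswer: {answer}\nIssues: {critique}"
        )
    return answer
```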
- Process Knowledge-infused Learning for Clinician-friendly Explanations [14.405002816231477]
Language models can assess mental health using social media data.
However, they do not compare posts against clinicians' diagnostic processes.
This makes it challenging to explain language model outputs using concepts that clinicians can understand.
arXiv Detail & Related papers (2023-06-16T13:08:17Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
However, they still struggle with accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
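One simple way to realize the provenance idea the summary describes is to ask the model, for every extracted item, to quote the source sentence that supports it and to drop items it cannot ground. The sketch below shows that pattern; the prompt wording and the NONE convention are assumptions, not the paper's protocol.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # placeholder for any text-completion backend

def extract_with_verification(llm: LLM, note: str, entity: str) -> List[str]:
    """Extract entities from a clinical note, then keep only items the
    model can back with a verbatim span of the note (provenance check)."""
    raw = llm(
        f"List every {entity} mentioned in this clinical note, one per "
        f"line:\n{note}"
    )
    verified = []
    for item in filter(None, (line.strip() for line in raw.splitlines())):
        evidence = llm(
            f"Quote the exact sentence of the note that mentions "
            f"'{item}', or reply NONE if there is no such sentence.\n"
            f"Note: {note}"
        )
        if "NONE" not in evidence.upper():
            verified.append(item)  # grounded in the source text
    return verified
```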
- Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation [48.87254340298189]
We construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches.
We propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-04T06:09:02Z)