A Dual-Prompting for Interpretable Mental Health Language Models
- URL: http://arxiv.org/abs/2402.14854v1
- Date: Tue, 20 Feb 2024 06:18:02 GMT
- Title: A Dual-Prompting for Interpretable Mental Health Language Models
- Authors: Hyolim Jeon, Dongje Yoo, Daeun Lee, Sejung Son, Seungbae Kim, Jinyoung
Han
- Abstract summary: The CLPsych 2024 Shared Task aims to enhance the interpretability of Large Language Models (LLMs)
We propose a dual-prompting approach: (i) Knowledge-aware evidence extraction by leveraging the expert identity and a suicide dictionary with a mental health-specific LLM; and (ii) summarization by employing an LLM-based consistency evaluator.
- Score: 11.33857985668663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the increasing demand for AI-based mental health monitoring tools,
their practical utility for clinicians is limited by the lack of
interpretability. The CLPsych 2024 Shared Task (Chim et al., 2024) aims to
enhance the interpretability of Large Language Models (LLMs), particularly in
mental health analysis, by providing evidence of suicidality through linguistic
content. We propose a dual-prompting approach: (i) Knowledge-aware evidence
extraction by leveraging the expert identity and a suicide dictionary with a
mental health-specific LLM; and (ii) Evidence summarization by employing an
LLM-based consistency evaluator. Comprehensive experiments demonstrate the
effectiveness of combining domain-specific information, revealing performance
improvements and the approach's potential to aid clinicians in assessing mental
state progression.
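The two prompting stages described above can be sketched as plain prompt-construction code. This is a minimal illustration, not the authors' templates: the prompt wording, the `most_consistent` lexical-overlap heuristic (standing in for the paper's LLM-based consistency evaluator), and all function names are assumptions.

```python
def build_evidence_prompt(post: str, dictionary_terms: list[str]) -> str:
    """Stage (i): knowledge-aware evidence extraction.
    Assigns an expert identity and injects suicide-dictionary terms."""
    terms = ", ".join(dictionary_terms)
    return (
        "You are a mental health professional.\n"
        f"Known risk-related terms: {terms}.\n"
        "Extract spans from the post that evidence suicidality.\n"
        f"Post: {post}"
    )

def build_summary_prompt(evidence_spans: list[str]) -> str:
    """Stage (ii): summarize the extracted evidence."""
    joined = "\n".join(f"- {s}" for s in evidence_spans)
    return (
        "Summarize the following evidence of the poster's "
        f"mental state:\n{joined}"
    )

def most_consistent(candidates: list[str], evidence: list[str]) -> str:
    """Consistency evaluation, approximated here by lexical overlap
    between each candidate summary and the evidence spans; the paper
    instead uses an LLM as the evaluator."""
    vocab = set(" ".join(evidence).lower().split())
    return max(candidates, key=lambda c: len(set(c.lower().split()) & vocab))
```

In use, stage (i)'s prompt would go to a mental-health-specific LLM, its extracted spans would feed stage (ii), and several sampled summaries would be ranked by the consistency evaluator.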
Related papers
- LLM Questionnaire Completion for Automatic Psychiatric Assessment [49.1574468325115]
We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
arXiv Detail & Related papers (2024-06-09T09:03:11Z) - Automating PTSD Diagnostics in Clinical Interviews: Leveraging Large Language Models for Trauma Assessments [7.219693607724636]
We aim to tackle this shortage by integrating a customized large language model (LLM) into the workflow.
We collect 411 clinician-administered diagnostic interviews and devise a novel approach to obtain high-quality data.
We build a comprehensive framework to automate PTSD diagnostic assessments based on interview contents.
arXiv Detail & Related papers (2024-05-18T05:04:18Z) - LLM Agents for Psychology: A Study on Gamified Assessments [71.08193163042107]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z) - Large Language Model for Mental Health: A Systematic Review [2.9429776664692526]
Large language models (LLMs) have attracted significant attention for potential applications in digital health.
This systematic review focuses on their strengths and limitations in early screening, digital interventions, and clinical applications.
arXiv Detail & Related papers (2024-02-19T17:58:41Z) - Large Language Models in Mental Health Care: a Scoping Review [29.247717845238228]
The growing use of large language models (LLMs) stimulates a need for a comprehensive review of their applications and outcomes in mental health care.
This scoping review aims to critically analyze the existing development and applications of LLMs in mental health care.
Key challenges include data availability and reliability, nuanced handling of mental states, and effective evaluation methods.
arXiv Detail & Related papers (2024-01-01T17:35:52Z) - Evaluating the Efficacy of Interactive Language Therapy Based on LLM for
High-Functioning Autistic Adolescent Psychological Counseling [1.1780706927049207]
This study investigates the efficacy of Large Language Models (LLMs) in interactive language therapy for high-functioning autistic adolescents.
LLMs present a novel opportunity to augment traditional psychological counseling methods.
arXiv Detail & Related papers (2023-11-12T07:55:39Z) - Empowering Psychotherapy with Large Language Models: Cognitive
Distortion Detection through Diagnosis of Thought Prompting [82.64015366154884]
We study the task of cognitive distortion detection and propose the Diagnosis of Thought (DoT) prompting.
DoT performs diagnosis on the patient's speech via three stages: subjectivity assessment to separate the facts and the thoughts; contrastive reasoning to elicit the reasoning processes supporting and contradicting the thoughts; and schema analysis to summarize the cognition schemas.
Experiments demonstrate that DoT obtains significant improvements over ChatGPT for cognitive distortion detection, while generating high-quality rationales approved by human experts.
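The three DoT stages can be sketched as a set of stage prompts. The wording below is illustrative only, not the authors' exact templates; the function name and prompt text are assumptions.

```python
def dot_prompts(patient_speech: str) -> dict[str, str]:
    """Build the three Diagnosis-of-Thought stage prompts:
    subjectivity assessment, contrastive reasoning, schema analysis."""
    return {
        "subjectivity": (
            "Separate objective facts from the speaker's subjective "
            f"thoughts in this statement:\n{patient_speech}"
        ),
        "contrastive": (
            "List reasoning that supports the thought, then reasoning "
            f"that contradicts it:\n{patient_speech}"
        ),
        "schema": (
            "Given the facts, thoughts, and contrastive reasoning, "
            "summarize the underlying cognition schema and name any "
            f"cognitive distortion:\n{patient_speech}"
        ),
    }
```

Each stage's prompt would be sent to the LLM in turn, with earlier stages' outputs appended as context for the later ones.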
arXiv Detail & Related papers (2023-10-11T02:47:21Z) - Towards Mitigating Hallucination in Large Language Models via
Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z) - Process Knowledge-infused Learning for Clinician-friendly Explanations [14.405002816231477]
Language models can assess mental health from social media data, but they do not compare posts against clinicians' diagnostic processes.
This makes it challenging to explain language model outputs using concepts that clinicians can understand.
arXiv Detail & Related papers (2023-06-16T13:08:17Z) - Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
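The self-verification idea, asking the model to provide provenance for its own extraction and rejecting answers it cannot ground, can be sketched as a small wrapper. The `extract_fn`/`verify_fn` callables stand in for the two LLM calls and are assumptions, as is the substring check used here for grounding.

```python
def self_verify(extract_fn, verify_fn, note: str, field: str):
    """Extract a field from a clinical note, then ask for the
    supporting span (provenance) and keep the answer only if that
    span actually appears in the note."""
    value = extract_fn(note, field)
    evidence = verify_fn(note, field, value)  # provenance span, or None
    if evidence is None or evidence not in note:
        return None, None  # reject an extraction the model cannot ground
    return value, evidence
```

The design choice is that an unverifiable extraction is dropped rather than passed through, trading recall for the accuracy and interpretability the abstract highlights.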
arXiv Detail & Related papers (2023-05-30T22:05:11Z) - Benchmarking Automated Clinical Language Simplification: Dataset,
Algorithm, and Evaluation [48.87254340298189]
We construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches.
We propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-04T06:09:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.