Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation
- URL: http://arxiv.org/abs/2503.01315v1
- Date: Mon, 03 Mar 2025 08:59:34 GMT
- Title: Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation
- Authors: Linhai Zhang, Ziyang Gao, Deyu Zhou, Yulan He
- Abstract summary: Depression is a widespread mental health disorder, and clinical interviews are the gold standard for assessment. Current systems mainly employ black-box neural networks, which lack interpretability. We propose RED, a Retrieval-augmented generation framework for Explainable depression Detection.
- Score: 32.163466666512996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depression is a widespread mental health disorder, and clinical interviews are the gold standard for assessment. However, their reliance on scarce professionals highlights the need for automated detection. Current systems mainly employ black-box neural networks, which lack the interpretability that is crucial in mental health contexts. Some attempts to improve interpretability use post-hoc LLM generation but suffer from hallucination. To address these limitations, we propose RED, a Retrieval-augmented generation framework for Explainable depression Detection. RED retrieves evidence from clinical interview transcripts, providing explanations for its predictions. Traditional query-based retrieval systems use a one-size-fits-all approach, which may not be optimal for depression detection, as user backgrounds and situations vary. We introduce a personalized query generation module that combines a standard query with user-specific background inferred by LLMs, tailoring retrieval to individual contexts. Additionally, to enhance LLM performance in social intelligence, we augment LLMs by retrieving relevant knowledge from a social intelligence datastore using an event-centric retriever. Experimental results on a real-world benchmark demonstrate RED's effectiveness compared to neural-network and LLM-based baselines.
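To make the described pipeline concrete, the following is a minimal Python sketch of a RED-style system. It is an illustration under stated assumptions, not the authors' implementation: `call_llm` and `embed` are placeholders for any chat-completion and sentence-embedding API, the function names are invented here, and the social-intelligence datastore is reduced to a comment.

```python
from typing import Callable, List
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def personalize_query(base_query: str, transcript: str,
                      call_llm: Callable[[str], str]) -> str:
    # Step 1: have the LLM infer the interviewee's background and situation.
    background = call_llm(
        "Summarize this interviewee's background and current situation:\n"
        + transcript)
    # Step 2: fuse the generic screening query with that inferred background.
    return f"{base_query} Context about this person: {background}"

def retrieve_evidence(query: str, utterances: List[str],
                      embed: Callable[[str], np.ndarray],
                      k: int = 3) -> List[str]:
    # Rank transcript utterances by similarity to the personalized query.
    q = embed(query)
    return sorted(utterances, key=lambda u: -cosine(q, embed(u)))[:k]

def red_predict(transcript: str, base_query: str,
                embed: Callable[[str], np.ndarray],
                call_llm: Callable[[str], str]):
    utterances = [u for u in transcript.splitlines() if u.strip()]
    query = personalize_query(base_query, transcript, call_llm)
    evidence = retrieve_evidence(query, utterances, embed)
    # The full framework additionally retrieves social-intelligence knowledge
    # with an event-centric retriever; that datastore is omitted here.
    answer = call_llm(
        "Evidence from a clinical interview:\n" + "\n".join(evidence)
        + "\nDoes the interviewee show signs of depression? "
          "Answer yes/no and justify using the evidence.")
    return answer, evidence  # the evidence doubles as the explanation
```

The design point carried over from the abstract is that retrieval is conditioned on an LLM-inferred user background rather than on a fixed query, and the retrieved utterances themselves serve as the explanation.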
Related papers
- MLlm-DR: Towards Explainable Depression Recognition with MultiModal Large Language Models [28.873959594226605]
Automated depression diagnosis aims to analyze multimodal information from interview videos to predict participants' depression scores. Previous studies often lack clear explanations of how these scores were determined, limiting their adoption in clinical practice. We propose a novel multimodal large language model (MLlm-DR) that can understand multimodal information inputs and supports explainable depression diagnosis.
arXiv Detail & Related papers (2025-07-08T01:56:39Z)
- Interpretable Depression Detection from Social Media Text Using LLM-Derived Embeddings [0.44865923696339866]
Accurate and interpretable detection of depressive language in social media is useful for early intervention in mental health conditions. We investigate the performance of large language models (LLMs) and traditional machine learning classifiers across three classification tasks involving social media data.
arXiv Detail & Related papers (2025-06-07T01:19:45Z)
- Large Language Models for Depression Recognition in Spoken Language Integrating Psychological Knowledge [38.52606718906513]
Depression is a growing concern gaining attention in both public discourse and AI research. We present, to the best of our knowledge, the first application of large language models to multimodal depression detection. We propose a novel strategy for incorporating psychological knowledge into LLMs to enhance diagnostic performance.
arXiv Detail & Related papers (2025-05-28T20:53:05Z)
- "It Listens Better Than My Therapist": Exploring Social Media Discourse on LLMs as Mental Health Tool [1.223779595809275]
Large language models (LLMs) offer new capabilities in conversational fluency, empathy simulation, and availability.
This study explores how users engage with LLMs as mental health tools by analyzing over 10,000 TikTok comments.
Results show that nearly 20% of comments reflect personal use, with these users expressing overwhelmingly positive attitudes.
arXiv Detail & Related papers (2025-04-14T17:37:32Z)
- Medical large language models are easily distracted [0.8211696054238238]
Large language models (LLMs) have the potential to transform medicine, but real-world clinical scenarios contain extraneous information that can hinder performance.
We developed MedDistractQA, a benchmark using USMLE-style questions embedded with simulated real-world distractions.
arXiv Detail & Related papers (2025-04-01T21:34:01Z)
- LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment [75.44934940580112]
This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment. We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews. Our approach, tested on 236 real-world interviews, demonstrates strong correlations with clinician assessments.
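A hypothetical zero-shot prompt in the spirit of this setup is sketched below; the actual cue wording and rubric come from the paper and the MADRS manual, and `call_llm` is again a placeholder for any chat-completion API.

```python
# Illustrative zero-shot scoring prompt; the real cues are the paper's own.
MADRS_ITEM_PROMPT = """You are a clinical rater. Based only on the interview
transcript below, rate the item "{item}" on the MADRS 0-6 scale
(0 = no symptom, 6 = most severe). Reply with a single integer.

Transcript:
{transcript}
"""

def score_item(call_llm, transcript: str, item: str) -> int:
    reply = call_llm(MADRS_ITEM_PROMPT.format(item=item, transcript=transcript))
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0  # default to 0 if unparseable
```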
arXiv Detail & Related papers (2025-01-07T08:49:04Z)
- LLM Questionnaire Completion for Automatic Psychiatric Assessment [49.1574468325115]
We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
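A minimal sketch of this questionnaire-as-features idea follows, assuming a generic `call_llm` helper and an abbreviated item list; the choice of ridge regression is an assumption for illustration, not the paper's model.

```python
# Sketch: an LLM answers fixed questionnaire items from an interview, the
# coded answers become a feature vector, and a simple regressor maps the
# features to a PHQ-8 total.
from sklearn.linear_model import Ridge
import numpy as np

ITEMS = ["Little interest or pleasure in doing things?",
         "Feeling down, depressed, or hopeless?"]  # abbreviated item set

def interview_to_features(call_llm, transcript: str) -> np.ndarray:
    answers = []
    for item in ITEMS:
        reply = call_llm(f"From this interview, answer 0-3 (0=not at all, "
                         f"3=nearly every day): {item}\n{transcript}")
        answers.append(float(next((c for c in reply if c.isdigit()), "0")))
    return np.array(answers)

# Training: X = stacked feature vectors, y = clinician PHQ-8 totals, e.g.
# model = Ridge().fit(X, y); model.predict(interview_to_features(...)[None])
```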
arXiv Detail & Related papers (2024-06-09T09:03:11Z)
- Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus [99.33091772494751]
Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields.
LLMs are prone to hallucinating untruthful or nonsensical outputs that fail to meet user expectations.
We propose a novel reference-free, uncertainty-based method for detecting hallucinations in LLMs.
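A simplified version of such a reference-free, uncertainty-based score is sketched below: it averages the negative log-likelihood of the generated tokens under the model, down-weighting uninformative stopwords as a crude stand-in for the paper's stronger keyword focus.

```python
# Simplified sketch; the stopword heuristic is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

STOPWORDS = {"the", "a", "an", "of", "to", "is", "and", "in"}

def hallucination_score(model, tok, prompt: str, answer: str) -> float:
    full = tok(prompt + answer, return_tensors="pt")
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(**full).logits
    ids = full.input_ids[0]
    # Log-probability of each token given its prefix.
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    score, weight = 0.0, 0.0
    for pos in range(n_prompt, ids.shape[0]):
        tok_lp = logp[pos - 1, ids[pos]].item()
        # Weight informative tokens more heavily than stopwords.
        w = 0.2 if tok.decode(ids[pos]).strip().lower() in STOPWORDS else 1.0
        score += -tok_lp * w
        weight += w
    return score / max(weight, 1e-9)  # higher = more likely hallucinated
```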
arXiv Detail & Related papers (2023-11-22T08:39:17Z)
- Rethinking Large Language Models in Mental Health Applications [42.21805311812548]
Large Language Models (LLMs) have become valuable assets in mental health.
This paper offers a perspective on using LLMs in mental health applications.
arXiv Detail & Related papers (2023-10-20T17:05:27Z)
- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection [74.51523859064802]
We propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms.
arXiv Detail & Related papers (2023-10-20T17:05:27Z) - Self-RAG: Learning to Retrieve, Generate, and Critique through
Self-Reflection [74.51523859064802]
We introduce a new framework called Self-Reflective Retrieval-Augmented Generation (Self-RAG).
Self-RAG enhances an LM's quality and factuality through retrieval and self-reflection.
It significantly outperforms state-of-the-art LLMs and retrieval-augmented models on a diverse set of tasks.
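The sketch below is a schematic simplification of this retrieve-generate-critique loop; real Self-RAG trains the LM to emit special reflection tokens, whereas here plain prompts and a generic `retrieve` function stand in for them.

```python
# Schematic simplification: decide whether to retrieve, generate per passage,
# self-critique each draft, and keep the best-supported candidate.
def self_rag(call_llm, retrieve, question: str) -> str:
    need = call_llm(f"Would retrieval help answer this? yes/no: {question}")
    passages = retrieve(question) if "yes" in need.lower() else [""]
    candidates = []
    for p in passages:
        draft = call_llm(f"Passage: {p}\nQuestion: {question}\nAnswer:")
        critique = call_llm(
            f"Rate 1-5 how well this answer is supported by the passage.\n"
            f"Passage: {p}\nAnswer: {draft}\nScore:")
        digits = [c for c in critique if c.isdigit()]
        candidates.append((int(digits[0]) if digits else 0, draft))
    return max(candidates)[1]  # answer with the highest self-critique score
```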
arXiv Detail & Related papers (2023-10-17T18:18:32Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
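A minimal sketch of that self-verification loop follows, assuming a generic `call_llm` helper; the prompts and field names are illustrative, not the paper's.

```python
# Extract structured fields, then ask the LLM to ground each extraction in a
# verbatim source span; fields without provenance are dropped.
def extract_with_verification(call_llm, note: str, fields: list) -> dict:
    verified = {}
    for field in fields:
        value = call_llm(
            f"Extract {field} from this clinical note, or say 'none':\n{note}")
        if value.strip().lower() == "none":
            continue
        evidence = call_llm(
            f"Quote the exact sentence from the note that states "
            f"{field} = {value}, or say 'not found':\n{note}")
        if "not found" not in evidence.lower():
            verified[field] = {"value": value, "provenance": evidence}
    return verified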
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.