LLM Questionnaire Completion for Automatic Psychiatric Assessment
- URL: http://arxiv.org/abs/2406.06636v1
- Date: Sun, 9 Jun 2024 09:03:11 GMT
- Title: LLM Questionnaire Completion for Automatic Psychiatric Assessment
- Authors: Gony Rosenman, Lior Wolf, Talma Hendler
- Abstract summary: We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
- Score: 49.1574468325115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains. The LLM is prompted to answer these questionnaires by impersonating the interviewee. The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C), using a Random Forest regressor. Our approach is shown to enhance diagnostic accuracy compared to multiple baselines. It thus establishes a novel framework for interpreting unstructured psychological interviews, bridging the gap between narrative-driven and data-driven approaches for mental health assessment.
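The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the answer vocabulary, feature coding, and toy data are hypothetical, and it assumes PHQ-8-style ordinal answers coded 0-3 before fitting a scikit-learn Random Forest regressor.

```python
# Hypothetical sketch: questionnaire answers (elicited from an LLM
# impersonating the interviewee) are coded as ordinal features, then a
# Random Forest regressor predicts a standardized score such as PHQ-8.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# PHQ-style frequency answers mapped to ordinal codes (0-3).
ANSWER_CODES = {
    "not at all": 0,
    "several days": 1,
    "more than half the days": 2,
    "nearly every day": 3,
}

def encode_answers(answers):
    """Code a list of textual questionnaire answers as a numeric vector."""
    return [ANSWER_CODES[a.lower()] for a in answers]

# Toy training data: one feature vector per interview, with the
# corresponding ground-truth PHQ-8 total as the regression target.
X = np.array([encode_answers(a) for a in [
    ["not at all", "several days", "not at all"],
    ["nearly every day", "more than half the days", "several days"],
    ["several days", "not at all", "not at all"],
]])
y = np.array([2.0, 14.0, 4.0])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict a score for a new interview's coded answers.
new_features = np.array([encode_answers(
    ["more than half the days", "several days", "several days"])])
pred = model.predict(new_features)[0]
```

Because a Random Forest averages training targets, the predicted score stays within the range of the scores seen during training; the paper's actual feature set spans multiple questionnaires across psychiatric and personality domains, not a single instrument as here.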
Related papers
- CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy [67.23830698947637]
We propose a new benchmark, CBT-BENCH, for the systematic evaluation of cognitive behavioral therapy (CBT) assistance.
We include three levels of tasks in CBT-BENCH: I: Basic CBT knowledge acquisition, with the task of multiple-choice questions; II: Cognitive model understanding, with the tasks of cognitive distortion classification, primary core belief classification, and fine-grained core belief classification; III: Therapeutic response generation, with the task of generating responses to patient speech in CBT therapy sessions.
Experimental results indicate that while LLMs perform well in reciting CBT knowledge, they fall short in complex real-world scenarios.
arXiv Detail & Related papers (2024-10-17T04:52:57Z)
- Depression Diagnosis Dialogue Simulation: Self-improving Psychiatrist with Tertiary Memory [35.41386783586689]
This paper introduces the Agent Mental Clinic (AMC), a self-improving conversational agent system designed to enhance depression diagnosis through simulated dialogues between patient and psychiatrist agents.
We design a psychiatrist agent comprising a tertiary memory structure, a dialogue control module, and a memory sampling module, fully leveraging the diagnostic skills accumulated by the agent to achieve high accuracy in conversational depression-risk and suicide-risk diagnosis.
arXiv Detail & Related papers (2024-09-20T14:25:08Z)
- HiQuE: Hierarchical Question Embedding Network for Multimodal Depression Detection [11.984035389013426]
HiQuE is a novel depression detection framework that leverages the hierarchical relationship between primary and follow-up questions in clinical interviews.
We conduct extensive experiments on DAIC-WOZ, a widely used clinical interview dataset, where our model outperforms other state-of-the-art multimodal depression detection models.
arXiv Detail & Related papers (2024-08-07T09:23:01Z) - Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating the psychological dimensions of LLMs, covering psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z) - Aligning Large Language Models for Enhancing Psychiatric Interviews through Symptom Delineation and Summarization [13.77580842967173]
This research contributes to the nascent field of applying Large Language Models to psychiatric interviews.
We analyze counseling data from North Korean defectors with traumatic events and mental health issues.
Our experimental results show that appropriately prompted LLMs can achieve high performance on both the symptom delineation task and the summarization task.
arXiv Detail & Related papers (2024-03-26T06:50:04Z) - Enhancing Depression-Diagnosis-Oriented Chat with Psychological State Tracking [27.96718892323191]
Depression-diagnosis-oriented chat aims to guide patients in self-expression to collect key symptoms for depression detection.
Recent work focuses on combining task-oriented dialogue and chitchat to simulate the interview-based depression diagnosis.
However, no explicit framework has been explored to guide the dialogue, which results in unproductive exchanges.
arXiv Detail & Related papers (2024-03-12T07:17:01Z) - COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies with Language Modeling [14.04866656172336]
We present a novel framework to infer the therapeutic working alliance from the natural language used in psychotherapy sessions.
Our approach utilizes advanced large language models (LLMs) to analyze transcripts of psychotherapy sessions and compare them with distributed representations of statements in the working alliance inventory.
arXiv Detail & Related papers (2024-02-22T16:56:44Z) - PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective to four Transformer-based architectures: Contextual Document Vectors, Bi-encoders, Poly-encoders, and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.