Large Language Model for Mental Health: A Systematic Review
- URL: http://arxiv.org/abs/2403.15401v3
- Date: Mon, 12 Aug 2024 21:46:16 GMT
- Title: Large Language Model for Mental Health: A Systematic Review
- Authors: Zhijun Guo, Alvina Lai, Johan Hilge Thygesen, Joseph Farrington, Thomas Keen, Kezhi Li
- Abstract summary: Large language models (LLMs) have attracted significant attention for potential applications in digital health.
This systematic review focuses on their strengths and limitations in early screening, digital interventions, and clinical applications.
- Score: 2.9429776664692526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have attracted significant attention for potential applications in digital health, while their application in mental health is subject to ongoing debate. This systematic review aims to evaluate the usage of LLMs in mental health, focusing on their strengths and limitations in early screening, digital interventions, and clinical applications. Adhering to PRISMA guidelines, we searched PubMed, IEEE Xplore, Scopus, JMIR, and ACM using the keywords 'mental health OR mental illness OR mental disorder OR psychiatry' AND 'large language models'. We included articles published between January 1, 2017, and April 30, 2024, excluding non-English articles. Thirty articles were evaluated, covering research on mental health conditions and suicidal ideation detection through text (n=15), usage of LLMs for mental health conversational agents (CAs) (n=7), and other applications and evaluations of LLMs in mental health (n=18). LLMs exhibit substantial effectiveness in detecting mental health issues and providing accessible, de-stigmatized eHealth services. However, the risks currently associated with clinical use may outweigh the benefits. The study identifies several significant issues: the lack of multilingual datasets annotated by experts, concerns about the accuracy and reliability of generated content, challenges in interpretability due to the 'black box' nature of LLMs, and persistent ethical dilemmas. These include the lack of a clear ethical framework, concerns about data privacy, and the potential for over-reliance on LLMs by both therapists and patients, which could compromise traditional medical practice. Despite these issues, the rapid development of LLMs underscores their potential as new clinical aids, emphasizing the need for continued research and development in this area.
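The search strategy above is concrete enough to script. Below is a minimal sketch of the PubMed leg of that search via NCBI's E-utilities; the query string and date window come from the abstract, while the phrase quoting, result cap, and JSON handling are illustrative assumptions.

```python
# Sketch: reproducing the PubMed leg of the review's search strategy via
# NCBI E-utilities. The query and date window come from the abstract;
# the retmax cap and JSON handling are illustrative.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '("mental health" OR "mental illness" OR "mental disorder" OR psychiatry) '
    'AND "large language models"'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",      # filter on publication date
    "mindate": "2017/01/01",
    "maxdate": "2024/04/30",
    "retmax": 200,           # cap on returned PMIDs
    "retmode": "json",
}

resp = requests.get(ESEARCH_URL, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print(f"{result['count']} records matched")
print("First PMIDs:", result["idlist"][:10])
```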
Related papers
- Leveraging LLMs for Translating and Classifying Mental Health Data [3.0382033111760585]
This study focuses on detecting depression severity in Greek from user-generated posts automatically translated from English.
Our results show that GPT-3.5-turbo is not very successful at identifying depression severity in English, and its performance in Greek varies as well.
arXiv Detail & Related papers (2024-10-16T19:30:11Z)
- CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios [50.032101237019205]
CliMedBench is a comprehensive benchmark with 14 expert-guided core clinical scenarios.
The reliability of this benchmark has been confirmed in several ways.
arXiv Detail & Related papers (2024-10-04T15:15:36Z)
- Severity Prediction in Mental Health: LLM-based Creation, Analysis, Evaluation of a Novel Multilingual Dataset [3.4146360486107987]
Large Language Models (LLMs) are increasingly integrated into various medical fields, including mental health support systems.
We present a novel multilingual adaptation of widely-used mental health datasets, translated from English into six languages.
This dataset enables a comprehensive evaluation of LLM performance in detecting mental health conditions and assessing their severity across multiple languages.
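As a rough illustration of how such a translated dataset might be produced, the sketch below machine-translates English posts with publicly available MarianMT checkpoints. The checkpoints, the two example languages, and the sample posts are assumptions; the paper's actual translation pipeline and six target languages are not specified here.

```python
# Sketch: machine-translating English posts to build a multilingual
# evaluation set. The MarianMT checkpoints and two target languages are
# examples only; the paper's pipeline and languages may differ.
from transformers import pipeline

# One public Helsinki-NLP checkpoint per illustrative target language.
TARGETS = {
    "de": "Helsinki-NLP/opus-mt-en-de",
    "fr": "Helsinki-NLP/opus-mt-en-fr",
}

english_posts = [
    "I haven't slept properly in weeks and nothing feels worth doing.",
    "Lately I feel fine, just a bit more tired than usual.",
]

multilingual = {"en": english_posts}
for lang, checkpoint in TARGETS.items():
    translator = pipeline("translation", model=checkpoint)
    multilingual[lang] = [t["translation_text"] for t in translator(english_posts)]

for lang, posts in multilingual.items():
    print(lang, "->", posts[0])
```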
arXiv Detail & Related papers (2024-09-25T22:14:34Z)
- LLM Questionnaire Completion for Automatic Psychiatric Assessment [49.1574468325115]
We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
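A minimal sketch of that pipeline, assuming the OpenAI chat completions API: the LLM rates PHQ-8-style items from a transcript, and the ratings become features for a regressor. The prompt wording, 0-3 coding, and ridge model are illustrative assumptions, not the paper's actual design.

```python
# Sketch: an LLM answers PHQ-8-style items from an interview transcript;
# the answers are coded as numeric features for a simple regressor.
# Prompts, 0-3 coding, and the ridge model are assumptions.
from openai import OpenAI
import numpy as np
from sklearn.linear_model import Ridge

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PHQ8_ITEMS = [
    "Little interest or pleasure in doing things",
    "Feeling down, depressed, or hopeless",
    # ... remaining PHQ-8 items ...
]

def item_score(transcript: str, item: str) -> int:
    """Ask the LLM to rate one item 0-3 based only on the transcript."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Interview transcript:\n{transcript}\n\n"
                f"Based only on the transcript, rate the item '{item}' "
                f"from 0 (not at all) to 3 (nearly every day). "
                f"Reply with a single digit."
            ),
        }],
    )
    return int(resp.choices[0].message.content.strip()[0])

def features(transcript: str) -> list[int]:
    return [item_score(transcript, item) for item in PHQ8_ITEMS]

# Given transcripts and clinician-rated PHQ-8 totals, fit a predictor:
# X = np.array([features(t) for t in transcripts])
# model = Ridge().fit(X, np.array(phq8_totals))
```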
arXiv Detail & Related papers (2024-06-09T09:03:11Z)
- Large Language Models in Mental Health Care: a Scoping Review [28.635427491110484]
The integration of large language models (LLMs) in mental health care is an emerging field.
There is a need to systematically review the application outcomes and delineate the advantages and limitations in clinical settings.
This review aims to provide a comprehensive overview of the use of LLMs in mental health care, assessing their efficacy, challenges, and potential for future applications.
arXiv Detail & Related papers (2024-01-01T17:35:52Z)
- Challenges of Large Language Models for Mental Health Counseling [4.604003661048267]
A global mental health crisis is looming, driven by a rapid increase in mental disorders, limited resources, and the social stigma around seeking treatment.
The application of large language models (LLMs) in the mental health domain raises concerns regarding the accuracy, effectiveness, and reliability of the information provided.
This paper investigates the major challenges associated with the development of LLMs for psychological counseling, including model hallucination, interpretability, bias, privacy, and clinical effectiveness.
arXiv Detail & Related papers (2023-11-23T08:56:41Z)
- Rethinking Large Language Models in Mental Health Applications [42.21805311812548]
Large Language Models (LLMs) have become valuable assets in mental health.
This paper offers a perspective on using LLMs in mental health applications.
arXiv Detail & Related papers (2023-11-19T08:40:01Z)
- A Survey of Large Language Models in Medicine: Progress, Application, and Challenge [85.09998659355038]
Large language models (LLMs) have received substantial attention due to their capabilities for understanding and generating human language.
This review aims to provide a detailed overview of the development and deployment of LLMs in medicine.
arXiv Detail & Related papers (2023-11-09T02:55:58Z)
- Benefits and Harms of Large Language Models in Digital Mental Health [40.02859683420844]
Large language models (LLMs) show promise in leading digital mental health to uncharted territory.
This article presents contemporary perspectives on the opportunities and risks posed by LLMs in the design, development, and implementation of digital mental health tools.
arXiv Detail & Related papers (2023-11-07T14:11:10Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
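A generate-critique-refine loop in that spirit might look like the sketch below, again assuming the OpenAI chat completions API; the prompts and stopping rule are assumptions rather than the paper's exact protocol.

```python
# Sketch: a generate-critique-refine loop in the spirit of self-reflection
# for reducing hallucination in medical QA. Prompts and the stopping rule
# are assumptions, not the paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def self_reflective_answer(question: str, rounds: int = 2) -> str:
    # Draft an initial answer, then repeatedly critique and revise it.
    answer = ask(f"Answer this medical question concisely:\n{question}")
    for _ in range(rounds):
        critique = ask(
            "Review the answer below for factual errors or unsupported "
            f"claims.\nQuestion: {question}\nAnswer: {answer}\n"
            "List any problems, or reply 'OK' if none."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model found nothing to fix
        answer = ask(
            "Rewrite the answer to fix these problems.\n"
            f"Question: {question}\nAnswer: {answer}\nProblems: {critique}"
        )
    return answer
```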
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Redefining Digital Health Interfaces with Large Language Models [69.02059202720073]
Large Language Models (LLMs) have emerged as general-purpose models with the ability to process complex information.
We show how LLMs can provide a novel interface between clinicians and digital technologies.
We develop a new prognostic tool using automated machine learning.
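As an illustration of the automated-machine-learning step, the sketch below fits a prognostic classifier with FLAML on synthetic data. FLAML and the synthetic features are stand-ins, since the paper's actual framework and data are not described here.

```python
# Sketch: building a prognostic classifier with automated machine learning.
# FLAML is a stand-in framework and the data is synthetic; the paper's
# actual AutoML setup is not specified here.
from flaml import AutoML
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic tabular "patient" features with binary outcomes.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(
    X_train, y_train,
    task="classification",
    time_budget=60,      # seconds to spend searching models/hyperparameters
    metric="roc_auc",
)

proba = automl.predict_proba(X_test)[:, 1]
print("Best model:", automl.best_estimator)
print("Held-out AUC:", roc_auc_score(y_test, proba))
```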
arXiv Detail & Related papers (2023-10-05T14:18:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.