Severity Prediction in Mental Health: LLM-based Creation, Analysis,
Evaluation of a Novel Multilingual Dataset
- URL: http://arxiv.org/abs/2409.17397v1
- Date: Wed, 25 Sep 2024 22:14:34 GMT
- Title: Severity Prediction in Mental Health: LLM-based Creation, Analysis,
Evaluation of a Novel Multilingual Dataset
- Authors: Konstantinos Skianis, John Pavlopoulos, A. Seza Doğruöz
- Abstract summary: Large Language Models (LLMs) are increasingly integrated into various medical fields, including mental health support systems.
We present a novel multilingual adaptation of widely-used mental health datasets, translated from English into six languages.
This dataset enables a comprehensive evaluation of LLM performance in detecting mental health conditions and assessing their severity across multiple languages.
- Score: 3.4146360486107987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are increasingly integrated into various medical
fields, including mental health support systems. However, there is a gap in
research regarding the effectiveness of LLMs in non-English mental health
support applications. To address this problem, we present a novel multilingual
adaptation of widely-used mental health datasets, translated from English into
six languages (Greek, Turkish, French, Portuguese, German, and Finnish). This
dataset enables a comprehensive evaluation of LLM performance in detecting
mental health conditions and assessing their severity across multiple
languages. Experimenting with GPT and Llama, we observe considerable
variability in performance across languages, even though the models are
evaluated on the same translated dataset. This inconsistency underscores the
complexities inherent in multilingual mental health support, where
language-specific nuances and the coverage of mental health data can affect
model accuracy. Through comprehensive error analysis, we emphasize the risks of
relying exclusively on LLMs in medical settings (e.g., their potential to
contribute to misdiagnoses). Moreover, our proposed approach offers significant
cost savings for multilingual tasks, presenting a major advantage for
broad-scale implementation.
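The abstract describes the evaluation setup only at a high level: translate English mental health datasets into six languages, prompt GPT and Llama to predict severity, and compare per-language performance. As a minimal sketch (not the authors' code), the snippet below shows one way such a per-language severity-prediction loop could be wired up; the `query_llm` helper, the prompt wording, the `SEVERITY_LABELS` set, and the dataset schema are all assumptions made for illustration.

```python
# Minimal sketch of a per-language severity-prediction evaluation loop.
# All names here (query_llm, SEVERITY_LABELS, the prompt, the dataset schema)
# are illustrative assumptions, not the paper's actual code or label set.
from collections import defaultdict

SEVERITY_LABELS = ["minimal", "mild", "moderate", "severe"]  # assumed label set

PROMPT_TEMPLATE = (
    "Classify the severity of the mental-health concern expressed in the text below.\n"
    "Answer with exactly one word from: {labels}.\n\n"
    "Text ({language}): {text}"
)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to GPT, Llama, or any other chat model."""
    raise NotImplementedError("Wire this to an LLM provider of your choice.")


def predict_severity(text: str, language: str) -> str:
    prompt = PROMPT_TEMPLATE.format(
        labels=", ".join(SEVERITY_LABELS), language=language, text=text
    )
    answer = query_llm(prompt).strip().lower()
    # Fall back to the middle of the scale if the model answers off-label.
    return answer if answer in SEVERITY_LABELS else "moderate"


def per_language_accuracy(dataset):
    """dataset: iterable of dicts with 'text', 'language', and gold 'severity' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for example in dataset:
        lang = example["language"]
        total[lang] += 1
        if predict_severity(example["text"], lang) == example["severity"]:
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

In practice, `query_llm` would be replaced by a call to whichever GPT or Llama endpoint is being evaluated, and the gold severity labels would come from the translated dataset, so that accuracy can be compared across the six target languages.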
Related papers
- From Text to Multimodality: Exploring the Evolution and Impact of Large Language Models in Medical Practice [11.196196955468992]
Large Language Models (LLMs) have rapidly evolved from text-based systems to multimodal platforms.
We examine the current landscape of multimodal LLMs (MLLMs) in healthcare, analyzing their applications across clinical decision support, medical imaging, patient engagement, and research.
arXiv Detail & Related papers (2024-09-14T02:35:29Z) - A Survey on Large Language Models with Multilingualism: Recent Advances and New Frontiers [48.314619377988436]
The rapid development of Large Language Models (LLMs) has brought remarkable multilingual capabilities to natural language processing.
Despite the breakthroughs of LLMs, the investigation into the multilingual scenario remains insufficient.
This survey aims to help the research community address multilingual problems and provide a comprehensive understanding of the core concepts, key techniques, and latest developments in multilingual natural language processing based on LLMs.
arXiv Detail & Related papers (2024-05-17T17:47:39Z) - The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights [108.40766216456413]
We propose a question alignment framework to bridge the gap between large language models' English and non-English performance.
Experiment results show it can boost multilingual performance across diverse reasoning scenarios, model families, and sizes.
We analyze representation spaces, generated responses, and data scales, revealing how question translation training strengthens language alignment within LLMs.
arXiv Detail & Related papers (2024-05-02T14:49:50Z) - Large Language Model for Mental Health: A Systematic Review [2.9429776664692526]
Large language models (LLMs) have attracted significant attention for potential applications in digital health.
This systematic review focuses on their strengths and limitations in early screening, digital interventions, and clinical applications.
arXiv Detail & Related papers (2024-02-19T17:58:41Z) - Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z) - Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries [31.82249599013959]
Large language models (LLMs) are transforming the ways the general public accesses and consumes information.
LLMs demonstrate impressive language understanding and generation proficiencies, but concerns regarding their safety remain paramount.
It remains unclear how these LLMs perform in the context of non-English languages.
arXiv Detail & Related papers (2023-10-19T20:02:40Z) - Evaluating Large Language Models for Radiology Natural Language Processing [68.98847776913381]
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
This study critically evaluates thirty-two LLMs in interpreting radiology reports.
arXiv Detail & Related papers (2023-07-25T17:57:18Z) - Med-UniC: Unifying Cross-Lingual Medical Vision-Language Pre-Training by Diminishing Bias [38.26934474189853]
Unifying Cross-Lingual Medical Vision-Language Pre-Training (Med-UniC) is designed to integrate multimodal medical data from English and Spanish.
Med-UniC achieves superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases.
arXiv Detail & Related papers (2023-05-31T14:28:19Z) - Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding [12.128991867050487]
Large language models (LLMs) have made significant progress in various domains, including healthcare.
In this study, we evaluate state-of-the-art LLMs within the realm of clinical language understanding tasks.
arXiv Detail & Related papers (2023-04-09T16:31:47Z) - Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of Code-Mixed Clinical Texts [56.72488923420374]
Pre-trained language models (LMs) have shown great potential for cross-lingual transfer in low-resource settings.
We show the few-shot cross-lingual transfer property of LMs for named entity recognition (NER) and apply it to a low-resource, real-world challenge: de-identification of code-mixed (Spanish-Catalan) clinical notes in the stroke domain.
arXiv Detail & Related papers (2022-04-10T21:46:52Z) - AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.