MentaLLaMA: Interpretable Mental Health Analysis on Social Media with
Large Language Models
- URL: http://arxiv.org/abs/2309.13567v3
- Date: Sun, 4 Feb 2024 02:47:49 GMT
- Title: MentaLLaMA: Interpretable Mental Health Analysis on Social Media with
Large Language Models
- Authors: Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, Jimin Huang,
Sophia Ananiadou
- Abstract summary: We build the first multi-task and multi-source interpretable mental health instruction dataset on social media, with 105K data samples.
We use expert-written few-shot prompts and collected labels to prompt ChatGPT and obtain explanations from its responses.
Based on the IMHI dataset and LLaMA2 foundation models, we train MentaLLaMA, the first open-source LLM series for interpretable mental health analysis.
- Score: 28.62967557368565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of web technology, social media texts are becoming a rich source for automatic mental health analysis. As traditional discriminative methods suffer from low interpretability, recent large language models (LLMs) have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions. The results show that ChatGPT can generate explanations approaching human quality for its correct classifications. However, LLMs still achieve unsatisfactory classification performance in zero-shot/few-shot settings. Domain-specific finetuning is an effective solution, but it faces two challenges: 1) a lack of high-quality training data, and 2) the absence of open-source LLMs for interpretable mental health analysis that would lower the finetuning cost. To alleviate these problems, we build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset on social media, with 105K data samples. The raw social media data are collected from 10 existing sources covering 8 mental health analysis tasks. We use expert-written few-shot prompts and collected labels to prompt ChatGPT and obtain explanations from its responses. To ensure the reliability of the explanations, we perform strict automatic and human evaluations of the correctness, consistency, and quality of the generated data. Based on the IMHI dataset and LLaMA2 foundation models, we train MentaLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. We also evaluate MentaLLaMA on the IMHI evaluation benchmark with 10 test sets, examining both the correctness of its predictions and the quality of its explanations. The results show that MentaLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.
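As a rough sketch of the dataset-construction step above, where expert-written few-shot prompts and collected gold labels are used to elicit explanations from ChatGPT, the snippet below shows one plausible shape of that loop; the prompt wording, the example content, and the gpt-3.5-turbo model choice are assumptions, not the authors' released code.

```python
# Illustrative sketch of the IMHI-style explanation-generation step: an
# expert-written few-shot prompt plus a collected gold label are sent to
# ChatGPT, and the response is kept as the explanation. Prompt text and
# examples are assumptions, not the authors' released prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical expert-written (post, label, explanation) demonstrations.
EXPERT_EXAMPLES = [
    ("I haven't slept properly in weeks and nothing feels worth doing.",
     "depression: yes",
     "The post reports persistent insomnia and loss of interest, "
     "both core depressive symptoms."),
]

def build_prompt(post: str, gold_label: str) -> str:
    """Assemble a few-shot prompt asking for an explanation of a known label."""
    shots = "\n\n".join(
        f"Post: {p}\nLabel: {l}\nExplanation: {e}" for p, l, e in EXPERT_EXAMPLES
    )
    return (
        f"{shots}\n\nPost: {post}\nLabel: {gold_label}\n"
        "Explanation (why this label fits the post):"
    )

def generate_explanation(post: str, gold_label: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: the ChatGPT model behind the dataset
        messages=[{"role": "user", "content": build_prompt(post, gold_label)}],
    )
    return response.choices[0].message.content
```

The resulting (post, label, explanation) triples would then pass through the automatic and human checks described above before being used to instruction-tune the LLaMA2 models.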
Related papers
- WellDunn: On the Robustness and Explainability of Language Models and Large Language Models in Identifying Wellness Dimensions [46.60244609728416]
Language Models (LMs) are being proposed for mental health applications where the heightened risk of adverse outcomes means predictive performance may not be a sufficient litmus test of a model's utility in clinical practice.
We introduce an evaluation design that focuses on the veracity and explainability of LMs in identifying Wellness Dimensions (WD).
We reveal four surprising results about LMs/LLMs.
arXiv Detail & Related papers (2024-06-17T19:50:40Z)
- LLM Questionnaire Completion for Automatic Psychiatric Assessment [49.1574468325115]
We employ a Large Language Model (LLM) to convert unstructured psychological interviews into structured questionnaires spanning various psychiatric and personality domains.
The obtained answers are coded as features, which are used to predict standardized psychiatric measures of depression (PHQ-8) and PTSD (PCL-C).
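A minimal sketch of the feature-coding step this entry describes, assuming a Likert-style answer coding and an off-the-shelf regressor; the items, coding map, and toy data are illustrative, not the paper's setup.

```python
# Sketch: code LLM-derived questionnaire answers as numeric features and fit
# a regressor against PHQ-8 totals. Items, coding, and data are toy examples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical frequency coding for Likert-style answers extracted by the LLM.
ANSWER_CODES = {"not at all": 0, "several days": 1,
                "more than half the days": 2, "nearly every day": 3}

def code_answers(answers: list[str]) -> np.ndarray:
    """Map one subject's textual answers to a numeric feature vector."""
    return np.array([ANSWER_CODES[a.lower()] for a in answers], dtype=float)

# Toy training set: per-subject coded answers and made-up PHQ-8 totals.
X = np.array([
    code_answers(["not at all", "several days", "not at all"]),
    code_answers(["nearly every day", "more than half the days", "several days"]),
    code_answers(["several days", "several days", "not at all"]),
])
y = np.array([2.0, 18.0, 6.0])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
query = code_answers(["several days", "not at all", "several days"])
print(model.predict(query.reshape(1, -1)))
```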
arXiv Detail & Related papers (2024-06-09T09:03:11Z)
- Adapting Mental Health Prediction Tasks for Cross-lingual Learning via Meta-Training and In-context Learning with Large Language Model [3.3590922002216193]
We use model-agnostic meta-learning and leverage large language models (LLMs) to address this gap.
We first apply a meta-learning model with self-supervision, which results in improved model initialisation for rapid adaptation and cross-lingual transfer.
In parallel, we use LLMs' in-context learning capabilities to assess their performance accuracy across the Swahili mental health prediction tasks.
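A minimal sketch of the in-context learning setup mentioned here: k labeled examples are packed into a prompt ahead of the query post, so the LLM can be scored on the target-language task without fine-tuning. The example posts and label set are placeholders, not the paper's data.

```python
# Sketch: assemble a k-shot in-context learning prompt for a target-language
# mental health classification task. Examples and labels are placeholders.

def build_icl_prompt(examples: list[tuple[str, str]],
                     query: str, labels: list[str]) -> str:
    """Format k labeled demonstrations followed by the query post."""
    header = ("Classify the mental health status of each post as one of: "
              + ", ".join(labels) + ".\n\n")
    shots = "\n\n".join(f"Post: {text}\nLabel: {label}" for text, label in examples)
    return f"{header}{shots}\n\nPost: {query}\nLabel:"

# Placeholder demonstrations standing in for labeled Swahili posts.
demo = [
    ("Sijisikii vizuri, kila siku ni nzito kwangu.", "depressed"),
    ("Leo nimefurahi sana, mambo yanaenda vizuri.", "not depressed"),
]
print(build_icl_prompt(demo, "Siwezi kulala na nina wasiwasi kila wakati.",
                       ["depressed", "not depressed"]))
```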
arXiv Detail & Related papers (2024-04-13T17:11:35Z)
- Zero-shot Explainable Mental Health Analysis on Social Media by Incorporating Mental Scales [23.94585145560042]
Mental Analysis by Incorporating Mental Scales (MAIMS) is inspired by the psychological assessment practice of using scales to evaluate mental states.
First, the patient completes mental scales, and second, the psychologist interprets the collected information from the mental scales and makes informed decisions.
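As a generic illustration of this two-step workflow (not the MAIMS implementation): step one fills per-item scores for a standard scale, and step two interprets the total. The PHQ-9 severity bands below are the standard published cutoffs, while the example item scores are made up.

```python
# Generic sketch of the two-step scale workflow: step 1 produces per-item
# scores (hardcoded here; in MAIMS an LLM would fill them from the post),
# step 2 interprets the total using standard PHQ-9 severity bands.

PHQ9_BANDS = [  # (upper bound of total, severity), standard published cutoffs
    (4, "minimal"), (9, "mild"), (14, "moderate"),
    (19, "moderately severe"), (27, "severe"),
]

def interpret_phq9(item_scores: list[int]) -> str:
    """Step 2: turn nine per-item scores (0-3 each) into a severity label."""
    total = sum(item_scores)
    for cutoff, severity in PHQ9_BANDS:
        if total <= cutoff:
            return f"PHQ-9 total {total}: {severity} depression"
    raise ValueError("total exceeds the PHQ-9 range")

# Step 1 stand-in: scores a model might extract for the nine items of a post.
print(interpret_phq9([2, 1, 3, 2, 1, 0, 1, 2, 0]))  # -> moderate
```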
arXiv Detail & Related papers (2024-02-09T09:44:06Z)
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- Explainable Depression Symptom Detection in Social Media [2.433983268807517]
We propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms.
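As a loose illustration of sentence-level symptom marker detection, the sketch below swaps in off-the-shelf zero-shot NLI classification for the paper's fine-tuned, symptom-grounded models; the symptom label set is an assumed subset, not the paper's inventory.

```python
# Loose illustration: flag depressive symptom markers in a user's sentences
# with an off-the-shelf zero-shot classifier. This stands in for the paper's
# fine-tuned models; the label set is an assumed subset.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # default: facebook/bart-large-mnli

SYMPTOMS = ["sadness", "insomnia", "loss of interest", "fatigue", "no symptom"]

sentences = [
    "I stopped enjoying the things I used to love.",
    "Picked up groceries and cooked dinner tonight.",
]
for sentence in sentences:
    result = classifier(sentence, candidate_labels=SYMPTOMS)
    print(sentence, "->", result["labels"][0], round(result["scores"][0], 2))
```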
arXiv Detail & Related papers (2023-10-20T17:05:27Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Towards Interpretable Mental Health Analysis with Large Language Models [27.776003210275608]
We evaluate the mental health analysis and emotional reasoning ability of large language models (LLMs) on 11 datasets across 5 tasks.
Based on prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions.
We conduct strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations.
arXiv Detail & Related papers (2023-04-06T19:53:59Z)
- Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN [70.76142503046782]
We propose supplementing bias audits of machine learning (ML) healthcare tools with SLOGAN, an automatic tool for capturing local biases in a clinical prediction task.
SLOGAN adapts an existing tool, LOcal Group biAs detectioN (LOGAN), by contextualizing group bias detection in patient illness severity and past medical history.
On average, SLOGAN identifies larger fairness disparities than LOGAN in over 75% of patient groups while maintaining clustering quality.
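A minimal sketch of the underlying idea, a generic local group bias check rather than the released SLOGAN code: cluster patients on task features, then compare a model's per-group error rates within each cluster instead of only over the whole population.

```python
# Generic sketch of local group bias detection (not the released SLOGAN code):
# cluster patients on task features, then compare per-group error rates inside
# each cluster. All data below are synthetic and for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # toy patient features
group = rng.integers(0, 2, size=200)          # toy binary demographic attribute
errors = rng.random(200) < 0.2 + 0.2 * group  # toy model errors, skewed by group

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for c in range(4):
    in_c = clusters == c
    rates = [errors[in_c & (group == g)].mean() for g in (0, 1)]
    print(f"cluster {c}: error rate group0={rates[0]:.2f} group1={rates[1]:.2f}")
```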
arXiv Detail & Related papers (2022-11-16T08:04:12Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)