Explainable Depression Symptom Detection in Social Media
- URL: http://arxiv.org/abs/2310.13664v2
- Date: Mon, 23 Oct 2023 08:31:50 GMT
- Title: Explainable Depression Symptom Detection in Social Media
- Authors: Eliseo Bao Souto, Anxo Pérez and Javier Parapar
- Abstract summary: We propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms.
- Score: 2.433983268807517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users of social platforms often perceive these sites as supportive spaces to
post about their mental health issues. Those conversations contain important
traces about individuals' health risks. Recently, researchers have exploited
this online information to construct mental health detection models, which aim
to identify users at risk on platforms like Twitter, Reddit or Facebook. Most
of these models are centred on achieving good classification results, ignoring
the explainability and interpretability of the decisions. Recent research has
pointed out the importance of using clinical markers, such as the use of
symptoms, to improve trust in the computational models by health professionals.
In this paper, we propose using transformer-based architectures to detect and
explain the appearance of depressive symptom markers in the users' writings. We
present two approaches: i) training one model to classify and a separate model
to explain the classifier's decisions, and ii) unifying both tasks in a single
model. For the latter approach, we also investigated the performance of recent
conversational LLMs when using in-context learning. Our natural language
explanations enable clinicians to
interpret the models' decisions based on validated symptoms, enhancing trust in
the automated process. We evaluate our approach using recent symptom-based
datasets, employing both offline and expert-in-the-loop metrics to assess the
quality of the explanations generated by our models. The experimental results
show that it is possible to achieve good classification results while
generating interpretable symptom-based explanations.
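The first approach described in the abstract, a classifier plus a separate explainer, can be pictured as a two-stage pipeline. The sketch below is a minimal structural illustration in Python: the keyword-based scorer merely stands in for the transformer models, and the symptom labels and markers are invented examples, not taken from the paper or its datasets.

```python
# Structural sketch of a two-stage detect-then-explain pipeline.
# A keyword scorer stands in for the transformer classifier/explainer;
# the symptom markers below are illustrative, not from the paper.

SYMPTOM_MARKERS = {
    "low_mood": ["sad", "hopeless", "empty"],
    "sleep_issues": ["insomnia", "can't sleep", "awake all night"],
}

def classify(post: str) -> bool:
    """Stage 1: flag a post if any symptom marker appears."""
    text = post.lower()
    return any(m in text
               for markers in SYMPTOM_MARKERS.values()
               for m in markers)

def explain(post: str) -> list[str]:
    """Stage 2: name the symptoms that support the decision."""
    text = post.lower()
    return [symptom for symptom, markers in SYMPTOM_MARKERS.items()
            if any(m in text for m in markers)]

def detect_and_explain(post: str) -> tuple[bool, list[str]]:
    flagged = classify(post)
    return flagged, (explain(post) if flagged else [])
```

In the paper's unified variant (approach ii), both stages would be produced by a single model; here they are deliberately separate functions to mirror approach i).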
Related papers
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a
Computational Approach [63.67533153887132]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies rely heavily on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Process Knowledge-infused Learning for Clinician-friendly Explanations [14.405002816231477]
Language models can assess mental health using social media data.
They do not compare posts against clinicians' diagnostic processes.
It's challenging to explain language model outputs using concepts that the clinician can understand.
arXiv Detail & Related papers (2023-06-16T13:08:17Z) - A Simple and Flexible Modeling for Mental Disorder Detection by Learning
from Clinical Questionnaires [0.2580765958706853]
We propose a novel approach that captures the semantic meanings directly from the text and compares them to symptom-related descriptions.
Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, transferable to other mental disorders, and providing interpretable detection results.
arXiv Detail & Related papers (2023-06-05T15:23:55Z) - Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision [12.306688233127312]
We introduce a human-in-the-loop framework in the model training process.
Our method can automatically discover confounding factors.
It is capable of learning confounding concepts using easily obtained concept exemplars.
arXiv Detail & Related papers (2023-03-02T01:02:18Z) - Navigating the Grey Area: How Expressions of Uncertainty and
Overconfidence Affect Language Models [74.07684768317705]
LMs are highly sensitive to markers of certainty in prompts, with accuracies varying by more than 80%.
We find that expressions of high certainty result in a decrease in accuracy compared to expressions of low certainty; similarly, factive verbs hurt performance, while evidentials benefit performance.
These associations may suggest that LMs' behavior is based on observed language use, rather than truly reflecting uncertainty.
arXiv Detail & Related papers (2023-02-26T23:46:29Z) - Predicting mental health using social media: A roadmap for future
development [0.0]
Mental disorders such as depression and suicidal ideation affect more than 300 million people around the world.
On social media, mental disorder symptoms can be observed, and automated approaches are increasingly capable of detecting them.
This research offers a roadmap for analysis, where mental state detection can be based on machine learning techniques.
arXiv Detail & Related papers (2023-01-25T08:08:29Z) - Semantic Similarity Models for Depression Severity Estimation [53.72188878602294]
This paper presents an efficient semantic pipeline to study depression severity in individuals based on their social media writings.
We use test user sentences for producing semantic rankings over an index of representative training sentences corresponding to depressive symptoms and severity levels.
We evaluate our methods on two Reddit-based benchmarks, achieving a 30% improvement over the state of the art in measuring depression severity.
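The ranking idea in this entry, comparing a user's sentences against an index of sentences describing depressive symptoms, can be illustrated with a simple similarity ranking. The sketch below uses bag-of-words cosine similarity as a stand-in for the semantic model, and the indexed symptom descriptions are invented examples, not drawn from the benchmark datasets.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector for a sentence."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Illustrative index of symptom descriptions (invented, not from the paper).
INDEX = {
    "anhedonia": "little interest or pleasure in doing things",
    "fatigue": "feeling tired or having little energy",
}

def rank(sentence: str) -> list[tuple[str, float]]:
    """Rank indexed symptom descriptions by similarity to a user sentence."""
    q = bow(sentence)
    scores = [(label, cosine(q, bow(desc))) for label, desc in INDEX.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

A real semantic pipeline would replace the bag-of-words vectors with sentence embeddings, but the rank-against-an-index structure is the same.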
arXiv Detail & Related papers (2022-11-14T18:47:26Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Adapting Deep Learning Methods for Mental Health Prediction on Social
Media [10.102073937554488]
Mental health poses a significant challenge for an individual's well-being.
We tackle the challenge of detecting social media users' mental status through deep learning-based models.
In a binary classification task on predicting if a user suffers from one of nine different disorders, a hierarchical attention network outperforms previously set benchmarks for four of the disorders.
arXiv Detail & Related papers (2020-03-17T10:49:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.