Explainable Causal Analysis of Mental Health on Social Media Data
- URL: http://arxiv.org/abs/2210.08430v1
- Date: Sun, 16 Oct 2022 03:34:47 GMT
- Title: Explainable Causal Analysis of Mental Health on Social Media Data
- Authors: Chandni Saxena, Muskan Garg, Gunjan Saxena
- Abstract summary: Multi-class causal categorization of mental health issues on social media faces a major challenge of wrong predictions.
One contributing factor is inconsistency among causal explanations and inappropriate human-annotated inferences in the dataset.
In this work, we identify the reason behind the inconsistency in accuracy of multi-class causal categorization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With recent developments in Social Computing, Natural Language Processing and
Clinical Psychology, the social NLP research community addresses the challenge of
automated analysis of mental illness on social media. A recent extension to the problem
of multi-class classification of mental health issues is to identify the cause behind
the user's intention. However, multi-class causal categorization of mental health issues
on social media suffers from wrong predictions due to overlapping causal explanations.
There are two possible directions for mitigating this problem: (i) resolving
inconsistency among causal explanations and inappropriate human-annotated inferences in
the dataset, and (ii) in-depth analysis of arguments and stances in self-reported text
using discourse analysis. In this work, we hypothesise that if there is inconsistency
among the F1 scores of different classes, there must also be inconsistency among the
corresponding causal explanations. We fine-tune classifiers and obtain explanations for
multi-class causal categorization of mental illness on social media with the LIME and
Integrated Gradients (IG) methods. We test our methods on the CAMS dataset and validate
the explanations against annotated interpretations. A key contribution of this work is
identifying the reason behind the inconsistency in accuracy of multi-class causal
categorization. The effectiveness of our methods is evident from category-wise average
scores of 81.29% and 0.906 obtained using cosine similarity and word mover's distance,
respectively.
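The explanation-extraction step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, illustrative pipeline, not the authors' released code: it assumes a BERT-style classifier (here an untuned `bert-base-uncased` standing in for a model fine-tuned on CAMS), an invented example post, and illustrative CAMS-style category names. It shows how LIME and Integrated Gradients (via Captum) can each produce token-level explanations for the predicted causal category.

```python
# Minimal sketch: token-level explanations for a multi-class causal classifier
# using LIME and Integrated Gradients (Captum). The checkpoint, example post and
# label names below are placeholders, not artefacts released with the paper.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from lime.lime_text import LimeTextExplainer
from captum.attr import LayerIntegratedGradients

CLASS_NAMES = ["no reason", "bias or abuse", "jobs and careers",
               "medication", "relationship", "alienation"]  # illustrative CAMS-style labels
MODEL_NAME = "bert-base-uncased"                            # stand-in for a fine-tuned classifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(CLASS_NAMES))
model.eval()

def predict_proba(texts):
    """Class probabilities for a batch of raw strings (the interface LIME expects)."""
    enc = tokenizer(list(texts), padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

post = "I lost my job last month and I feel like a burden to everyone."  # invented example
pred = int(np.argmax(predict_proba([post])[0]))

# LIME: perturbation-based word importances for the predicted class.
lime_exp = LimeTextExplainer(class_names=CLASS_NAMES).explain_instance(
    post, predict_proba, num_features=8, labels=[pred])
lime_words = [w for w, _ in lime_exp.as_list(label=pred)]

# Integrated Gradients: attribute the predicted logit to the input embeddings.
def forward_logits(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

enc = tokenizer(post, return_tensors="pt", truncation=True, max_length=256)
lig = LayerIntegratedGradients(forward_logits, model.bert.embeddings)
attrs = lig.attribute(inputs=enc["input_ids"],
                      baselines=torch.full_like(enc["input_ids"], tokenizer.pad_token_id),
                      additional_forward_args=(enc["attention_mask"],),
                      target=pred)
scores = attrs.sum(dim=-1).squeeze(0)
ig_tokens = [t for t, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                               scores.tolist()) if s > 0]

print("predicted cause:", CLASS_NAMES[pred])
print("LIME tokens:", lime_words)
print("IG tokens:", ig_tokens)
```

The validation step, comparing extracted explanations against annotated inferences, can likewise be sketched with standard library calls. The two phrases and the GloVe vectors below are placeholders chosen for illustration; the snippet only shows one way cosine similarity and word mover's distance can score agreement between a predicted explanation and a human-annotated cause.

```python
# Minimal sketch: scoring an extracted explanation against a human-annotated
# cause phrase with cosine similarity (bag-of-words vectors) and word mover's
# distance (pre-trained GloVe vectors). Both phrases are invented examples.
import gensim.downloader as api
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

predicted_explanation = "lost job burden feel"                       # e.g. top LIME/IG tokens
annotated_inference = "losing his job made him feel like a burden"   # hypothetical annotation

bow = CountVectorizer().fit_transform([predicted_explanation, annotated_inference])
cos = cosine_similarity(bow[0], bow[1])[0, 0]

glove = api.load("glove-wiki-gigaword-50")  # small vectors, for illustration only
wmd = glove.wmdistance(predicted_explanation.split(), annotated_inference.split())

print(f"cosine similarity: {cos:.3f}   word mover's distance: {wmd:.3f}")
```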
Related papers
- Perplexity Trap: PLM-Based Retrievers Overrate Low Perplexity Documents [64.43980129731587]
We propose a causal-inspired inference-time debiasing method called Causal Diagnosis and Correction (CDC)
CDC first diagnoses the bias effect of the perplexity and then separates the bias effect from the overall relevance score.
Experimental results across three domains demonstrate the superior debiasing effectiveness of CDC.
arXiv Detail & Related papers (2025-03-11T17:59:00Z) - Causal Micro-Narratives [62.47217054314046]
We present a novel approach to classify causal micro-narratives from text.
These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject.
arXiv Detail & Related papers (2024-10-07T17:55:10Z) - CausalGym: Benchmarking causal interpretability methods on linguistic tasks [52.61917615039112]
We use CausalGym to benchmark the ability of interpretability methods to causally affect model behaviour.
We study the Pythia models (14M-6.9B) and assess the causal efficacy of a wide range of interpretability methods.
We find that DAS outperforms the other methods, and so we use it to study the learning trajectory of two difficult linguistic phenomena.
arXiv Detail & Related papers (2024-02-19T21:35:56Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models [28.62967557368565]
We build the first multi-task and multi-source interpretable mental health instruction dataset on social media, with 105K data samples.
We use expert-written few-shot prompts and collected labels to prompt ChatGPT and obtain explanations from its responses.
Based on the IMHI dataset and LLaMA2 foundation models, we train MentaLLaMA, the first open-source LLM series for interpretable mental health analysis.
arXiv Detail & Related papers (2023-09-24T06:46:08Z) - A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z) - Multi-class Categorization of Reasons behind Mental Disturbance in Long Texts [0.0]
We use Longformer to handle the problem of finding causal indicators behind mental illness in self-reported text.
Experiments show that Longformer achieves new state-of-the-art results of 62% F1-score on M-CAMS, a publicly available dataset.
We believe our work facilitates causal analysis of depression and suicide risk on social media data, and shows potential for application on other mental health conditions.
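As a rough illustration of the setup this entry describes, the following is a minimal sketch (not the authors' code) of fine-tuning Longformer for multi-class causal categorization of long self-reported posts; the single-example dataset is a placeholder standing in for an M-CAMS-style corpus.

```python
# Minimal sketch: fine-tuning Longformer for multi-class causal categorization.
# The training data below is a one-example placeholder, not the M-CAMS corpus.
import torch
from transformers import (LongformerTokenizerFast, LongformerForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=6)  # six illustrative causal categories

class CausalDataset(torch.utils.data.Dataset):
    """Wraps raw posts and integer category labels for the HuggingFace Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, max_length=4096, padding="max_length")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = CausalDataset(["example long self-reported post ..."], [2])  # placeholder data
args = TrainingArguments(output_dir="out", num_train_epochs=3, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```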
arXiv Detail & Related papers (2023-04-08T22:44:32Z) - NLP as a Lens for Causal Analysis and Perception Mining to Infer Mental Health on Social Media [10.342474142256842]
We argue that more consequential and explainable research is required for optimal impact on clinical psychology practice and personalized mental healthcare.
Within the scope of Natural Language Processing (NLP), we explore critical areas of inquiry associated with Causal analysis and Perception mining.
We advocate for a more explainable approach toward modeling computational psychology problems through the lens of language.
arXiv Detail & Related papers (2023-01-26T09:26:01Z) - Causal Categorization of Mental Health Posts using Transformers [0.0]
Existing research in mental health analysis revolves around cross-sectional studies that classify users' intent on social media.
For in-depth analysis, we investigate existing classifiers to solve the problem of causal categorization.
We use transformer models and demonstrate the efficacy of pre-trained transfer learning on the CAMS dataset.
arXiv Detail & Related papers (2023-01-06T16:37:48Z) - Aggression and "hate speech" in communication of media users: analysis of control capabilities [50.591267188664666]
The authors studied the possibilities of mutual influence of users in new media.
They found a high level of aggression and hate speech in discussions of an urgent social problem: measures for fighting COVID-19.
Results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z) - CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts [17.853932382843222]
We introduce a new dataset for Causal Analysis of Mental health issues in Social media posts (CAMS)
Our contributions for causal analysis are two-fold: causal interpretation and causal categorization.
We present experimental results of models learned from the CAMS dataset and demonstrate that a classic Logistic Regression model outperforms the next best model (CNN-LSTM) by 4.9% accuracy.
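For context on the baseline comparison mentioned in this entry, here is a minimal sketch of the kind of Logistic Regression baseline typically used for CAMS-style causal categorization: TF-IDF features over the post text feeding a multi-class linear model. The toy posts and label strings are invented; a real run would use the annotated CAMS splits.

```python
# Minimal sketch: TF-IDF + Logistic Regression baseline for causal categorization.
# Training and test posts below are invented toy examples, not CAMS data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["i was fired from work and cannot pay rent",
               "my partner left me last week and i feel alone"]
train_labels = ["jobs_and_careers", "relationship"]  # illustrative category names

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

test_texts = ["they let me go from work today"]
print(clf.predict(test_texts))  # predicted causal category for the new post
```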
arXiv Detail & Related papers (2022-07-11T07:38:18Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)