DASentimental: Detecting depression, anxiety and stress in texts via
emotional recall, cognitive networks and machine learning
- URL: http://arxiv.org/abs/2110.13710v1
- Date: Tue, 26 Oct 2021 13:58:46 GMT
- Title: DASentimental: Detecting depression, anxiety and stress in texts via
emotional recall, cognitive networks and machine learning
- Authors: Asra Fatima, Li Ying, Thomas Hills and Massimo Stella
- Abstract summary: This project proposes a semi-supervised machine learning model (DASentimental) to extract depression, anxiety and stress from written text.
We train the model to spot how sequences of emotion words recalled by $N=200$ individuals correlate with their responses to the Depression Anxiety Stress Scale (DASS-21).
We find that semantic distances between recalled emotions and the dyad "sad-happy" are crucial features for estimating depression levels but are less important for anxiety and stress.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most current affect scales and sentiment analysis on written text focus on
quantifying valence (sentiment), the primary dimension of emotion.
However, emotions are broader and more complex than valence. Distinguishing
negative emotions of similar valence could be important in contexts such as
mental health. This project proposes a semi-supervised machine learning model
(DASentimental) to extract depression, anxiety and stress from written text.
First, we trained the model to spot how sequences of emotion words recalled by
$N=200$ individuals correlated with their responses to the Depression Anxiety
Stress Scale (DASS-21). Within the framework of cognitive network science, we
model every list of recalled emotions as a walk over a networked mental
representation of semantic memory, with emotions connected according to free
associations in people's memory. Among several tested machine learning
approaches, we find that a multilayer perceptron neural network trained on word
sequences and semantic network distances can achieve state-of-the-art,
cross-validated predictions for depression ($R = 0.7$), anxiety ($R = 0.44$)
and stress ($R = 0.52$). Though limited by sample size, this first-of-its-kind
approach enables quantitative explorations of key semantic dimensions behind
DAS levels. We find that semantic distances between recalled emotions and the
dyad "sad-happy" are crucial features for estimating depression levels but are
less important for anxiety and stress. We also find that the semantic distance of
recalls from "fear" can boost the prediction of anxiety but becomes
redundant when the "sad-happy" dyad is considered. Adopting DASentimental as a
semi-supervised learning tool to estimate DAS in text, we apply it to a dataset
of 142 suicide notes. We conclude by discussing key directions for future
research enabled by artificial intelligence detecting stress, anxiety and
depression.
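The pipeline the abstract describes, treating a recall list as a walk over a free-association network and extracting semantic distances to anchor emotions such as the "sad-happy" dyad, can be sketched as follows. The edges and recall list here are illustrative toy data, not the actual free-association norms or participant responses used by DASentimental.

```python
from collections import deque

# Toy free-association network among emotion words (undirected edges).
# These edges and the recall list below are illustrative placeholders,
# not the actual association norms or participant data used by DASentimental.
EDGES = [
    ("happy", "joy"), ("joy", "calm"), ("calm", "relaxed"),
    ("sad", "lonely"), ("lonely", "afraid"), ("afraid", "fear"),
    ("sad", "tired"), ("tired", "calm"), ("fear", "anxious"),
]

def build_graph(edges):
    """Build an undirected adjacency map from edge pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def network_distance(graph, source, target):
    """Shortest-path length between two emotion words (breadth-first search)."""
    if source == target:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph[node]:
            if neighbor == target:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return float("inf")  # words in disconnected components

def dyad_features(graph, recalled, dyad=("sad", "happy")):
    """Mean network distance of a recall sequence to each pole of an anchor dyad."""
    return {
        pole: sum(network_distance(graph, word, pole) for word in recalled) / len(recalled)
        for pole in dyad
    }

graph = build_graph(EDGES)
recall = ["lonely", "tired", "afraid"]  # one participant's recalled emotions
print(dyad_features(graph, recall))
```

In the paper, such distance features, together with the recalled word sequences themselves, feed a multilayer perceptron regressor that predicts DASS-21 scores; any standard MLP implementation could play that role in this sketch.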
Related papers
- Measuring Non-Typical Emotions for Mental Health: A Survey of Computational Approaches [57.486040830365646]
Stress and depression impact engagement in daily tasks, highlighting the need to understand their interplay.
This survey is the first to simultaneously explore computational methods for analyzing stress, depression, and engagement.
arXiv Detail & Related papers (2024-03-09T11:16:09Z)
- Emotion Granularity from Text: An Aggregate-Level Indicator of Mental Health [25.166884750592175]
In psychology, variation in the ability of individuals to differentiate between emotion concepts is called emotion granularity.
High emotion granularity has been linked with better mental and physical health.
Low emotion granularity has been linked with maladaptive emotion regulation strategies and poor health outcomes.
arXiv Detail & Related papers (2024-03-04T18:12:10Z)
- DepressionEmo: A novel dataset for multilabel classification of depression emotions [6.26397257917403]
DepressionEmo is a dataset designed to detect 8 emotions associated with depression, comprising 6,037 examples of long Reddit user posts.
This dataset was created through a majority vote over inputs by zero-shot classifications from pre-trained models.
We provide several text classification methods grouped into two categories: machine learning methods such as SVM, XGBoost, and LightGBM; and deep learning methods such as BERT, GAN-BERT, and BART.
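The label-aggregation step described above, a majority vote over zero-shot predictions from several pre-trained models, can be sketched as follows. The model names and emotion labels here are hypothetical placeholders, not the actual models or annotation scheme behind DepressionEmo.

```python
from collections import Counter

# Hypothetical zero-shot multilabel predictions from three pre-trained models
# for a single post; model names and labels are placeholders, not the
# actual models or annotation scheme behind DepressionEmo.
model_labels = {
    "model_a": ["sadness", "hopelessness"],
    "model_b": ["sadness", "anger"],
    "model_c": ["sadness", "hopelessness", "anger"],
}

def majority_vote(per_model_labels):
    """Keep each label predicted by a strict majority of the models."""
    threshold = len(per_model_labels) // 2 + 1
    counts = Counter(
        label for labels in per_model_labels.values() for label in labels
    )
    return sorted(label for label, count in counts.items() if count >= threshold)

print(majority_vote(model_labels))  # → ['anger', 'hopelessness', 'sadness']
```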
arXiv Detail & Related papers (2024-01-09T16:25:31Z)
- Seeking Subjectivity in Visual Emotion Distribution Learning [93.96205258496697]
Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict visual emotion distribution in a unified network, neglecting the inherent subjectivity in its crowd voting process.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
arXiv Detail & Related papers (2022-07-25T02:20:03Z)
- Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning [70.30713251031052]
We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
arXiv Detail & Related papers (2022-06-15T01:25:32Z)
- Interpretability of Fine-grained Classification of Sadness and Depression [0.0]
Depression is a longer-term mental illness that impairs social, occupational, and other vital areas of functioning.
Most open-sourced data on the web treat sadness as part of depression, even though the two differ greatly in severity.
In this paper, we aim to highlight the difference between the two and show how interpretable our models are in distinctly labeling sadness and depression.
arXiv Detail & Related papers (2022-03-20T02:34:51Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Basic and Depression Specific Emotion Identification in Tweets: Multi-label Classification Experiments [1.7699344561127386]
We present empirical analysis on basic and depression specific multi-emotion mining in Tweets.
We choose our basic emotions from a hybrid emotion model consisting of the common emotions from four highly regarded psychological models of emotions.
We augment that emotion model with new emotion categories because of their importance in the analysis of depression.
arXiv Detail & Related papers (2021-05-26T07:13:50Z)
- Emo-CNN for Perceiving Stress from Audio Signals: A Brain Chemistry Approach [2.4087148947930634]
We propose an approach that models human stress from audio signals.
Emo-CNN consistently and significantly outperforms popular existing methods.
The approach draws on Lövheim's cube, which explains the relationship between monoamine neurotransmitters and the positions of emotions in 3D space.
arXiv Detail & Related papers (2020-01-08T01:01:48Z)