Basic and Depression Specific Emotion Identification in Tweets:
Multi-label Classification Experiments
- URL: http://arxiv.org/abs/2105.12364v1
- Date: Wed, 26 May 2021 07:13:50 GMT
- Title: Basic and Depression Specific Emotion Identification in Tweets:
Multi-label Classification Experiments
- Authors: Nawshad Farruque, Chenyang Huang, Osmar Zaiane, Randy Goebel
- Abstract summary: We present an empirical analysis of basic and depression-specific multi-emotion mining in Tweets.
We choose our basic emotions from a hybrid emotion model consisting of the common emotions from four highly regarded psychological models of emotion.
We augment that emotion model with new emotion categories because of their importance in the analysis of depression.
- Score: 1.7699344561127386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present an empirical analysis of basic and depression-specific multi-emotion mining in Tweets with the help of state-of-the-art multi-label classifiers. We choose our basic emotions from a hybrid emotion model consisting of the common emotions from four highly regarded psychological models of emotion. Moreover, we augment that emotion model with new emotion categories because of their importance in the analysis of depression. Most of those additional emotions have not been used in previous emotion mining research. Our experimental analyses show that a cost-sensitive RankSVM algorithm and a Deep Learning model are both robust, as measured by both Macro and Micro F-measures. This suggests that these algorithms are superior in addressing the widely known data imbalance problem in multi-label learning. Moreover, our Deep Learning model performs best overall, suggesting an edge in modeling the deep semantic features of our extended emotion categories.
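
To make the evaluation setup concrete, here is a minimal sketch of multi-label emotion classification scored with Macro and Micro F-measures. RankSVM has no scikit-learn implementation, so a class-weighted one-vs-rest linear SVM stands in as an illustrative cost-sensitive baseline; the toy Tweets and emotion labels are invented for illustration and are not the paper's data or pipeline.

```python
# Minimal sketch: multi-label emotion mining scored with Macro/Micro F1.
# The tiny corpus and label set are invented; a class-weighted one-vs-rest
# linear SVM stands in for the paper's cost-sensitive RankSVM, which
# scikit-learn does not provide.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

texts = [
    "I feel so alone tonight",
    "What a wonderful surprise, thank you!",
    "I can't stop worrying about tomorrow",
    "So angry at myself, I always mess up",
]
labels = [
    {"sadness", "loneliness"},
    {"joy", "surprise"},
    {"fear"},
    {"anger", "self-blame"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)               # one binary column per emotion
X = TfidfVectorizer().fit_transform(texts)

# class_weight="balanced" reweights each per-label binary problem by inverse
# label frequency -- a simple cost-sensitive answer to label imbalance.
clf = OneVsRestClassifier(LinearSVC(class_weight="balanced")).fit(X, Y)

pred = clf.predict(X)                       # toy evaluation on training data
print("Macro F1:", f1_score(Y, pred, average="macro"))  # mean of per-label F1
print("Micro F1:", f1_score(Y, pred, average="micro"))  # pooled over all labels
```

Macro F-measure averages per-label F1 scores, so rare emotions count as much as frequent ones; Micro F-measure pools all label decisions, so frequent emotions dominate. Reporting both is what exposes how a model copes with imbalance.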
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- TONE: A 3-Tiered ONtology for Emotion analysis [9.227164881235947]
Emotions play an important part in many fields, including psychology, medicine, mental health, and computer science.
Emotions can be classified using two kinds of methods; the supervised method's efficiency is strongly dependent on the size and domain of the data collected.
We create an emotion-based ontology that organizes an emotional hierarchy based on Dr. Gerrod Parrott's groups of emotions.
arXiv Detail & Related papers (2024-01-11T04:23:08Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Multi-view Multi-label Fine-grained Emotion Decoding from Human Brain Activity [9.446422699647625]
Decoding emotional states from human brain activity plays an important role in brain-computer interfaces.
Existing emotion decoding methods still have two main limitations.
We propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding.
arXiv Detail & Related papers (2022-10-26T05:56:54Z)
- Uncovering the Limits of Text-based Emotion Detection [0.0]
We consider the two largest corpora for emotion classification: GoEmotions, with 58k messages labelled by readers, and Vent, with 33M writer-labelled messages.
We design a benchmark and evaluate several feature spaces and learning algorithms, including two simple yet novel models on top of BERT (a generic sketch of one such BERT-based multi-label head appears after this list).
arXiv Detail & Related papers (2021-09-04T16:40:06Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Emo-CNN for Perceiving Stress from Audio Signals: A Brain Chemistry Approach [2.4087148947930634]
We propose an approach that models human stress from audio signals.
Emo-CNN consistently and significantly outperforms popular existing methods.
Lövheim's cube aims to explain the relationship between the monoamine neurotransmitters (serotonin, dopamine, and noradrenaline) and the positions of emotions in 3D space.
arXiv Detail & Related papers (2020-01-08T01:01:48Z)
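
For context on the Emo-CNN entry above: the Lövheim cube places eight basic emotions at the corners of a 3D space whose axes are the levels of serotonin, dopamine, and noradrenaline. A minimal sketch as a lookup table follows; the corner assignments reflect Lövheim's published model as best understood here and should be treated as illustrative.

```python
# Sketch of the Lövheim cube: each corner of the (serotonin, dopamine,
# noradrenaline) cube is one basic emotion. Levels are binary: 0 = low, 1 = high.
# Corner assignments are quoted from memory of Lövheim's 2012 model; verify
# against the original paper before relying on them.
LOVHEIM_CUBE = {
    (0, 0, 0): "shame/humiliation",
    (0, 0, 1): "distress/anguish",
    (0, 1, 0): "fear/terror",
    (0, 1, 1): "anger/rage",
    (1, 0, 0): "contempt/disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "enjoyment/joy",
    (1, 1, 1): "interest/excitement",
}

def emotion_at(serotonin: int, dopamine: int, noradrenaline: int) -> str:
    """Return the emotion at a corner of the cube (each level 0 or 1)."""
    return LOVHEIM_CUBE[(serotonin, dopamine, noradrenaline)]

print(emotion_at(0, 1, 1))  # low serotonin, high dopamine/noradrenaline -> "anger/rage"
```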
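
Similarly, for the "Uncovering the Limits of Text-based Emotion Detection" entry: the sketch below shows one generic way to put a simple multi-label model on top of BERT. The 28-label setup loosely mirrors the GoEmotions taxonomy; the architecture is an assumption for illustration, not the paper's actual models.

```python
# Generic multi-label head on top of BERT with Hugging Face transformers.
# num_labels=28 loosely mirrors GoEmotions (27 emotions + neutral); nothing
# here reproduces the benchmarked paper's actual models.
import torch
from transformers import AutoModel, AutoTokenizer

class BertMultiLabel(torch.nn.Module):
    def __init__(self, num_labels: int = 28, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.head(cls)               # raw logits, one per emotion

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertMultiLabel()
batch = tokenizer(["I am thrilled and a bit nervous"],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.sigmoid(logits)  # multi-label: independent sigmoid per emotion
print(probs.shape)             # torch.Size([1, 28]); train with BCEWithLogitsLoss
```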
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.