Understanding the Role of Affect Dimensions in Detecting Emotions from
Tweets: A Multi-task Approach
- URL: http://arxiv.org/abs/2105.03983v1
- Date: Sun, 9 May 2021 18:07:04 GMT
- Authors: Rajdeep Mukherjee, Atharva Naik, Sriyash Poddar, Soham Dasgupta, Niloy
Ganguly
- Score: 14.725717500450623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose VADEC, a multi-task framework that exploits the correlation
between the categorical and dimensional models of emotion representation for
better subjectivity analysis. Focusing primarily on the effective detection of
emotions from tweets, we jointly train multi-label emotion classification and
multi-dimensional emotion regression, thereby utilizing the inter-relatedness
between the tasks. Co-training especially helps the classification task: we
outperform the strongest baselines by 3.4%, 11%, and 3.9% in Jaccard Accuracy,
Macro-F1, and Micro-F1 scores
respectively on the AIT dataset. We also achieve state-of-the-art results with
11.3% gains averaged over six different metrics on the SenWave dataset. For the
regression task, VADEC, when trained with SenWave, achieves 7.6% and 16.5%
gains in Pearson Correlation scores over the current state-of-the-art on the
EMOBANK dataset for the Valence (V) and Dominance (D) affect dimensions
respectively. We conclude our work with a case study on COVID-19 tweets posted
by Indians that further helps in establishing the efficacy of our proposed
solution.
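The abstract describes two jointly trained objectives: multi-label emotion classification and multi-dimensional (V/A/D) emotion regression, with Jaccard accuracy among the evaluation metrics. The following is a minimal, self-contained sketch of those pieces in plain Python; the toy inputs and the mixing weight `alpha` are illustrative assumptions, not values or code from the paper, and the shared-encoder architecture itself is not reproduced.

```python
import math

def bce_loss(probs, labels):
    """Binary cross-entropy averaged over emotion categories
    (the usual multi-label classification loss)."""
    eps = 1e-12
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(labels)

def mse_loss(preds, targets):
    """Mean squared error over affect dimensions (e.g. V, A, D)."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def joint_loss(probs, labels, vad_pred, vad_true, alpha=0.5):
    """Weighted sum of the two task losses; alpha is a hypothetical
    mixing weight, not taken from the paper."""
    return alpha * bce_loss(probs, labels) + (1 - alpha) * mse_loss(vad_pred, vad_true)

def jaccard_accuracy(pred_sets, gold_sets):
    """Multi-label Jaccard accuracy: mean |pred ∩ gold| / |pred ∪ gold|
    over examples, where each example's labels are a set of emotions."""
    scores = []
    for p, g in zip(pred_sets, gold_sets):
        union = p | g
        scores.append(len(p & g) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Toy example: 4 emotion categories and one tweet's V/A/D scores on [0, 1].
probs, labels = [0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]
vad_pred, vad_true = [0.80, 0.55, 0.40], [0.75, 0.60, 0.35]
print(joint_loss(probs, labels, vad_pred, vad_true))
print(jaccard_accuracy([{0, 2}], [{0, 1, 2}]))
```

In a real co-training setup the two heads would share an encoder and the gradients of both losses would update it, which is the mechanism by which the regression signal can improve the classifier.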
Related papers
- Attention Isn't All You Need for Emotion Recognition:Domain Features Outperform Transformers on the EAV Dataset [0.2538209532048867]
We implement three model categories: baseline transformers (M1), novel factorized attention mechanisms (M2), and improved CNN baselines (M3). Experiments show that sophisticated attention mechanisms consistently underperform on small datasets.
arXiv Detail & Related papers (2026-01-07T18:22:01Z)
- Hierarchical Adaptive Expert for Multimodal Sentiment Analysis [5.755715236558973]
Multimodal sentiment analysis has emerged as a critical tool for understanding human emotions across diverse communication channels.
We propose the Hierarchical Adaptive Expert for Multimodal Sentiment Analysis (HAEMSA), a novel framework that combines evolutionary optimization, cross-modal knowledge transfer, and multi-task learning.
Extensive experiments demonstrate HAEMSA's superior performance across multiple benchmark datasets.
arXiv Detail & Related papers (2025-03-25T09:52:08Z)
- ICONS: Influence Consensus for Vision-Language Data Selection [39.454024810266176]
Training vision-language models via instruction tuning often relies on large mixtures of data spanning diverse tasks and domains. Existing methods typically rely on task-agnostic heuristics to estimate data importance or focus on optimizing single tasks in isolation. We introduce ICONS, a gradient-based Influence CONsensus approach for vision-language data Selection.
arXiv Detail & Related papers (2024-12-31T21:33:38Z) - An Exploration of Self-Supervised Mutual Information Alignment for Multi-Task Settings [0.0]
Self-Supervised Alignment with Mutual Information (SAMI) uses conditional mutual information to encourage the connection between behavioral preferences and model responses.
We conduct two experiments exploring SAMI in multi-task settings.
One iteration of SAMI has a 57% win rate against DPO, with significant variation in performance between task categories.
arXiv Detail & Related papers (2024-10-02T16:15:04Z)
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- The 6th Affective Behavior Analysis in-the-wild (ABAW) Competition [53.718777420180395]
This paper describes the 6th Affective Behavior Analysis in-the-wild (ABAW) Competition.
The 6th ABAW Competition addresses contemporary challenges in understanding human emotions and behaviors.
arXiv Detail & Related papers (2024-02-29T16:49:38Z)
- Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations [15.705757672984662]
Multimodal Emotion Recognition in Conversations (MERC) is a significant development direction for machine intelligence.
MERC data naturally exhibit an imbalanced distribution of emotion categories, yet researchers often ignore the negative impact of imbalanced data on emotion recognition.
We propose the Class Boundary Enhanced Representation Learning (CBERL) model to address the imbalanced distribution of emotion categories in raw data.
We have conducted extensive experiments on the IEMOCAP and MELD benchmark datasets, and the results show that CBERL achieves consistent improvements in emotion recognition performance.
arXiv Detail & Related papers (2023-12-11T12:35:17Z)
- Tool-Augmented Reward Modeling [58.381678612409]
We propose a tool-augmented preference modeling approach, named Themis, which addresses the limitations of conventional reward models (RMs) by empowering them with access to external environments.
Our study delves into the integration of external tools into RMs, enabling them to interact with diverse external sources.
In human evaluations, RLHF trained with Themis attains an average win rate of 32% when compared to baselines.
arXiv Detail & Related papers (2023-10-02T09:47:40Z)
- Emotional Reaction Intensity Estimation Based on Multimodal Data [24.353102762289545]
This paper introduces our method for the Emotional Reaction Intensity (ERI) Estimation Challenge.
Based on the multimodal data provided by the organizers, we extract acoustic and visual features with different pretrained models.
arXiv Detail & Related papers (2023-03-16T09:14:47Z)
- Incorporating Emotions into Health Mention Classification Task on Social Media [70.23889100356091]
We present a framework for health mention classification that incorporates affective features.
We evaluate our approach on 5 HMC-related datasets from different social media platforms.
Our results indicate that HMC models infused with emotional knowledge are an effective alternative.
arXiv Detail & Related papers (2022-12-09T18:38:41Z)
- FAF: A novel multimodal emotion recognition approach integrating face, body and text [13.485538135494153]
We develop a large multimodal emotion dataset, named "HED" dataset, to facilitate the emotion recognition task.
To improve recognition accuracy, the "Feature After Feature" framework was used to explore crucial emotional information.
We employ various benchmarks to evaluate the "HED" dataset and compare the performance with our method.
arXiv Detail & Related papers (2022-11-20T14:43:36Z)
- Learning from Label Relationships in Human Affect [13.592112044121683]
We introduce a novel relational loss for multilabel regression and ordinal problems that regularises learning and leads to better generalisation.
We evaluate the proposed methodology on both continuous affect and schizophrenia severity estimation problems.
arXiv Detail & Related papers (2022-07-12T15:00:54Z)
- ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention [48.697458429460184]
Two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.
This paper proposes a well-designed model named ERNIE-Sparse.
It consists of two distinctive parts: (i) Hierarchical Sparse Transformer (HST) to sequentially unify local and global information, and (ii) Self-Attention Regularization (SAR) to minimize the distance for transformers with different attention topologies.
arXiv Detail & Related papers (2022-03-23T08:47:01Z)
- DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing [95.18236298557721]
We present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with unlabeled target data.
Our evaluation with four real-world sensing datasets compared against six baselines shows that DAPPER outperforms the state-of-the-art baseline by 39.8% in estimation accuracy.
arXiv Detail & Related papers (2021-11-22T08:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.