Modeling Label Semantics for Predicting Emotional Reactions
- URL: http://arxiv.org/abs/2006.05489v2
- Date: Sun, 28 Jun 2020 23:47:04 GMT
- Title: Modeling Label Semantics for Predicting Emotional Reactions
- Authors: Radhika Gaonkar, Heeyoung Kwon, Mohaddeseh Bastan, Niranjan
Balasubramanian, Nathanael Chambers
- Abstract summary: Predicting how events induce emotions in the characters of a story is typically seen as a standard multi-label classification task.
We propose that the semantics of emotion labels can guide a model's attention when representing the input story.
We explicitly model label classes via label embeddings, and add mechanisms that track label-label correlations both during training and inference.
- Score: 21.388457946558976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting how events induce emotions in the characters of a story is
typically seen as a standard multi-label classification task. Such approaches
usually treat labels as anonymous classes to predict, ignoring information that
may be conveyed by the emotion labels themselves. We propose that the semantics of
emotion labels can guide a model's attention when representing the input story.
Further, we observe that the emotions evoked by an event are often related: an
event that evokes joy is unlikely to also evoke sadness. In this work, we
explicitly model label classes via label embeddings, and add mechanisms that
track label-label correlations both during training and inference. We also
introduce a new semi-supervision strategy that regularizes for the correlations
on unlabeled data. Our empirical evaluations show that modeling label semantics
yields consistent benefits, and we advance the state-of-the-art on an emotion
inference task.
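The core idea of letting label semantics guide attention can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings, dimensions, and the "joy" label vector below are all hypothetical, and attention is reduced to dot products followed by a softmax.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def label_attention(token_vecs, label_vec):
    """Pool token vectors into a label-specific story representation.

    Each token's attention score is its dot product with the label
    embedding, so tokens semantically close to the label dominate
    the pooled representation.
    """
    scores = [sum(t * l for t, l in zip(tok, label_vec)) for tok in token_vecs]
    weights = softmax(scores)
    dim = len(token_vecs[0])
    return [sum(w * tok[i] for w, tok in zip(weights, token_vecs))
            for i in range(dim)]

# Toy 2-d example: two token vectors and a hypothetical "joy" label embedding
# that is aligned with the first token.
tokens = [[1.0, 0.0], [0.0, 1.0]]
joy = [2.0, 0.0]
rep = label_attention(tokens, joy)
```

Here `rep` is dominated by the first token, since it aligns with the label embedding; a different label vector would pool the same story tokens differently, which is the sense in which label semantics guide attention.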
Related papers
- The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability [7.1394038985662664]
Emotion expression and perception are nuanced, complex, and highly subjective processes.
Most speech emotion recognition tasks address this by averaging annotator labels as ground truth.
Previous work has attempted to learn distributions to capture emotion variability, but these methods also lose information about the individual annotators.
We introduce a novel method to create distributions from continuous model outputs that permit the learning of emotion distributions during model training.
arXiv Detail & Related papers (2024-08-21T19:24:06Z) - LanSER: Language-Model Supported Speech Emotion Recognition [25.597250907836152]
We present LanSER, a method that enables the use of unlabeled data by inferring weak emotion labels via pre-trained large language models.
For inferring weak labels constrained to a taxonomy, we use a textual entailment approach that selects an emotion label with the highest entailment score for a speech transcript extracted via automatic speech recognition.
Our experimental results show that models pre-trained on large datasets with this weak supervision outperform other baseline models on standard SER datasets when fine-tuned, and show improved label efficiency.
arXiv Detail & Related papers (2023-09-07T19:21:08Z) - Leveraging Label Information for Multimodal Emotion Recognition [22.318092635089464]
Multimodal emotion recognition (MER) aims to detect the emotional status of a given expression by combining speech and text information.
We propose a novel approach for MER by leveraging label information.
We devise a novel label-guided attentive fusion module to fuse the label-aware text and speech representations for emotion classification.
arXiv Detail & Related papers (2023-09-05T10:26:32Z) - Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z) - Bridging the Gap between Model Explanations in Partially Annotated
Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z) - Unifying the Discrete and Continuous Emotion labels for Speech Emotion
Recognition [28.881092401807894]
In paralinguistic analysis for emotion detection from speech, emotions have been identified with discrete or dimensional (continuous-valued) labels.
We propose a model to jointly predict continuous and discrete emotional attributes.
arXiv Detail & Related papers (2022-10-29T16:12:31Z) - Leveraging Label Correlations in a Multi-label Setting: A Case Study in
Emotion [0.0]
We exploit label correlations in multi-label emotion recognition models to improve emotion detection.
We demonstrate state-of-the-art performance across Spanish, English, and Arabic in SemEval 2018 Task 1 E-c using monolingual BERT-based models.
arXiv Detail & Related papers (2022-10-28T02:27:18Z) - Acknowledging the Unknown for Multi-label Learning with Single Positive
Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z) - Label Distribution Amendment with Emotional Semantic Correlations for
Facial Expression Recognition [69.18918567657757]
We propose a new method that amends the label distribution of each facial image by leveraging correlations among expressions in the semantic space.
The confidence of each image's label distribution is evaluated by comparing its semantic and task class-relation graphs.
Experimental results demonstrate the proposed method is more effective than competing state-of-the-art methods.
arXiv Detail & Related papers (2021-07-23T07:46:14Z) - A Study on the Autoregressive and non-Autoregressive Multi-label
Learning [77.11075863067131]
We propose a self-attention based variational encoder model to jointly extract label-label and label-feature dependencies.
Our model can therefore be used to predict all labels in parallel while still including both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z) - Exploiting Context for Robustness to Label Noise in Active Learning [47.341705184013804]
We address the problems of how a system can identify which of the queried labels are wrong and how a multi-class active learning system can be adapted to minimize the negative impact of label noise.
We construct a graphical representation of the unlabeled data to encode these relationships and obtain new beliefs on the graph when noisy labels are available.
This is demonstrated in three different applications: scene classification, activity classification, and document classification.
arXiv Detail & Related papers (2020-10-18T18:59:44Z)
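One recurring idea in the list above, the entropy-maximization (EM) loss from the single-positive multi-label entry, can be sketched in a few lines. This is a hedged illustration of the general principle (push predictions on unannotated labels toward maximal uncertainty), not the paper's exact loss; the function name and the binary-entropy formulation are assumptions.

```python
import math

def em_loss(probs, annotated):
    """Sketch of an entropy-maximization loss for unannotated labels.

    For each label with no annotation, we add the *negative* binary
    entropy of its predicted probability, so minimizing this loss
    maximizes entropy (i.e. keeps unannotated predictions uncertain).
    """
    eps = 1e-12  # guard against log(0)
    total, count = 0.0, 0
    for p, known in zip(probs, annotated):
        if known:
            continue  # only unannotated labels contribute
        entropy = -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
        total += -entropy
        count += 1
    return total / max(count, 1)

# Uncertain predictions (p = 0.5) have maximal entropy and therefore
# a lower loss than confident predictions on unannotated labels.
loss_uncertain = em_loss([0.5, 0.5], [False, False])
loss_confident = em_loss([0.99, 0.01], [False, False])
```

Annotated positives would be handled by a standard classification loss elsewhere; this term only discourages the model from collapsing all unannotated labels to confident negatives.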
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.