Label Distribution Amendment with Emotional Semantic Correlations for
Facial Expression Recognition
- URL: http://arxiv.org/abs/2107.11061v1
- Date: Fri, 23 Jul 2021 07:46:14 GMT
- Title: Label Distribution Amendment with Emotional Semantic Correlations for
Facial Expression Recognition
- Authors: Shasha Mao, Guanghui Shi, Licheng Jiao, Shuiping Gou, Yangyang Li, Lin
Xiong, Boxin Shi
- Abstract summary: We propose a new method that amends the label distribution of each facial image by leveraging correlations among expressions in the semantic space.
By comparing semantic and task class-relation graphs of each image, the confidence of its label distribution is evaluated.
Experimental results demonstrate that the proposed method is more effective than the compared state-of-the-art methods.
- Score: 69.18918567657757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By utilizing label distribution learning, a probability distribution is
assigned to a facial image to express a compound emotion, which effectively
mitigates the label uncertainty and noise inherent in one-hot labels. In
practice, it is observed that correlations among emotions are inherently
different; for example, surprise and happiness are more likely to co-occur
than surprise and neutral. This indicates that such correlations may be
crucial for obtaining a reliable label distribution. Based on this, we propose
a new method that amends the label distribution of each facial image by
leveraging correlations among expressions in the semantic space. Inspired by
the inherently diverse correlations among word embeddings (word2vec), the
topological information among facial expressions is first explored in the
semantic space, and each image is embedded into that space. Specifically, a
class-relation graph is constructed to transfer the semantic correlations
among expressions into the task space. By comparing the semantic and task
class-relation graphs of each image, the confidence of its label distribution
is evaluated. Based on this confidence, the label distribution is amended by
enhancing samples with higher confidence and weakening samples with lower
confidence. Experimental results demonstrate that the proposed method is more
effective than the compared state-of-the-art methods.
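The amendment pipeline described above (semantic and task class-relation graphs compared per sample to yield a confidence, which then sharpens or flattens the label distribution) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cosine-similarity graphs, the graph-difference confidence, and the exponent-based sharpening are all simplifying assumptions.

```python
import numpy as np

def class_relation_graph(class_vectors):
    """Cosine-similarity graph among class representations
    (e.g. word2vec embeddings of the expression names)."""
    v = class_vectors / np.linalg.norm(class_vectors, axis=1, keepdims=True)
    return v @ v.T

def amend_label_distribution(label_dist, semantic_graph, task_graph):
    """Sharpen a sample's label distribution when its semantic and task
    class-relation graphs agree, and flatten it when they disagree."""
    # Confidence: mean agreement between the two relation graphs, in [0, 1].
    disagreement = np.abs(semantic_graph - task_graph).mean()
    confidence = 1.0 - np.clip(disagreement, 0.0, 1.0)
    # Exponent > 1 sharpens the distribution; exponent -> 1 leaves it intact.
    amended = label_dist ** (1.0 + confidence)
    return amended / amended.sum()
```

When the two graphs coincide, the confidence is 1 and the distribution is squared and renormalized (enhanced); under strong disagreement the exponent approaches 1 and the distribution is left nearly unchanged.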
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather based on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z)
- The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability [7.1394038985662664]
Emotion expression and perception are nuanced, complex, and highly subjective processes.
Most speech emotion recognition tasks address this by averaging annotator labels as ground truth.
Previous work has attempted to learn distributions to capture emotion variability, but these methods also lose information about the individual annotators.
We introduce a novel method to create distributions from continuous model outputs that permit the learning of emotion distributions during model training.
arXiv Detail & Related papers (2024-08-21T19:24:06Z)
- Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification [37.823815777259036]
We introduce a method for categorizing emotions from text that accounts for the varied similarities and distinctions among emotions.
Our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification.
arXiv Detail & Related papers (2024-04-02T10:06:30Z)
- Handling Ambiguity in Emotion: From Out-of-Domain Detection to Distribution Estimation [45.53789836426869]
The subjective perception of emotion leads to inconsistent labels from human annotators.
This paper investigates three methods to handle ambiguous emotion.
We show that incorporating utterances without majority-agreed labels as an additional class in the classifier reduces the classification performance of the other emotion classes.
We also propose detecting utterances with ambiguous emotions as out-of-domain samples by quantifying the uncertainty in emotion classification using evidential deep learning.
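The evidential route mentioned above can be illustrated with the standard Dirichlet (subjective-logic) quantities used in evidential deep learning; the helper below is a generic sketch of those quantities, not the paper's model.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Dirichlet-based uncertainty from non-negative per-class evidence.
    Beliefs and the uncertainty mass sum to one; low total evidence
    (an ambiguous utterance) yields high uncertainty."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size
    alpha = evidence + 1.0        # Dirichlet parameters
    strength = alpha.sum()        # total Dirichlet strength
    belief = evidence / strength  # per-class belief masses
    uncertainty = k / strength    # uncertainty mass
    return belief, uncertainty
```

Thresholding `uncertainty` then gives a simple out-of-domain test for ambiguous samples.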
arXiv Detail & Related papers (2024-02-20T09:53:38Z)
- Unifying the Discrete and Continuous Emotion labels for Speech Emotion Recognition [28.881092401807894]
In paralinguistic analysis for emotion detection from speech, emotions have been identified with discrete or dimensional (continuous-valued) labels.
We propose a model to jointly predict continuous and discrete emotional attributes.
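A shared encoder with one categorical head and one dimensional head is the usual shape of such joint models. The toy forward pass below (hypothetical sizes, random weights) only illustrates that structure, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class JointEmotionModel:
    """Toy shared-encoder model: a softmax head for discrete emotion
    classes plus a linear head for continuous valence/arousal."""
    def __init__(self, d_in=16, d_hid=8, n_classes=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w_enc = rng.standard_normal((d_in, d_hid)) * 0.1
        self.w_cls = rng.standard_normal((d_hid, n_classes)) * 0.1
        self.w_va = rng.standard_normal((d_hid, 2)) * 0.1

    def forward(self, x):
        h = np.tanh(x @ self.w_enc)            # shared representation
        return softmax(h @ self.w_cls), h @ self.w_va
```

Training would sum a classification loss on the first output and a regression loss on the second, so both label types supervise the shared encoder.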
arXiv Detail & Related papers (2022-10-29T16:12:31Z)
- Label Uncertainty Modeling and Prediction for Speech Emotion Recognition using t-Distributions [15.16865739526702]
We propose to model the label distribution using a Student's t-distribution.
We derive the corresponding Kullback-Leibler divergence based loss function and use it to train an estimator for the distribution of emotion labels.
Results reveal that our t-distribution based approach improves over the Gaussian approach, achieving state-of-the-art uncertainty modeling results.
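A concrete reason to prefer a Student's t over a Gaussian for noisy emotion labels is its heavier tails. The univariate negative log-likelihoods below (generic textbook formulas, not the paper's exact KL-based loss) show that the t penalizes outlying labels far less.

```python
import math

def student_t_nll(x, mu, sigma, nu):
    """Negative log-likelihood of a univariate Student's t-distribution
    with location mu, scale sigma, and nu degrees of freedom."""
    z = (x - mu) / sigma
    log_pdf = (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
               - 0.5 * math.log(nu * math.pi) - math.log(sigma)
               - (nu + 1.0) / 2.0 * math.log1p(z * z / nu))
    return -log_pdf

def gaussian_nll(x, mu, sigma):
    """Gaussian negative log-likelihood, for comparison."""
    z = (x - mu) / sigma
    return 0.5 * math.log(2.0 * math.pi) + math.log(sigma) + 0.5 * z * z
```

A label five scales from the mean costs roughly 5.5 nats under a t with 3 degrees of freedom but over 13 under the Gaussian, so a single disagreeing annotator dominates training far less.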
arXiv Detail & Related papers (2022-07-25T12:38:20Z)
- A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives and crops of other images as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z)
- Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study [59.95267695402516]
This work aims to empirically re-examine the claim that label smoothing is incompatible with knowledge distillation.
We provide a novel connection showing how label smoothing affects the distributions of semantically similar and dissimilar classes.
We study the one-sidedness and imperfection of the incompatibility view through extensive analyses, visualizations, and comprehensive experiments.
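For reference, the label-smoothing operation at issue simply mixes the one-hot target with a uniform distribution over the classes; a minimal version:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Standard label smoothing: move eps probability mass from the
    true class to a uniform distribution over all classes."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k
```

The smoothed targets pull logits of non-true classes toward each other, which is exactly the effect on semantically similar versus dissimilar classes that the study analyzes.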
arXiv Detail & Related papers (2021-04-01T17:59:12Z)
- EmoGraph: Capturing Emotion Correlations using Graph Networks [71.53159402053392]
We propose EmoGraph that captures the dependencies among different emotions through graph networks.
EmoGraph outperforms strong baselines, especially for macro-F1.
An experiment illustrates that the captured emotion correlations can also benefit a single-label classification task.
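A graph network over an emotion correlation graph typically propagates class features through a normalized adjacency matrix. The single symmetric-normalized graph-convolution step below is a generic sketch of that operation, not EmoGraph's exact architecture.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically
    normalize (D^-1/2 (A+I) D^-1/2), propagate, then apply ReLU."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ feats @ weight, 0.0)
```

Stacking such layers lets each emotion's representation absorb information from correlated emotions, which is how co-occurrence structure reaches the classifier.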
arXiv Detail & Related papers (2020-08-21T08:59:29Z)
- Debiased Contrastive Learning [64.98602526764599]
We develop a debiased contrastive objective that corrects for the sampling of same-label datapoints.
Empirically, the proposed objective consistently outperforms the state-of-the-art for representation learning in vision, language, and reinforcement learning benchmarks.
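The correction can be sketched as a debiased InfoNCE estimator: the negative term subtracts the expected contribution of false negatives under an assumed class prior `tau_plus`, clamped at its theoretical floor. This is an illustrative reconstruction; the parameter names and the clamp follow common usage rather than being taken verbatim from the paper.

```python
import numpy as np

def debiased_contrastive_loss(pos_sim, neg_sims, tau_plus=0.1, t=0.5):
    """Debiased InfoNCE: correct the negative term for the probability
    tau_plus that a sampled 'negative' actually shares the anchor's label."""
    pos = np.exp(pos_sim / t)
    neg = np.exp(np.asarray(neg_sims, dtype=float) / t)
    n = neg.size
    # Remove the expected false-negative mass, clamped at its theoretical floor.
    ng = (neg.sum() - n * tau_plus * pos) / (1.0 - tau_plus)
    ng = max(ng, n * np.exp(-1.0 / t))
    return -np.log(pos / (pos + ng))
```

With `tau_plus=0` this reduces exactly to the standard InfoNCE loss, so the correction is a strict generalization.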
arXiv Detail & Related papers (2020-07-01T04:25:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.