Fighting noise and imbalance in Action Unit detection problems
- URL: http://arxiv.org/abs/2303.02994v1
- Date: Mon, 6 Mar 2023 09:41:40 GMT
- Title: Fighting noise and imbalance in Action Unit detection problems
- Authors: Gauthier Tallec, Arnaud Dapogny and Kevin Bailly
- Abstract summary: Action Unit (AU) detection aims at automatically characterizing facial expressions with the muscular activations they involve.
The available databases display limited face variability and are imbalanced toward neutral expressions.
We propose Robin Hood Label Smoothing (RHLS), which restricts label smoothing's confidence reduction to the majority class.
- Score: 7.971065005161565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Action Unit (AU) detection aims at automatically characterizing facial
expressions with the muscular activations they involve. Its main interest is to
provide a low-level face representation that can assist the learning of higher-level
affective computing tasks. Yet, it is a challenging task. Indeed, the
available databases display limited face variability and are imbalanced toward
neutral expressions. Furthermore, as AUs involve subtle facial movements, they are
difficult to annotate, so some of the few provided datapoints may be
mislabeled. In this work, we aim to exploit the ability of label smoothing to
mitigate the impact of noisy examples by reducing confidence [1]. However, applying
label smoothing as-is may aggravate the pre-existing imbalance-induced
under-confidence issue and degrade performance. To circumvent this issue, we
propose Robin Hood Label Smoothing (RHLS), whose principle is to restrict label
smoothing's confidence reduction to the majority class. In doing so, it
alleviates both the imbalance-induced over-confidence on the majority class and the
negative impact of noisy majority-class examples. From an experimental standpoint, we
show that RHLS provides a performance improvement in AU detection at no extra cost. In
particular, by applying it on top of a modern multi-task baseline, we obtain
promising results on BP4D and outperform state-of-the-art methods on DISFA.
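
The abstract describes RHLS only at the level of its principle. As a hedged illustration, the PyTorch sketch below applies the label-smoothing confidence reduction only to majority-class (AU-absent) targets in a multi-label binary setting, leaving minority-class (AU-present) targets at full confidence. The function names, the binary cross-entropy framing, and the smoothing value `eps` are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def robin_hood_smoothing(targets: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Smooth only the majority-class (AU-absent, label 0) targets.

    AU-present targets (label 1) keep full confidence; only the
    over-represented negatives have their confidence reduced.
    (Illustrative sketch, not the paper's implementation.)
    """
    # 0 -> eps (reduced confidence on the majority class); 1 -> 1 (unchanged).
    return torch.where(targets > 0.5, targets, torch.full_like(targets, eps))

def rhls_bce_loss(logits: torch.Tensor, targets: torch.Tensor,
                  eps: float = 0.1) -> torch.Tensor:
    """Binary cross-entropy against Robin-Hood-smoothed targets."""
    smoothed = robin_hood_smoothing(targets.float(), eps)
    return F.binary_cross_entropy_with_logits(logits, smoothed)

# Usage: a batch of 4 faces with 12 AUs (multi-label binary detection).
logits = torch.randn(4, 12)                    # one logit per (face, AU)
labels = torch.randint(0, 2, (4, 12)).float()  # 1 = AU active, 0 = inactive
loss = rhls_bce_loss(logits, labels)
```

Standard label smoothing would instead move both the 0 and the 1 targets toward 0.5; the extra confidence reduction this imposes on the rare positive class is exactly what RHLS is designed to avoid.
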
Related papers
- Active Negative Loss: A Robust Framework for Learning with Noisy Labels [26.853357479214004]
Noise-robust loss functions offer an effective solution for enhancing learning in the presence of label noise.
We introduce a novel loss function class, termed Normalized Negative Loss Functions (NNLFs), which serve as passive loss functions within the Active Passive Loss (APL) framework.
In non-symmetric noise scenarios, we propose an entropy-based regularization technique to mitigate vulnerability to label imbalance.
arXiv Detail & Related papers (2024-12-03T11:00:15Z)
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations.
arXiv Detail & Related papers (2024-02-27T13:49:12Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Combating Uncertainty and Class Imbalance in Facial Expression Recognition [4.306007841758853]
We propose a framework based on ResNet and attention to solve the above problems.
Our method surpasses most basic methods in terms of accuracy on facial expression datasets.
arXiv Detail & Related papers (2022-12-15T12:09:02Z)
- Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System [20.979192130022334]
We propose a novel solution referred to as low-frequency adversarial perturbation (LFAP).
This method conditions the source model to leverage low-frequency characteristics through adversarial training.
We also introduce an improved low-mid frequency adversarial perturbation (LMFAP) that incorporates mid-frequency components for an additive benefit.
arXiv Detail & Related papers (2022-06-19T14:15:49Z)
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, generating hard pseudo-labels by a teacher model on unlabeled data as supervisory signals.
We analyze the challenges these methods meet with the empirical experiment results.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Meta Auxiliary Learning for Facial Action Unit Detection [84.22521265124806]
We consider learning AU detection and facial expression recognition in a multi-task manner.
The performance of the AU detection task cannot always be enhanced, due to negative transfer in the multi-task scenario.
We propose a Meta Auxiliary Learning method (MAL) that automatically selects highly related FE samples by learning adaptive weights for the training FE samples in a meta-learning manner.
arXiv Detail & Related papers (2021-05-14T02:28:40Z)
- Semantic Neighborhood-Aware Deep Facial Expression Recognition [14.219890078312536]
A novel method is proposed to formulate semantic perturbation and select unreliable samples during training.
Experiments show the effectiveness of the proposed method and state-of-the-art results are reported.
arXiv Detail & Related papers (2020-04-27T11:48:17Z)
- Suppressing Uncertainties for Large-Scale Facial Expression Recognition [81.51495681011404]
This paper proposes a simple yet efficient Self-Cure Network (SCN) which suppresses the uncertainties efficiently and prevents deep networks from over-fitting uncertain facial images.
Results on public benchmarks demonstrate that our SCN outperforms current state-of-the-art methods with 88.14% on RAF-DB, 60.23% on AffectNet, and 89.35% on FERPlus.
arXiv Detail & Related papers (2020-02-24T17:24:36Z)