Dynamic Adaptive Threshold based Learning for Noisy Annotations Robust
Facial Expression Recognition
- URL: http://arxiv.org/abs/2208.10221v1
- Date: Mon, 22 Aug 2022 12:02:41 GMT
- Title: Dynamic Adaptive Threshold based Learning for Noisy Annotations Robust
Facial Expression Recognition
- Authors: Darshan Gera, Naveen Siva Kumar Badveeti, Bobbili Veerendra Raj Kumar
and S Balasubramanian
- Abstract summary: We propose a dynamic FER learning framework (DNFER) to handle noisy annotations.
Specifically, DNFER is based on supervised training using selected clean samples and unsupervised consistent training using all the samples.
We demonstrate the robustness of DNFER on both synthetic and real noisy annotated FER datasets such as RAFDB, FERPlus, SFEW and AffectNet.
- Score: 3.823356975862006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world facial expression recognition (FER) datasets suffer from
noisy annotations due to crowd-sourcing, ambiguity in expressions, the
subjectivity of annotators and inter-class similarity. Moreover, recent deep
networks have a strong capacity to memorize noisy annotations, leading to
corrupted feature embeddings and poor generalization. To handle noisy
annotations, we propose a dynamic FER learning framework (DNFER) in which clean
samples are selected based on a dynamic class-specific threshold during
training. Specifically, DNFER is based on supervised training using selected
clean samples and unsupervised consistent training using all the samples.
During training, the mean posterior class probabilities of each mini-batch are
used as dynamic class-specific thresholds to select the clean samples for
supervised training. Unlike other methods, this threshold is independent of the
noise rate and does not require any clean data. In addition, to learn from all
samples, the posterior distributions of each weakly-augmented image and its
strongly-augmented counterpart are aligned using an unsupervised consistency
loss. We demonstrate the robustness of DNFER on both synthetic and real noisy
annotated FER datasets such as RAFDB, FERPlus, SFEW and AffectNet.
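The two ingredients of DNFER described above can be rendered compactly. The sketch below is a minimal PyTorch reading of the abstract only (the `model` interface, tensor names, and the equal loss weighting are assumptions, not the authors' released code): per-class mean posteriors of the mini-batch act as dynamic thresholds for clean-sample selection, and a KL consistency loss aligns weak and strong augmentation posteriors over all samples.

```python
import torch
import torch.nn.functional as F

def dnfer_loss(model, x_weak, x_strong, labels):
    """One mini-batch of DNFER-style training (sketch, not the authors' code)."""
    logits_w = model(x_weak)                  # [B, C] logits from weak view
    logits_s = model(x_strong)                # [B, C] logits from strong view
    probs_w = logits_w.softmax(dim=1)

    # Dynamic class-specific threshold: mean posterior probability of each
    # class over the current mini-batch (needs no noise rate or clean set).
    thresholds = probs_w.mean(dim=0)          # [C]

    # A sample counts as "clean" if its posterior for the annotated class
    # exceeds that class's dynamic threshold.
    p_labeled = probs_w.gather(1, labels.view(-1, 1)).squeeze(1)   # [B]
    clean_mask = p_labeled > thresholds[labels]

    # Supervised cross-entropy on selected clean samples only.
    if clean_mask.any():
        sup_loss = F.cross_entropy(logits_w[clean_mask], labels[clean_mask])
    else:
        sup_loss = logits_w.new_zeros(())

    # Unsupervised consistency on *all* samples: align the strong-view
    # posteriors to the (detached) weak-view posteriors.
    cons_loss = F.kl_div(logits_s.log_softmax(dim=1),
                         probs_w.detach(), reduction="batchmean")

    # Equal weighting assumed; the abstract does not specify loss weights.
    return sup_loss + cons_loss
```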
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
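The summary names only the core operation, an affine transformation of the (frozen) pre-trained feature space. A minimal sketch of that operation in PyTorch follows; the module name, dimensions, and the classification head are illustrative assumptions, not NMTune's exact design:

```python
import torch.nn as nn

class AffineFeatureTune(nn.Module):
    """Illustrative affine mapping of frozen pre-trained features."""
    def __init__(self, feat_dim: int = 768, num_classes: int = 10):
        super().__init__()
        self.affine = nn.Linear(feat_dim, feat_dim)   # W f + b on frozen features
        self.head = nn.Linear(feat_dim, num_classes)  # task head trained on top

    def forward(self, frozen_features):
        # Only the affine map and head are trained; the backbone stays frozen.
        return self.head(self.affine(frozen_features))
```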
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy Labels [99.70895640578816]
Federated learning with noisy labels (F-LNL) aims at seeking an optimal server model via collaborative distributed learning.
We present FedDiv to tackle the challenges of F-LNL. Specifically, we propose a global noise filter called Federated Noise Filter.
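The summary does not spell out how the noise filter works. As a purely illustrative building block (not FedDiv's documented design), a common way to implement such a filter is to fit a two-component Gaussian mixture over per-sample training losses and treat the low-loss component as clean:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_probability(per_sample_losses: np.ndarray) -> np.ndarray:
    """Probability that each sample is clean, via a 2-component GMM on losses.
    (Illustrative noise filter; FedDiv's actual filter may differ.)"""
    losses = per_sample_losses.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))  # low-loss component = clean
    return gmm.predict_proba(losses)[:, clean_comp]
```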
arXiv Detail & Related papers (2023-12-19T15:46:47Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
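A hedged sketch of the general idea: score each training image against a text prompt built from its assigned label using OpenAI's CLIP, then keep the best-matching fraction. The prompt template and keep-ratio below are assumptions, not the paper's settings:

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def label_match_scores(images, label_names, labels):
    """Cosine similarity between each (preprocessed) image and the text of
    its assigned label. Higher score = image agrees with its annotation."""
    image_feats = model.encode_image(images.to(device))
    text = clip.tokenize([f"a photo of a {label_names[y]}" for y in labels]).to(device)
    text_feats = model.encode_text(text)
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    return (image_feats * text_feats).sum(dim=-1)    # [B]

def select_clean(scores, keep_ratio=0.7):
    """Keep the fraction of samples with the highest image-label agreement."""
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices
```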
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- ASM: Adaptive Sample Mining for In-The-Wild Facial Expression Recognition [19.846612021056565]
We introduce a novel approach called Adaptive Sample Mining to address ambiguity and noise within each expression category.
Our method can effectively mine both ambiguity and noise, and outperforms SOTA methods on both synthetic noisy and original datasets.
arXiv Detail & Related papers (2023-10-09T11:18:22Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling [22.62790706276081]
Training deep neural networks (DNNs) with noisy labels is practically challenging.
Previous efforts tend to handle either part of the data or all of it in a single unified denoising flow.
We propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner.
arXiv Detail & Related papers (2022-08-23T02:06:38Z)
- Context-based Virtual Adversarial Training for Text Classification with Noisy Labels [1.9508698179748525]
We propose context-based virtual adversarial training (ConVAT) to prevent a text classifier from overfitting to noisy labels.
Unlike previous works, the proposed method performs adversarial training at the context level rather than at the input level.
We conduct extensive experiments on four text classification datasets with two types of label noises.
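The one-line description, adversarial training on contexts (hidden representations) rather than raw inputs, can be made concrete with a generic virtual adversarial training step applied to hidden states. This is standard VAT moved to the embedding level, not ConVAT's exact procedure; `classifier_head` and the hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def vat_loss_on_embeddings(classifier_head, hidden, eps=1.0, xi=1e-6):
    """Generic VAT on hidden/context vectors (illustrative, not ConVAT itself).

    Finds a small perturbation of the hidden states that most changes the
    prediction, then penalizes that change."""
    with torch.no_grad():
        p_clean = classifier_head(hidden).softmax(dim=-1)

    # One-step power-iteration approximation of the worst-case direction.
    d = torch.randn_like(hidden)
    d = F.normalize(d.flatten(1), dim=1).view_as(hidden).requires_grad_()
    adv_kl = F.kl_div(classifier_head(hidden + xi * d).log_softmax(dim=-1),
                      p_clean, reduction="batchmean")
    grad = torch.autograd.grad(adv_kl, d)[0]
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(hidden)

    # Consistency penalty under the adversarial perturbation.
    return F.kl_div(classifier_head(hidden + r_adv).log_softmax(dim=-1),
                    p_clean, reduction="batchmean")
```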
arXiv Detail & Related papers (2022-05-29T14:19:49Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements over the state of the art.
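The parameter-free property follows because each class is represented by the mean of its embeddings rather than by learned weights. A minimal sketch, with function names and the cosine-similarity choice as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, num_classes):
    """Mean embedding per class; no trainable parameters beyond the encoder."""
    protos = torch.zeros(num_classes, embeddings.size(1),
                         device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototypical_predict(embeddings, prototypes):
    """Classify by highest cosine similarity to a class prototype."""
    sims = F.normalize(embeddings, dim=1) @ prototypes.t()   # [B, C]
    return sims.argmax(dim=1)
```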
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Multi-Objective Interpolation Training for Robustness to Label Noise [17.264550056296915]
We show that standard supervised contrastive learning degrades in the presence of label noise.
We propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning.
Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results.
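The noise-detection idea, flagging samples whose given label disagrees with their neighbors in the robust feature space, can be sketched with a simple k-NN agreement test. The value of k and the flagging threshold are illustrative; MOIT's exact criterion may differ:

```python
import torch
import torch.nn.functional as F

def knn_label_agreement(features, labels, k=10):
    """Fraction of each sample's k nearest neighbors (cosine) that share its
    label. Low agreement suggests a noisy annotation.
    (Illustrative rule, not MOIT's exact detector.)"""
    f = F.normalize(features, dim=1)
    sims = f @ f.t()                          # [N, N] cosine similarities
    sims.fill_diagonal_(-1.0)                 # exclude self-matches
    nn_idx = sims.topk(k, dim=1).indices      # [N, k] neighbor indices
    agree = (labels[nn_idx] == labels.unsqueeze(1)).float().mean(dim=1)
    return agree                              # e.g. flag agree < 0.5 as noisy
```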
arXiv Detail & Related papers (2020-12-08T15:01:54Z)