Class adaptive threshold and negative class guided noisy annotation
robust Facial Expression Recognition
- URL: http://arxiv.org/abs/2305.01884v1
- Date: Wed, 3 May 2023 04:28:49 GMT
- Title: Class adaptive threshold and negative class guided noisy annotation
robust Facial Expression Recognition
- Authors: Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar,
S Balasubramanian
- Abstract summary: Noisy annotations are inherently present in datasets because labeling depends on the annotator's subjectivity, the clarity of the image, and other factors.
Recent works use sample selection methods to solve this noisy annotation problem in FER.
In our work, we use a dynamic adaptive threshold to separate confident samples from non-confident ones so that learning is not hampered by non-confident samples.
- Score: 3.823356975862006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The hindering problem in facial expression recognition (FER) is the presence
of inaccurate annotations, referred to as noisy annotations, in the datasets.
These noisy annotations are inherently present in the datasets because the
labeling is subjective to the annotator, depends on image clarity, and so on.
Recent works use sample selection methods to address this noisy annotation
problem in FER. In our work, we use a dynamic adaptive threshold to separate
confident samples from non-confident ones so that learning is not hampered by
non-confident samples. Instead of discarding the non-confident samples, we
impose consistency on the negative classes of those non-confident samples to
guide the model to learn the positive class better. Since FER datasets usually
come with 7 or 8 classes, even a randomly chosen class is a correct negative
class with roughly 85% probability (6/7 ≈ 0.86 for 7 classes). By learning
"which class a sample doesn't belong to", the model can better learn "which
class it belongs to". We demonstrate the proposed framework's effectiveness
using quantitative as well as qualitative results. Our method outperforms the
baseline by margins of 4% to 28% on RAFDB and 3.3% to 31.4% on FERPlus for
various levels of synthetic noisy labels in the aforementioned datasets.
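The abstract sketches two ingredients: a class-wise adaptive confidence threshold that splits a batch into confident and non-confident samples, and a consistency objective applied only to the negative classes of the non-confident ones. Below is a minimal PyTorch sketch of how such a loss could be assembled; the function names, the per-class threshold rule (mean confidence of samples annotated with each class), and the weak/strong augmentation pairing are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of class-adaptive thresholding + negative-class consistency.
# All names here are illustrative, not the paper's released code.
import torch
import torch.nn.functional as F


def class_adaptive_thresholds(probs, labels, num_classes, floor=0.5):
    """Per-class threshold = mean softmax confidence of samples annotated with
    that class, clamped below by `floor` (assumed rule for illustration)."""
    thresholds = torch.full((num_classes,), floor, device=probs.device)
    conf = probs.gather(1, labels.view(-1, 1)).squeeze(1)  # confidence on the given label
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            thresholds[c] = torch.clamp(conf[mask].mean(), min=floor)
    return thresholds


def negative_class_consistency(logits_weak, logits_strong, labels):
    """Make the two augmented views agree on the *negative* classes: zero out
    the (possibly noisy) annotated class and match the remaining mass."""
    p_w = F.softmax(logits_weak, dim=1)
    p_s = F.softmax(logits_strong, dim=1)
    neg_mask = torch.ones_like(p_w)
    neg_mask.scatter_(1, labels.view(-1, 1), 0.0)  # drop the positive class
    q_w = (p_w * neg_mask) / (p_w * neg_mask).sum(1, keepdim=True).clamp_min(1e-8)
    q_s = (p_s * neg_mask) / (p_s * neg_mask).sum(1, keepdim=True).clamp_min(1e-8)
    return F.kl_div(q_s.clamp_min(1e-8).log(), q_w, reduction="batchmean")


def training_loss(logits_weak, logits_strong, labels, num_classes=7):
    probs = F.softmax(logits_weak, dim=1)
    thr = class_adaptive_thresholds(probs, labels, num_classes)
    conf = probs.gather(1, labels.view(-1, 1)).squeeze(1)
    confident = conf >= thr[labels]              # sample selection per class
    loss = torch.tensor(0.0, device=logits_weak.device)
    if confident.any():                          # supervised CE on clean-looking samples
        loss = loss + F.cross_entropy(logits_weak[confident], labels[confident])
    if (~confident).any():                       # guide the rest via negative classes
        loss = loss + negative_class_consistency(
            logits_weak[~confident], logits_strong[~confident], labels[~confident])
    return loss
```

Masking out the annotated class before the consistency term is what turns it into "negative class" guidance: even when the given label is wrong, the distribution over the remaining 6 of 7 classes is mostly informative, which is the ~85% argument made in the abstract.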
Related papers
- Learning with Confidence: Training Better Classifiers from Soft Labels [0.0]
In supervised machine learning, models are typically trained using data with hard labels, i.e., definite assignments of class membership.
We investigate whether incorporating label uncertainty, represented as discrete probability distributions over the class labels, improves the predictive performance of classification models.
arXiv Detail & Related papers (2024-09-24T13:12:29Z)
- Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection [82.43311784594384]
Real-world datasets contain not only noisy labels but also class imbalance.
We propose a simple yet effective method to address noisy labels in imbalanced datasets.
arXiv Detail & Related papers (2024-02-17T10:34:53Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [77.45468386115306]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Neighborhood-Regularized Self-Training for Learning with Few Labels [21.7848889781112]
One drawback of self-training is that it is vulnerable to the label noise from incorrect pseudo labels.
We develop a neighborhood-based sample selection approach to tackle the issue of noisy pseudo labels.
Our proposed data selection strategy reduces the noise of pseudo labels by 36.8% and saves 57.3% of the time when compared with the best baseline.
arXiv Detail & Related papers (2023-01-10T00:07:33Z)
- Learning to Detect Noisy Labels Using Model-Based Features [16.681748918518075]
We propose Selection-Enhanced Noisy label Training (SENT)
SENT does not rely on meta learning while having the flexibility of being data-driven.
It improves performance over strong baselines under the settings of self-training and label corruption.
arXiv Detail & Related papers (2022-12-28T10:12:13Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this perspective, we propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- NorMatch: Matching Normalizing Flows with Discriminative Classifiers for Semi-Supervised Learning [8.749830466953584]
Semi-Supervised Learning (SSL) aims to learn a model using a tiny labeled set and massive amounts of unlabeled data.
In this work we introduce a new framework for SSL named NorMatch.
We demonstrate, through numerical and visual results, that NorMatch achieves state-of-the-art performance on several datasets.
arXiv Detail & Related papers (2022-11-17T15:39:18Z)
- Dynamic Adaptive Threshold based Learning for Noisy Annotations Robust Facial Expression Recognition [3.823356975862006]
We propose a dynamic FER learning framework (DNFER) to handle noisy annotations.
Specifically, DNFER is based on supervised training using selected clean samples and unsupervised consistent training using all the samples.
We demonstrate the robustness of DNFER on both synthetic as well as on real noisy annotated FER datasets like RAFDB, FERPlus, SFEW and AffectNet.
arXiv Detail & Related papers (2022-08-22T12:02:41Z)
- UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning [89.56465237941013]
We propose UNICON, a simple yet effective sample selection method which is robust to high label noise.
We obtain an 11.4% improvement over the current state-of-the-art on CIFAR100 dataset with a 90% noise rate.
arXiv Detail & Related papers (2022-03-28T07:36:36Z)
- Active Learning by Feature Mixing [52.16150629234465]
We propose a novel method for batch active learning called ALFA-Mix.
We identify unlabelled instances with sufficiently-distinct features by seeking inconsistencies in predictions.
We show that inconsistencies in these predictions help discovering features that the model is unable to recognise in the unlabelled instances.
arXiv Detail & Related papers (2022-03-14T12:20:54Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)