ASM: Adaptive Sample Mining for In-The-Wild Facial Expression
Recognition
- URL: http://arxiv.org/abs/2310.05618v1
- Date: Mon, 9 Oct 2023 11:18:22 GMT
- Title: ASM: Adaptive Sample Mining for In-The-Wild Facial Expression
Recognition
- Authors: Ziyang Zhang, Xiao Sun, Liuwei An, Meng Wang
- Abstract summary: We introduce a novel approach called Adaptive Sample Mining to address ambiguity and noise within each expression category.
Our method can effectively mine both ambiguity and noise, and outperform SOTA methods on both synthetic noisy and original datasets.
- Score: 19.846612021056565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the similarity between facial expression categories, the presence of
compound facial expressions, and the subjectivity of annotators, facial
expression recognition (FER) datasets often suffer from ambiguity and noisy
labels. Ambiguous expressions are challenging to differentiate from expressions
with noisy labels, which hurts the robustness of FER models. Furthermore, the
difficulty of recognition varies across different expression categories,
rendering a uniform approach unfair for all expressions. In this paper, we
introduce a novel approach called Adaptive Sample Mining (ASM) to dynamically
address ambiguity and noise within each expression category. First, the
Adaptive Threshold Learning module generates two thresholds, namely the clean
and noisy thresholds, for each category. These thresholds are based on the mean
class probabilities at each training epoch. Next, the Sample Mining module
partitions the dataset into three subsets: clean, ambiguity, and noise, by
comparing the sample confidence with the clean and noisy thresholds. Finally,
the Tri-Regularization module employs a mutual learning strategy for the
ambiguity subset to enhance discrimination ability, and an unsupervised
learning strategy for the noise subset to mitigate the impact of noisy labels.
Extensive experiments prove that our method can effectively mine both ambiguity
and noise, and outperform SOTA methods on both synthetic noisy and original
datasets. The supplementary material is available at
https://github.com/zzzzzzyang/ASM.
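To make the pipeline concrete, below is a minimal sketch of the Adaptive Threshold Learning and Sample Mining steps. The abstract states only that both thresholds derive from each category's mean class probability at each epoch, so the margin `delta` and the exact threshold formulas are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def adaptive_thresholds(probs, labels, num_classes, delta=0.1):
    # probs:  (N, C) softmax outputs at the current epoch
    # labels: (N,)   annotated expression categories
    # conf[i] is the model's confidence in sample i's given label.
    conf = probs[np.arange(len(labels)), labels]
    t_clean = np.zeros(num_classes)
    t_noisy = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        mean_c = conf[mask].mean() if mask.any() else 0.5
        t_clean[c] = min(mean_c + delta, 1.0)  # above this bar: clean
        t_noisy[c] = max(mean_c - delta, 0.0)  # below this bar: noisy
    return t_clean, t_noisy

def mine_samples(probs, labels, t_clean, t_noisy):
    # Partition the dataset into clean / ambiguous / noisy index masks
    # by comparing each sample's confidence with its category's thresholds.
    conf = probs[np.arange(len(labels)), labels]
    clean = conf >= t_clean[labels]
    noisy = conf < t_noisy[labels]
    ambiguous = ~clean & ~noisy  # confidence between the two thresholds
    return clean, ambiguous, noisy
```

In the full method, the clean subset would receive ordinary supervised training, while the Tri-Regularization module applies mutual learning to the ambiguous subset and unsupervised learning to the noisy subset.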
Related papers
- Robust Learning under Hybrid Noise [24.36707245704713]
We propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat the hybrid noise from the perspective of data recovery.
arXiv Detail & Related papers (2024-07-04T16:13:25Z)
- Multi-threshold Deep Metric Learning for Facial Expression Recognition [60.26967776920412]
We present the multi-threshold deep metric learning technique, which avoids the difficult threshold validation.
We find that each threshold of the triplet loss intrinsically determines a distinctive distribution of inter-class variations.
This makes the embedding layer, which is composed of a set of slices, a more informative and discriminative feature representation; a toy sketch of the idea follows this entry.
arXiv Detail & Related papers (2024-06-24T08:27:31Z)
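The multi-threshold idea can be illustrated by giving each embedding slice its own triplet margin. The even slicing and the margin values below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def multi_threshold_triplet(anchor, positive, negative, margins=(0.2, 0.4, 0.6)):
    # Split each embedding into one slice per threshold; each slice is
    # trained with a standard triplet hinge loss under its own margin.
    a_sl = np.array_split(anchor, len(margins))
    p_sl = np.array_split(positive, len(margins))
    n_sl = np.array_split(negative, len(margins))
    losses = []
    for a, p, n, m in zip(a_sl, p_sl, n_sl, margins):
        d_ap = np.linalg.norm(a - p)  # anchor-positive distance
        d_an = np.linalg.norm(a - n)  # anchor-negative distance
        losses.append(max(0.0, d_ap - d_an + m))
    return losses  # one hinge term per slice/threshold
```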
- Learning Confident Classifiers in the Presence of Label Noise [5.829762367794509]
This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
arXiv Detail & Related papers (2023-01-02T04:27:25Z)
- On Robust Learning from Noisy Labels: A Permutation Layer Approach [53.798757734297986]
This paper introduces a permutation layer learning approach termed PermLL to dynamically calibrate the training process of a deep neural network (DNN).
We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label.
We validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
arXiv Detail & Related papers (2022-11-29T03:01:48Z)
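The PermLL entry above states only that a learnable permutation layer is applied either to the model's prediction or to the given noisy label. One generic way to realize such a layer, borrowed from noise-transition-matrix methods and shown purely as an assumption about the general shape of the idea:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def apply_transition_layer(model_probs, W):
    # W: learnable (C, C) parameters; softmaxing each row yields a
    # row-stochastic matrix T, readable as T[i, j] ~ P(observed j | true i).
    T = softmax(W, axis=1)
    # Variant 1: map the model's prediction into noisy-label space, where it
    # can be compared against the given (possibly corrupted) label.
    return model_probs @ T
```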
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) has attracted increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling [22.62790706276081]
Training deep neural networks (DNNs) with noisy labels is practically challenging.
Previous efforts tend to handle part or all of the data in a unified denoising flow.
We propose a coarse-to-fine robust learning method called CREMA to handle noisy data in a divide-and-conquer manner.
arXiv Detail & Related papers (2022-08-23T02:06:38Z)
- Dynamic Adaptive Threshold based Learning for Noisy Annotations Robust Facial Expression Recognition [3.823356975862006]
We propose a dynamic FER learning framework (DNFER) to handle noisy annotations.
Specifically, DNFER is based on supervised training using selected clean samples and unsupervised consistency training using all the samples.
We demonstrate the robustness of DNFER on both synthetic and real noisy-annotated FER datasets such as RAFDB, FERPlus, SFEW, and AffectNet; a minimal sketch of the loss structure follows this entry.
arXiv Detail & Related papers (2022-08-22T12:02:41Z)
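The DNFER loss structure, supervised cross-entropy on dynamically selected clean samples plus unsupervised consistency on all samples, can be sketched as follows. The fixed selection threshold and the squared-error consistency measure are illustrative assumptions:

```python
import numpy as np

def dnfer_style_loss(probs_weak, probs_strong, labels, threshold=0.7):
    # probs_weak / probs_strong: (N, C) predictions for two augmented views.
    idx = np.arange(len(labels))
    conf = probs_weak[idx, labels]
    clean = conf >= threshold  # dynamically selected "clean" samples
    # Supervised cross-entropy, computed only on the clean subset.
    ce = -np.log(probs_weak[idx, labels] + 1e-8)
    sup = ce[clean].mean() if clean.any() else 0.0
    # Unsupervised consistency between the two views, on ALL samples.
    cons = ((probs_weak - probs_strong) ** 2).sum(axis=1).mean()
    return sup + cons
```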
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate the binary information of "existence of noise" as a treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms prior methods at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- Multi-Objective Interpolation Training for Robustness to Label Noise [17.264550056296915]
We show that standard supervised contrastive learning degrades in the presence of label noise.
We propose a novel label noise detection method that exploits the robust feature representations learned via contrastive learning.
Experiments on synthetic and real-world noise benchmarks demonstrate that MOIT/MOIT+ achieves state-of-the-art results.
arXiv Detail & Related papers (2020-12-08T15:01:54Z)
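MOIT detects noisy labels using the robust features learned via contrastive training. One common realization of that idea, shown here purely as an illustration rather than MOIT's exact procedure, flags samples whose label disagrees with the majority label among their nearest neighbours in embedding space:

```python
import numpy as np

def knn_label_agreement(features, labels, k=10):
    # Cosine-normalize the embeddings and compute pairwise similarities.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # a sample is never its own neighbour
    nn = np.argsort(-sim, axis=1)[:, :k]  # k nearest neighbours per sample
    # Fraction of neighbours sharing each sample's given label;
    # a low value marks the label as likely noisy.
    return np.array([(labels[nn[i]] == labels[i]).mean()
                     for i in range(len(labels))])
```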
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.