Uncertainty-aware Label Distribution Learning for Facial Expression
Recognition
- URL: http://arxiv.org/abs/2209.10448v1
- Date: Wed, 21 Sep 2022 15:48:41 GMT
- Title: Uncertainty-aware Label Distribution Learning for Facial Expression
Recognition
- Authors: Nhat Le, Khanh Nguyen, Quang Tran, Erman Tjiputra, Bac Le, Anh Nguyen
- Abstract summary: We propose a new uncertainty-aware label distribution learning method to improve the robustness of deep models against uncertainty and ambiguity.
Our method can be easily integrated into a deep network to obtain more training supervision and improve recognition accuracy.
- Score: 13.321770808076398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant progress over the past few years, ambiguity is still a
key challenge in Facial Expression Recognition (FER). It can lead to noisy and
inconsistent annotation, which hinders the performance of deep learning models
in real-world scenarios. In this paper, we propose a new uncertainty-aware
label distribution learning method to improve the robustness of deep models
against uncertainty and ambiguity. We leverage neighborhood information in the
valence-arousal space to adaptively construct emotion distributions for
training samples. We also consider the uncertainty of provided labels when
incorporating them into the label distributions. Our method can be easily
integrated into a deep network to obtain more training supervision and improve
recognition accuracy. Intensive experiments on several datasets under various
noisy and ambiguous settings show that our method achieves competitive results
and outperforms recent state-of-the-art approaches. Our code and models are
available at https://github.com/minhnhatvt/label-distribution-learning-fer-tf.
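As a rough illustration of the core idea, the sketch below builds an uncertainty-aware training target: the k nearest neighbors in valence-arousal space vote for their classes, and that vote is mixed with the provided one-hot label in proportion to the label's estimated confidence. This is a minimal sketch under assumed inputs (`va_coords`, `onehot_labels`, `uncertainty`, and the kernel width `tau` are illustrative names), not the authors' released implementation:

```python
import numpy as np

def soft_label(idx, va_coords, onehot_labels, uncertainty, k=8, tau=0.5):
    """Build an uncertainty-aware label distribution for sample `idx`."""
    # Distances from sample `idx` to all samples in valence-arousal space.
    d = np.linalg.norm(va_coords - va_coords[idx], axis=1)
    neighbors = np.argsort(d)[1:k + 1]          # k nearest, excluding self

    # Neighbors vote for their classes, weighted by a Gaussian kernel.
    w = np.exp(-d[neighbors] ** 2 / (2 * tau ** 2))
    neighbor_dist = (w[:, None] * onehot_labels[neighbors]).sum(axis=0)
    neighbor_dist /= neighbor_dist.sum()

    # Mix with the provided label: confident labels dominate the target,
    # uncertain ones defer to the neighborhood distribution.
    conf = 1.0 - uncertainty[idx]               # assumed to lie in [0, 1]
    return conf * onehot_labels[idx] + (1.0 - conf) * neighbor_dist
```

The resulting distribution can replace the one-hot target in a standard cross-entropy loss, which is how such soft targets typically provide the extra training supervision the abstract mentions.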
Related papers
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art long-tailed semi-supervised learning (LTSSL) approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
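For intuition only, the sketch below shows a standard long-tail adjustment for pseudo-label generation (subtracting the log class prior from the logits); it illustrates the generic recipe rather than CCL itself, and `class_prior` is an assumed per-class frequency estimate:

```python
import torch

def prior_adjusted_pseudo_labels(logits, class_prior, tau=1.0):
    # Debias logits by the (log) class prior so that head classes are not
    # favored by default; tau controls the strength of the adjustment.
    adjusted = logits - tau * torch.log(class_prior)
    return torch.softmax(adjusted, dim=-1)
```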
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Self-Knowledge Distillation for Learning Ambiguity [11.755814660833549]
Recent language models often overconfidently predict a single label without considering its correctness.
We propose a novel self-knowledge distillation method that enables models to learn label distributions more accurately.
We validate our method on diverse NLU benchmark datasets and the experimental results demonstrate its effectiveness in producing better label distributions.
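A minimal sketch of the generic self-distillation recipe (not the paper's exact objective): the model is trained on the hard label plus a KL term toward a temperature-softened distribution taken from its own earlier predictions. `past_logits` is an assumed snapshot, e.g. from a previous epoch or an EMA copy of the model:

```python
import torch.nn.functional as F

def self_distillation_loss(logits, past_logits, targets, T=2.0, alpha=0.5):
    # Hard-label term.
    ce = F.cross_entropy(logits, targets)
    # Soft term: match the temperature-softened earlier predictions.
    soft_teacher = F.softmax(past_logits.detach() / T, dim=-1)
    kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                  soft_teacher, reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * kd
```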
arXiv Detail & Related papers (2024-06-14T05:11:32Z)
- Uncertainty-aware self-training with expectation maximization basis transformation [9.7527450662978]
We propose a new self-training framework that combines uncertainty information from both the model and the dataset.
Specifically, we propose to use Expectation-Maximization (EM) to smooth the labels and comprehensively estimate the uncertainty information.
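As a toy illustration of EM-style label smoothing (the paper's basis transformation is not reproduced here), one can alternate between estimating a posterior over true labels (E-step) and refitting a per-class confusion prior (M-step); all names below are illustrative:

```python
import numpy as np

def em_smooth_labels(pred_probs, noisy_onehot, n_iter=10, eps=1e-8):
    n, c = pred_probs.shape
    confusion = np.full((c, c), 1.0 / c)     # P(observed | true), uniform init
    obs = noisy_onehot.argmax(axis=1)        # observed (possibly noisy) labels
    post = pred_probs.copy()
    for _ in range(n_iter):
        # E-step: posterior over the true label of each sample.
        likelihood = confusion[:, obs].T     # [i, j] = P(obs_i | true = j)
        post = pred_probs * likelihood
        post /= post.sum(axis=1, keepdims=True) + eps
        # M-step: re-estimate the confusion matrix from the posterior.
        confusion = post.T @ noisy_onehot    # expected (true, observed) counts
        confusion /= confusion.sum(axis=1, keepdims=True) + eps
    return post                              # smoothed soft labels
```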
arXiv Detail & Related papers (2024-05-02T11:01:31Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the promise of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
The setting in which the candidate labels are themselves unreliable is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to make the model robust to unreliable partial labels.
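For background, a common partial-label baseline (shown here for intuition; it is not URRL's contrastive objective) trains the model to concentrate probability mass on the candidate set and lets it decide which candidate to trust:

```python
import torch

def candidate_set_loss(logits, candidate_mask):
    # candidate_mask: 0/1 tensor marking each sample's candidate labels.
    probs = torch.softmax(logits, dim=-1)
    in_set = (probs * candidate_mask).sum(dim=-1).clamp_min(1e-8)
    return -torch.log(in_set).mean()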
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network focus more on the representation learning of uncertain classes.
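The sketch below conveys the flavor of evidence-based reweighting in a heavily simplified form (it is not the $\mathcal{I}$-EDL objective and omits the Fisher Information Matrix): non-negative evidence parameterizes a Dirichlet, and samples carrying little total evidence receive larger loss weight:

```python
import torch.nn.functional as F

def evidence_weighted_loss(logits, onehot, gamma=1.0):
    evidence = F.softplus(logits)               # non-negative evidence
    alpha = evidence + 1.0                      # Dirichlet concentration
    s = alpha.sum(dim=-1, keepdim=True)
    expected_p = alpha / s                      # predictive mean
    per_sample = ((onehot - expected_p) ** 2).sum(dim=-1)
    vacuity = logits.shape[-1] / s.squeeze(-1)  # high when evidence is scarce
    return ((1.0 + gamma * vacuity) * per_sample).mean()
```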
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
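In the spirit of the paper's soft weighting (a sketch of the published idea, not the official implementation), pseudo-labels can keep a weight that decays smoothly, via a truncated Gaussian, as their confidence falls below a running mean; `mean_conf` and `var_conf` are assumed running statistics of pseudo-label confidence:

```python
import torch

def softmatch_weights(probs, mean_conf, var_conf):
    conf, _ = probs.max(dim=-1)                 # pseudo-label confidence
    below = torch.exp(-((conf - mean_conf) ** 2) / (2 * var_conf))
    # Full weight above the mean, Gaussian-decayed weight below it.
    return torch.where(conf >= mean_conf, torch.ones_like(conf), below)
```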
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Multi-View Knowledge Distillation from Crowd Annotations for Out-of-Domain Generalization [53.24606510691877]
We propose new methods for acquiring soft-labels from crowd-annotations by aggregating the distributions produced by existing methods.
We demonstrate that these aggregation methods lead to the most consistent performance across four NLP tasks on out-of-domain test sets.
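One plausible aggregation rule (illustrative, not necessarily the paper's) is a weighted geometric mean of the per-method distributions, renormalized to a valid soft label:

```python
import numpy as np

def aggregate_soft_labels(dists, weights=None, eps=1e-8):
    # dists: (n_views, n_classes), one label distribution per method.
    dists = np.asarray(dists, dtype=float)
    if weights is None:
        weights = np.full(len(dists), 1.0 / len(dists))
    weights = np.asarray(weights, dtype=float)
    # Weighted geometric mean (product of experts), then renormalize.
    mix = np.exp((weights[:, None] * np.log(dists + eps)).sum(axis=0))
    return mix / mix.sum()
```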
arXiv Detail & Related papers (2022-12-19T12:40:18Z)
- Uncertain Facial Expression Recognition via Multi-task Assisted Correction [43.02119884581332]
We propose MTAC, a novel multi-task assisted correction method for uncertain facial expression recognition.
Specifically, a confidence estimation block and a weighted regularization module are applied to highlight solid samples and suppress uncertain samples in every batch.
Experiments on RAF-DB, AffectNet, and AffWild2 datasets demonstrate that the MTAC obtains substantial improvements over baselines when facing synthetic and real uncertainties.
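A sketch of confidence-based sample weighting in this spirit (names and exact form are illustrative, not MTAC's architecture): a confidence head scores each sample, solid samples dominate the loss, and a regularizer keeps the head from declaring everything uncertain:

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ce(logits, conf_logits, targets, lam=0.1):
    conf = torch.sigmoid(conf_logits).squeeze(-1)   # per-sample confidence
    ce = F.cross_entropy(logits, targets, reduction="none")
    weighted = (conf * ce).mean()                   # uncertain samples suppressed
    reg = -torch.log(conf.clamp_min(1e-8)).mean()   # discourages conf -> 0
    return weighted + lam * reg
```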
arXiv Detail & Related papers (2022-12-14T10:28:08Z)
- Going Beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance? [14.610038284393166]
We show how label uncertainty can be explicitly embedded into the training process via distributional labels.
The incorporation of label uncertainty helps the model to generalize better to unseen data and increases model performance.
Similar to existing calibration methods, the distributional labels lead to better-calibrated probabilities, which in turn yield more certain and trustworthy predictions.
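The mechanics are straightforward: normalize per-item annotator votes into a soft target and train with the soft cross-entropy instead of the one-hot loss. A generic sketch (`vote_counts` is an assumed per-class annotation count):

```python
import torch.nn.functional as F

def soft_target_loss(logits, vote_counts):
    target = vote_counts.float()
    target = target / target.sum(dim=-1, keepdim=True)
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```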
arXiv Detail & Related papers (2022-05-30T17:19:11Z)
- Credal Self-Supervised Learning [0.0]
In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains.
We show how to let the learner generate "pseudo-supervision" for unlabeled instances, replacing a single pseudo-label with a credal set of candidate distributions.
We compare our methodology to state-of-the-art self-supervision approaches.
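A loose sketch of credal-style supervision (illustrative, not the paper's exact formulation): the target is any distribution inside assumed per-class bounds `[lower, upper]` (with lower.sum() <= 1 <= upper.sum()), and the loss is evaluated against the member of that set most consistent with the model, found greedily:

```python
import torch
import torch.nn.functional as F

def credal_ce(logits, lower, upper):
    log_p = F.log_softmax(logits, dim=-1)
    target = lower.clone()                          # start from the lower bounds
    slack = 1.0 - lower.sum(dim=-1, keepdim=True)   # probability mass left to place
    order = log_p.argsort(dim=-1, descending=True)
    for j in range(logits.shape[-1]):               # greedy: best class first
        idx = order[:, j:j + 1]
        room = (upper - lower).gather(-1, idx)      # headroom at this class
        add = torch.minimum(room, slack)
        target.scatter_add_(-1, idx, add)
        slack = slack - add
    return -(target * log_p).sum(dim=-1).mean()     # soft CE vs. optimistic target
```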
arXiv Detail & Related papers (2021-06-22T15:19:04Z)