Data Uncertainty Learning in Face Recognition
- URL: http://arxiv.org/abs/2003.11339v1
- Date: Wed, 25 Mar 2020 11:40:38 GMT
- Title: Data Uncertainty Learning in Face Recognition
- Authors: Jie Chang, Zhonghao Lan, Changmao Cheng, Yichen Wei
- Abstract summary: Uncertainty is important for noisy images, but seldom explored for face recognition.
It is unclear how uncertainty affects feature learning.
This work applies data uncertainty learning to face recognition.
- Score: 23.74716810099911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling data uncertainty is important for noisy images, but it is seldom explored
for face recognition. The pioneering work, PFE, considers uncertainty by modeling
each face image embedding as a Gaussian distribution, and it is quite effective.
However, it uses a fixed feature (the mean of the Gaussian) from an existing model,
only estimates the variance, and relies on an ad-hoc and costly metric, so it is
not easy to use. It also remains unclear how uncertainty affects feature learning.
This work applies data uncertainty learning to face recognition so that
the feature (mean) and the uncertainty (variance) are learned simultaneously, for
the first time. Two learning methods are proposed. They are easy to use and
outperform existing deterministic methods as well as PFE on challenging
unconstrained scenarios. We also provide an insightful analysis of how
incorporating uncertainty estimation helps reduce the adverse effects of
noisy samples and affects feature learning.
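To make the idea of learning the feature (mean) and the uncertainty (variance) simultaneously concrete, below is a minimal PyTorch-style sketch of a probabilistic embedding head trained with the reparameterization trick and a KL regularizer. The class name, layer sizes, and the exact loss composition are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class ProbabilisticEmbeddingHead(nn.Module):
    """Sketch: predict a Gaussian embedding N(mu, sigma^2) per face image,
    so the feature (mu) and its uncertainty (sigma) are learned jointly."""

    def __init__(self, backbone_dim: int = 512, embed_dim: int = 256):
        super().__init__()
        self.mu = nn.Linear(backbone_dim, embed_dim)       # feature (mean)
        self.log_var = nn.Linear(backbone_dim, embed_dim)  # uncertainty (log variance)

    def forward(self, feats: torch.Tensor):
        mu = self.mu(feats)
        log_var = self.log_var(feats)
        if self.training:
            # Reparameterization trick: sample s = mu + sigma * eps so that
            # gradients flow through both the mean and the variance branches.
            eps = torch.randn_like(mu)
            z = mu + torch.exp(0.5 * log_var) * eps
        else:
            z = mu  # at test time the mean alone serves as the face feature
        # KL(N(mu, sigma^2) || N(0, I)) keeps the predicted variances well behaved.
        kl = -0.5 * torch.mean(1.0 + log_var - mu.pow(2) - log_var.exp())
        return z, kl
```

In such a setup, z would be fed into an ordinary identity-classification loss (e.g. softmax or a margin-based variant) with the KL term added at a small weight; at test time only mu is used, so matching stays a plain feature comparison rather than a distribution-to-distribution metric of the kind PFE relies on.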
Related papers
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
arXiv Detail & Related papers (2024-12-24T23:28:20Z) - Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
arXiv Detail & Related papers (2024-05-02T16:50:47Z) - dugMatting: Decomposed-Uncertainty-Guided Matting [83.71273621169404]
We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling.
arXiv Detail & Related papers (2023-06-02T11:19:50Z) - Improving Training and Inference of Face Recognition Models via Random
Temperature Scaling [45.33976405587231]
Random Temperature Scaling (RTS) is proposed to learn a reliable face recognition algorithm.
RTS can achieve top performance on both the face recognition and out-of-distribution detection tasks.
The proposed module is light-weight and only adds negligible cost to the model.
arXiv Detail & Related papers (2022-12-02T08:00:03Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - A Geometric Method for Improved Uncertainty Estimation in Real-time [13.588210692213568]
Post-hoc model calibrations can improve models' uncertainty estimations without the need for retraining.
Our work puts forward a geometric-based approach for uncertainty estimation.
We show that our method yields better uncertainty estimations than recently proposed approaches.
arXiv Detail & Related papers (2022-06-23T09:18:05Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution (a minimal sketch of such an estimator follows this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Self-Paced Uncertainty Estimation for One-shot Person Re-Identification [9.17071384578203]
We propose a novel Self-Paced Uncertainty Estimation Network (SPUE-Net) for one-shot Person Re-ID.
By introducing a self-paced sampling strategy, our method can estimate the pseudo-labels of unlabeled samples iteratively to expand the labeled samples.
In addition, we apply a cooperative learning method that combines local uncertainty estimation with determinacy estimation to achieve better hidden-space feature mining.
arXiv Detail & Related papers (2021-04-19T09:20:30Z) - Low-Regret Active learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z) - Do Not Forget to Attend to Uncertainty while Mitigating Catastrophic
Forgetting [29.196246255389664]
One of the major limitations of deep learning models is that they face catastrophic forgetting in an incremental learning scenario.
We consider a Bayesian formulation to obtain the data and model uncertainties.
We also incorporate self-attention framework to address the incremental learning problem.
arXiv Detail & Related papers (2021-02-03T06:54:52Z) - Deep Learning based Uncertainty Decomposition for Real-time Control [9.067368638784355]
We propose a novel method for detecting the absence of training data using deep learning.
We show its advantages over existing approaches on synthetic and real-world datasets.
We further demonstrate the practicality of this uncertainty estimate in deploying online data-efficient control on a simulated quadcopter.
arXiv Detail & Related papers (2020-10-06T10:46:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.