Improving Training and Inference of Face Recognition Models via Random
Temperature Scaling
- URL: http://arxiv.org/abs/2212.01015v1
- Date: Fri, 2 Dec 2022 08:00:03 GMT
- Title: Improving Training and Inference of Face Recognition Models via Random
Temperature Scaling
- Authors: Lei Shang, Mouxiao Huang, Wu Shi, Yuchen Liu, Yang Liu, Fei Wang,
Baigui Sun, Xuansong Xie, Yu Qiao
- Abstract summary: Random Temperature Scaling (RTS) is proposed to learn a reliable face recognition algorithm.
RTS can achieve top performance on both the face recognition and out-of-distribution detection tasks.
The proposed module is light-weight and only adds negligible cost to the model.
- Score: 45.33976405587231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data uncertainty is commonly observed in the images for face recognition
(FR). However, deep learning algorithms often make predictions with high
confidence even for uncertain or irrelevant inputs. Intuitively, FR algorithms
can benefit from both the estimation of uncertainty and the detection of
out-of-distribution (OOD) samples. Taking a probabilistic view of the current
classification model, the temperature scalar is exactly the scale of
uncertainty noise implicitly added in the softmax function. Meanwhile, the
uncertainty of images in a dataset should follow a prior distribution. Based on
the observation, a unified framework for uncertainty modeling and FR, Random
Temperature Scaling (RTS), is proposed to learn a reliable FR algorithm. The
benefits of RTS are two-fold. (1) In the training phase, it can adjust the
learning strength of clean and noisy samples for stability and accuracy. (2) In
the test phase, it can provide a score of confidence to detect uncertain,
low-quality and even OOD samples, without training on extra labels. Extensive
experiments on FR benchmarks demonstrate that the magnitude of variance in RTS,
which serves as an OOD detection metric, is closely related to the uncertainty
of the input image. RTS can achieve top performance on both the FR and OOD
detection tasks. Moreover, the model trained with RTS can perform robustly on
datasets with noise. The proposed module is light-weight and only adds
negligible computation cost to the model.
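The abstract describes the mechanism but not the implementation. As a rough illustration, below is a minimal PyTorch sketch of a learned, per-sample random softmax temperature; the head name, the Gamma prior, and the variance-based OOD score are assumptions made for this sketch, not the authors' exact design.

```python
import torch
import torch.nn as nn

class RandomTemperatureHead(nn.Module):
    """Hypothetical sketch: learn per-sample Gamma parameters for a random
    softmax temperature, echoing the abstract's view of the temperature
    scalar as the scale of uncertainty noise. Not the authors' exact design."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Predict (log-)concentration and (log-)rate of a Gamma prior.
        self.param_net = nn.Linear(feat_dim, 2)

    def forward(self, feats: torch.Tensor, logits: torch.Tensor):
        log_conc, log_rate = self.param_net(feats).chunk(2, dim=-1)
        conc, rate = log_conc.exp(), log_rate.exp()
        # rsample() keeps the temperature draw differentiable.
        temperature = torch.distributions.Gamma(conc, rate).rsample() + 1e-6
        scaled_logits = logits / temperature
        # Variance of Gamma(a, b) is a / b**2: a candidate OOD score, matching
        # the abstract's "magnitude of variance in RTS".
        ood_score = (conc / rate.pow(2)).squeeze(-1)
        return scaled_logits, ood_score
```

In training, `scaled_logits` would feed the usual FR classification loss; at test time, `ood_score` plays the role of the confidence score described above.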
Related papers
- Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
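As a hedged illustration of that data/model uncertainty split (not the UAL architecture itself), a variance head can model data uncertainty while MC dropout supplies model uncertainty:

```python
import torch
import torch.nn as nn

class UncertaintyAwareEmbedder(nn.Module):
    """Illustrative only (not the UAL architecture): a variance head models
    data uncertainty; MC dropout at test time gives model uncertainty."""

    def __init__(self, feat_dim: int = 512, emb_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim, emb_dim), nn.ReLU(), nn.Dropout(p=0.2))
        self.mu_head = nn.Linear(emb_dim, emb_dim)      # embedding mean
        self.logvar_head = nn.Linear(emb_dim, emb_dim)  # data uncertainty

    def forward(self, x):
        h = self.backbone(x)
        return self.mu_head(h), self.logvar_head(h)

def predict_with_uncertainty(model, x, n_samples: int = 8):
    model.train()  # keep dropout active for MC sampling
    with torch.no_grad():
        mus, logvars = zip(*(model(x) for _ in range(n_samples)))
    mus = torch.stack(mus)
    model_unc = mus.var(dim=0).mean(dim=-1)                  # spread across passes
    data_unc = torch.stack(logvars).exp().mean(dim=(0, -1))  # predicted variance
    return mus.mean(dim=0), data_unc, model_unc
```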
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
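A minimal sketch of the Nadaraya-Watson step the method builds on (the full NUQ estimator adds corrections not shown here); the Gaussian kernel and embedding-space inputs are assumptions:

```python
import numpy as np

def nw_label_distribution(x, train_X, train_y, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel, e.g. in a
    network's embedding space. Assumed interface; NUQ builds corrections on top."""
    d2 = ((train_X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # kernel weight per training point
    mass = np.array([w[train_y == c].sum() for c in range(n_classes)])
    total = mass.sum()
    if total == 0.0:  # numerically no nearby points: maximal uncertainty
        return np.full(n_classes, 1.0 / n_classes), 0.0
    # Low total kernel mass means x sits in a low-density region (OOD-ish).
    return mass / total, total
```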
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image
Enhancement [3.222802562733787]
Conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings.
This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to OOD-noisy data and (ii) estimates the per-voxel uncertainty in the predictions.
We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation.
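One common way to realize a per-voxel, noise-adaptive loss is a heteroscedastic (here Laplace) negative log-likelihood; this is a generic stand-in, not the paper's exact GAN objective:

```python
import torch

def heteroscedastic_l1_loss(pred_mean, pred_log_b, target):
    """Laplace negative log-likelihood per voxel (constants dropped): the
    network predicts an image and a per-voxel scale b; large predicted b
    down-weights that voxel's residual, giving noise-adaptive robustness.
    A generic stand-in for the paper's adaptive GAN loss."""
    b = pred_log_b.exp()
    return ((pred_mean - target).abs() / b + pred_log_b).mean()
```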
arXiv Detail & Related papers (2021-10-07T11:29:03Z)
- Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
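A loose sketch of the scoring side of this idea (the artificial labeling and regularized fine-tuning are simplified away): members that still contradict each other on a test point mark it as OOD.

```python
import torch

def disagreement_ood_scores(models, test_batch):
    """Score a test batch by ensemble disagreement. The paper's artificial
    labeling and regularized fine-tuning (which push members apart only on
    OOD points) are omitted; this is just the scoring step."""
    with torch.no_grad():
        probs = torch.stack([m(test_batch).softmax(dim=-1) for m in models])
    mean_p = probs.mean(dim=0).clamp_min(1e-12)
    # Average KL of each member from the ensemble mean: high => contradictory.
    kl = (probs * (probs.clamp_min(1e-12).log() - mean_p.log())).sum(dim=-1)
    return kl.mean(dim=0)
```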
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at
Reliable OOD Detection [0.0]
We prove that uncertainty-estimation techniques built on ReLU classifiers are not able to reliably identify OOD samples in a classification setting.
The paper gives a theoretical explanation for these experimental findings and illustrates it on synthetic data.
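The mechanism can be seen in a toy experiment: a ReLU network is piecewise linear, so scaling an input far from the data grows the logit gaps linearly and drives the softmax confidence toward 1. A small, untrained-network demonstration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

x = torch.randn(1, 2)  # a random direction in input space
for scale in (1, 10, 100, 1000):
    conf = net(scale * x).softmax(dim=-1).max().item()
    print(f"scale={scale:5d}  max softmax confidence={conf:.4f}")
# Far from the data the ReLU net is linear in the scale, so logit gaps grow
# without bound and the softmax confidence is driven toward 1.0.
```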
arXiv Detail & Related papers (2020-12-09T21:35:55Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and
Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
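A minimal sketch of the one-vs-all parametrization (names and dimensions are illustrative): each class gets an independent sigmoid instead of competing through a softmax.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

head = nn.Linear(128, 10)  # hypothetical final layer: 128-d features, 10 classes

def ova_loss(feats, targets, n_classes=10):
    """One-vs-all training: each class gets an independent sigmoid, so the
    model may assign low probability to ALL classes (a softmax cannot)."""
    z = head(feats)
    onehot = F.one_hot(targets, n_classes).float()
    return F.binary_cross_entropy_with_logits(z, onehot)

def ova_confidence(feats):
    # Max per-class sigmoid; can be near 0 far from the data, unlike softmax.
    return torch.sigmoid(head(feats)).max(dim=-1).values
```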
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via
Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The resulting discriminative jackknife (DJ) satisfies the paper's desiderata for predictive uncertainty, is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
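As a hedged illustration of the underlying jackknife construction, here is a brute-force leave-one-out version with a simplified interval rule; the DJ replaces the n retrainings with an influence-function approximation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def naive_jackknife_interval(X, y, x_new, alpha=0.1):
    """Brute-force leave-one-out jackknife predictive interval (simplified).
    The DJ substitutes a higher-order influence-function approximation for
    the n retrainings, so it runs post hoc on a single trained model."""
    n = len(X)
    residuals, preds = [], []
    for i in range(n):
        keep = np.arange(n) != i
        m = Ridge(alpha=1.0).fit(X[keep], y[keep])
        residuals.append(abs(y[i] - m.predict(X[i:i + 1])[0]))
        preds.append(m.predict(x_new.reshape(1, -1))[0])
    q = np.quantile(residuals, 1 - alpha)  # (1 - alpha) LOO residual quantile
    center = float(np.mean(preds))
    return center - q, center + q
```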
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- Data Uncertainty Learning in Face Recognition [23.74716810099911]
Uncertainty is important for noisy images, but seldom explored for face recognition.
It is unclear how uncertainty affects feature learning.
This work applies data uncertainty learning to face recognition.
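A sketch in the spirit of data uncertainty learning for FR (hypothetical layer names, not the paper's exact model): each face maps to a Gaussian in embedding space, sampled with the reparameterization trick.

```python
import torch
import torch.nn as nn

class ProbabilisticFaceEmbedding(nn.Module):
    """Sketch in the spirit of data-uncertainty learning for FR (hypothetical
    names): each face is a Gaussian in embedding space, with the mean carrying
    identity and the variance carrying data uncertainty."""

    def __init__(self, feat_dim: int = 512, emb_dim: int = 128):
        super().__init__()
        self.mu = nn.Linear(feat_dim, emb_dim)
        self.logvar = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats):
        mu, logvar = self.mu(feats), self.logvar(feats)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
        # A KL term toward N(0, I) keeps the predicted variance well-behaved.
        kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).mean()
        return z, kl
```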
arXiv Detail & Related papers (2020-03-25T11:40:38Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
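A minimal sketch of centroid-based single-pass uncertainty (the sigma value and head name are assumptions): inputs far from every class centroid get a low kernel score and can be rejected.

```python
import torch
import torch.nn as nn

class RBFCentroidHead(nn.Module):
    """Deterministic single-pass uncertainty in the spirit of the summary:
    score inputs by RBF similarity to learned class centroids; a low maximum
    similarity marks the input as out of distribution."""

    def __init__(self, emb_dim: int = 64, n_classes: int = 10, sigma: float = 0.5):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.sigma = sigma

    def forward(self, z):  # z: (batch, emb_dim) embeddings
        d2 = torch.cdist(z, self.centroids).pow(2)
        k = torch.exp(-d2 / (2 * self.sigma ** 2))  # per-class kernel score
        conf, pred = k.max(dim=-1)                  # conf near 0 => reject as OOD
        return pred, conf
```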
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.