Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2302.07608v1
- Date: Wed, 15 Feb 2023 11:57:09 GMT
- Title: Uncertainty-Estimation with Normalized Logits for Out-of-Distribution Detection
- Authors: Mouxiao Huang, Yu Qiao
- Abstract summary: Uncertainty-Estimation with Normalized Logits (UE-NL) is a robust learning method for OOD detection.
UE-NL treats every ID sample equally by predicting an uncertainty score for the input data.
It is more robust to noisy ID data that may be misjudged as OOD data by other methods.
- Score: 35.539218522504605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is critical for preventing deep learning
models from making incorrect predictions, and thus for ensuring the safety of
artificial intelligence systems. Especially in safety-critical applications such as
medical diagnosis and autonomous driving, the cost of incorrect decisions is
usually unbearable. However, neural networks often suffer from the
overconfidence issue, assigning high confidence to OOD data that are never seen
during training and may be irrelevant to the training data, namely the
in-distribution (ID) data. Determining the reliability of a prediction remains
a difficult and challenging task. In this work, we propose
Uncertainty-Estimation with Normalized Logits (UE-NL), a robust learning method
for OOD detection with three main benefits. (1) Neural networks with
UE-NL treat every ID sample equally by predicting an uncertainty score for the
input data; this uncertainty is incorporated into the softmax function to adjust
the learning strength of easy and hard samples during the training phase, making
the model learn robustly and accurately. (2) UE-NL enforces a constant vector
norm on the logits, decoupling the optimization process from the growth of the
output norm, which contributes to the overconfidence issue. (3) UE-NL provides
a new metric, the magnitude of the uncertainty score, to detect OOD data.
Experiments demonstrate that UE-NL achieves top performance on common OOD
benchmarks and is more robust to noisy ID data that may be misjudged as OOD
data by other methods.
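The abstract describes two training-time mechanisms (a constant logit norm and an uncertainty-adjusted softmax) and one test-time score (the uncertainty magnitude). The sketch below shows one way these pieces could fit together; the module names, the softplus parameterization of the uncertainty score, and the choice to inject uncertainty as a per-sample softmax temperature are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UENLHead(nn.Module):
    """Sketch of a classification head with normalized logits plus a
    per-sample uncertainty score, loosely following the UE-NL abstract."""

    def __init__(self, feat_dim: int, num_classes: int, logit_norm: float = 10.0):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.uncertainty = nn.Linear(feat_dim, 1)  # predicts a score u >= 0
        self.logit_norm = logit_norm               # constant norm enforced on logits

    def forward(self, features: torch.Tensor):
        logits = self.classifier(features)
        # (2) enforce a constant vector norm on the logits, so optimization
        # cannot inflate confidence simply by growing the output norm
        logits = self.logit_norm * F.normalize(logits, p=2, dim=-1)
        # (1) predict a per-sample uncertainty score (softplus keeps it >= 0)
        u = F.softplus(self.uncertainty(features)).squeeze(-1)
        return logits, u

def uenl_loss(logits: torch.Tensor, u: torch.Tensor, targets: torch.Tensor):
    # one plausible reading of "uncertainty is added into the softmax": a
    # per-sample temperature that softens the loss on hard/uncertain samples
    scaled = logits / (1.0 + u).unsqueeze(-1)
    return F.cross_entropy(scaled, targets)

@torch.no_grad()
def ood_score(u: torch.Tensor) -> torch.Tensor:
    # (3) the magnitude of the uncertainty score is itself the OOD metric:
    # inputs with large u are flagged as out-of-distribution
    return u
```

At test time one would threshold `ood_score(u)` on held-out ID data to pick an operating point for the ID/OOD decision.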
Related papers
- Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained.
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available.
arXiv Detail & Related papers (2023-11-14T08:05:02Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, for which the predictor cannot make valid predictions as it can for in-distribution (ID) data.
It is typically hard to collect real out-of-distribution (OOD) data for training a predictor capable of discerning OOD patterns.
We propose a data-generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that mitigates mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for OOD detection tasks.
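For background on the quantity named in the title: the Hilbert-Schmidt Independence Criterion (HSIC) has a standard biased empirical estimator. The sketch below shows that generic estimator with an RBF kernel; it is background material, not the paper's specific training objective.

```python
import torch

def rbf_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical HSIC estimator: HSIC = trace(K H L H) / n^2, where
    K, L are kernel matrices over x and y and H = I - 11^T / n is the
    centering matrix. With a characteristic kernel such as the RBF, the
    population HSIC is zero iff x and y are independent."""
    n = x.shape[0]
    kx = rbf_kernel(x, sigma)
    ky = rbf_kernel(y, sigma)
    h = torch.eye(n) - torch.ones(n, n) / n
    return torch.trace(kx @ h @ ky @ h) / (n ** 2)
```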
arXiv Detail & Related papers (2022-09-26T15:59:55Z)
- Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training [29.4569172720654]
We develop a simple adversarial training scheme that incorporates an attack on the uncertainty predicted by the dropout ensemble.
We demonstrate that this method improves OOD detection performance on standard data (i.e., not adversarially crafted) and raises the standardized partial AUC from near-random guessing to $\geq 0.75$.
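As context for the quantity being attacked: a dropout ensemble's predictive uncertainty is commonly estimated by Monte-Carlo sampling. The sketch below shows that generic estimate (predictive entropy of the averaged softmax), not the paper's adversarial training loop.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_entropy(model: torch.nn.Module, x: torch.Tensor,
                       n_samples: int = 16) -> torch.Tensor:
    """Predictive entropy of an MC-dropout ensemble: keep dropout active at
    inference, average the softmax outputs over stochastic forward passes,
    and take the entropy of the mean prediction as the uncertainty score."""
    model.train()  # keeps dropout layers stochastic during the forward passes
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    return -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
```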
arXiv Detail & Related papers (2022-09-05T14:32:19Z)
- Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently when evaluated on selective classification with OOD data (SCOD) rather than on OOD detection alone.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
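SIRC's exact combination function is not given in this summary; the sketch below is only a generic illustration of the underlying idea, retaining the softmax score while folding in an auxiliary, softmax-independent signal. The additive form, the feature-norm choice, and the weight `alpha` are hypothetical.

```python
import torch

def combined_confidence(logits: torch.Tensor, features: torch.Tensor,
                        alpha: float = 0.05) -> torch.Tensor:
    """Generic illustration of augmenting a softmax confidence score with an
    auxiliary signal for SCOD; SIRC's actual combination function differs."""
    msp = torch.softmax(logits, dim=-1).max(dim=-1).values  # max softmax probability
    aux = features.norm(p=1, dim=-1)                        # e.g., an L1 feature norm
    return msp + alpha * aux  # higher score -> more confidently in-distribution
```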
arXiv Detail & Related papers (2022-07-15T14:39:57Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
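A minimal sketch of the composition this summary describes: a standard classifier paired with an OOD detector that triggers abstention. The `detector` interface, the `threshold`, and the reject label are assumptions, and the certification machinery that is the paper's actual contribution is omitted.

```python
import torch

@torch.no_grad()
def ood_aware_predict(classifier, detector, x: torch.Tensor,
                      threshold: float = 0.5) -> torch.Tensor:
    """Compose a standard classifier with an OOD detector: abstain
    (label -1) whenever the detector flags the input as OOD."""
    scores = detector(x).squeeze(-1)   # assumed: higher score -> more OOD
    preds = classifier(x).argmax(dim=-1)
    preds[scores > threshold] = -1     # reject inputs deemed OOD
    return preds
```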
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
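As an illustration of how contradictory predictions yield an OOD signal, the sketch below scores a test batch by the ensemble's prediction variance; the artificial labeling scheme and the regularizer from the paper are omitted.

```python
import torch

@torch.no_grad()
def ensemble_disagreement(models, x: torch.Tensor) -> torch.Tensor:
    """Score a test batch by prediction variance across ensemble members.
    If the members are regularized to contradict each other only off the
    data distribution, high variance marks the OOD samples in the batch."""
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])  # (M, B, C)
    return probs.var(dim=0).sum(dim=-1)  # high variance -> likely OOD
```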
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- Uncertainty-Based Out-of-Distribution Classification in Deep Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety-critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)