Provable Guarantees for Understanding Out-of-distribution Detection
- URL: http://arxiv.org/abs/2112.00787v1
- Date: Wed, 1 Dec 2021 19:18:43 GMT
- Title: Provable Guarantees for Understanding Out-of-distribution Detection
- Authors: Peyman Morteza and Yixuan Li
- Abstract summary: We develop an analytical framework that characterizes and unifies the theoretical understanding for OOD detection.
Our framework motivates a novel OOD detection method for neural networks, GEM, which demonstrates both theoretical and empirical superiority.
- Score: 13.36367318623728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is important for deploying machine
learning models in the real world, where test data from shifted distributions
can naturally arise. While a plethora of algorithmic approaches have recently
emerged for OOD detection, a critical gap remains in theoretical understanding.
In this work, we develop an analytical framework that characterizes and unifies
the theoretical understanding for OOD detection. Our analytical framework
motivates a novel OOD detection method for neural networks, GEM, which
demonstrates both theoretical and empirical superiority. In particular, on
CIFAR-100 as in-distribution data, our method outperforms a competitive
baseline by 16.57% (FPR95). Lastly, we formally provide provable guarantees and
comprehensive analysis of our method, underpinning how various properties of
data distribution affect the performance of OOD detection.
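To make the method concrete: GEM scores an input with an energy computed from class-conditional Gaussian densities in the network's feature space. The NumPy sketch below renders that idea under a shared (tied) covariance; the helper names and the thresholding convention are illustrative, not the authors' reference implementation.

```python
import numpy as np
from scipy.special import logsumexp

def fit_class_gaussians(features, labels, num_classes):
    """Estimate per-class means and a shared covariance from ID features."""
    means = np.stack([features[labels == k].mean(axis=0)
                      for k in range(num_classes)])
    centered = features - means[labels]          # subtract each sample's class mean
    cov = centered.T @ centered / len(features)  # shared (tied) covariance
    return means, np.linalg.inv(cov)

def gem_score(z, means, prec):
    """Energy over class-conditional Gaussian log-densities.

    Higher score -> more in-distribution. The constant terms of the
    Gaussian density cancel when comparing scores, so only the
    Mahalanobis part is kept.
    """
    diffs = z[None, :] - means                           # (K, d)
    maha = np.einsum('kd,de,ke->k', diffs, prec, diffs)  # per-class Mahalanobis
    return logsumexp(-0.5 * maha)

# Usage: flag x as OOD when gem_score(feature(x), ...) falls below a
# threshold chosen so that 95% of ID data is retained (the TPR behind FPR95).
```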
Related papers
- OAL: Enhancing OOD Detection Using Latent Diffusion [5.357756138014614]
The Outlier Aware Learning (OAL) framework synthesizes OOD training data directly in the latent space.
We introduce a mutual information-based contrastive learning approach that amplifies the distinction between In-Distribution (ID) and collected OOD features.
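The contrastive objective mentioned above can be pictured with a generic InfoNCE-style loss that pulls ID features toward their positives and pushes them away from collected OOD features. This is an assumption-laden sketch (the two-view setup, temperature, and negatives construction are not taken from the paper):

```python
import numpy as np

def info_nce(z_anchor, z_pos, z_ood, temp=0.1):
    """InfoNCE-style loss: each ID anchor is pulled toward its positive
    view and pushed away from OOD features used as negatives."""
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    a, p, n = norm(z_anchor), norm(z_pos), norm(z_ood)
    pos = np.sum(a * p, axis=1) / temp              # (N,) ID-ID similarities
    neg = a @ n.T / temp                            # (N, M) ID-OOD similarities
    logits = np.concatenate([pos[:, None], neg], axis=1)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()
```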
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Existing evaluations of out-of-distribution (OOD) detection methods assume access to test ground truths, i.e., labels indicating whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-explored area.
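For context, the energy score in this line of work is computed directly from classifier logits, and GNNSafe additionally smooths per-node energies over the graph. A minimal sketch follows; the propagation scheme (neighbor averaging with mixing weight alpha) is an assumption rather than the paper's exact update:

```python
import numpy as np
from scipy.special import logsumexp

def energy_score(logits, T=1.0):
    """Negative free energy from logits; lower values indicate
    in-distribution under the energy-based OOD view."""
    return -T * logsumexp(logits / T, axis=-1)

def propagate_energy(energy, adj, alpha=0.5, iters=2):
    """Smooth per-node energies over graph neighbors, in the spirit of
    GNNSafe's propagation (exact scheme assumed, not reproduced)."""
    deg = adj.sum(axis=1).clip(min=1)       # node degrees
    for _ in range(iters):
        energy = alpha * energy + (1 - alpha) * (adj @ energy) / deg
    return energy
```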
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
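To illustrate the information-geometric angle: the Fisher-Rao distance between categorical distributions has the closed form d(p, q) = 2 arccos(sum_i sqrt(p_i q_i)). The sketch below scores a test input by its distance to the nearest of a set of class-conditional reference distributions; the choice of references and temperature is an assumption of this sketch, not the paper's full recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fisher_rao(p, q):
    """Fisher-Rao distance between categorical distributions:
    d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i))."""
    inner = np.sqrt(p * q).sum(axis=-1).clip(0.0, 1.0)
    return 2.0 * np.arccos(inner)

def igeood_style_score(test_logits, class_ref_probs, T=1.0):
    """Distance to the nearest class-conditional reference distribution;
    smaller -> more in-distribution. The reference distributions are an
    assumption of this sketch."""
    p = softmax(test_logits, T)
    dists = np.array([fisher_rao(p, q) for q in class_ref_probs])
    return dists.min(axis=0)
```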
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique for reducing model overconfidence on OOD data.
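ReAct's core operation is a one-line rectification: clip the penultimate-layer activations at a threshold c (typically a high percentile of ID activations) before the final linear layer and the usual scoring function. A minimal sketch, with the energy score assumed as the downstream detector and all names illustrative:

```python
import numpy as np
from scipy.special import logsumexp

def react_clip(features, c):
    """ReAct: truncate unusually large activations at threshold c,
    e.g. a high percentile (such as the 90th) of ID activations."""
    return np.minimum(features, c)

def ood_score(features, W, b, c):
    """Energy score on rectified activations; lower values indicate
    in-distribution. W, b are the (assumed) final linear layer."""
    logits = react_clip(features, c) @ W.T + b
    return -logsumexp(logits, axis=-1)
```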
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- On the Impact of Spurious Correlation for Out-of-distribution Detection [14.186776881154127]
We present a new formalization and model the data shifts by taking into account both the invariant and environmental features.
Our results suggest that the detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
arXiv Detail & Related papers (2021-09-12T23:58:17Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method that, starting from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we get the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close-to-state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Probabilistic Trust Intervals for Out of Distribution Detection [8.35564578781252]
We propose a straightforward yet novel technique to enhance OOD detection in pre-trained networks without altering their original parameters.
Our approach defines probabilistic trust intervals for each network weight, determined using in-distribution data.
We evaluate our approach on MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100 and CIFAR-10-C.
arXiv Detail & Related papers (2021-02-02T06:23:04Z)
- ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining [51.19164318924997]
Adversarial Training with informative Outlier Mining (ATOM) improves the robustness of OOD detection.
ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
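The "informative outlier mining" step can be pictured as selecting, from a large auxiliary outlier pool, samples whose current OOD scores place them near the decision boundary. The sketch below is a heavily hedged rendering: the band location and width are illustrative parameters, not the paper's actual selection rule.

```python
import numpy as np

def mine_informative_outliers(pool_scores, offset=0.5, frac=0.25):
    """Select auxiliary outliers whose OOD scores fall in an intermediate
    band (near the boundary), rather than the easiest or hardest ones.
    'offset' and 'frac' are illustrative assumptions."""
    order = np.argsort(pool_scores)              # ascending OOD score
    start = int(offset * len(order))
    return order[start:start + max(1, int(frac * len(order)))]
```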
arXiv Detail & Related papers (2020-06-26T20:58:05Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on adversarially perturbed in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.