Statistical Testing for Efficient Out of Distribution Detection in Deep
Neural Networks
- URL: http://arxiv.org/abs/2102.12967v1
- Date: Thu, 25 Feb 2021 16:14:47 GMT
- Title: Statistical Testing for Efficient Out of Distribution Detection in Deep
Neural Networks
- Authors: Matan Haroush, Tzviel Frostig, Ruth Heller and Daniel Soudry
- Abstract summary: This paper frames the Out Of Distribution (OOD) detection problem in Deep Neural Networks as a statistical hypothesis testing problem.
We build on this framework to suggest a novel OOD procedure based on low-order statistics.
Our method achieves results comparable to or better than the state of the art on well-accepted OOD benchmarks without retraining the network parameters.
- Score: 26.0303701309125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonly, Deep Neural Networks (DNNs) generalize well on samples drawn from a
distribution similar to that of the training set. However, DNNs' predictions
are brittle and unreliable when the test samples are drawn from a dissimilar
distribution. This presents a major concern for deployment in real-world
applications, where such behavior may come at a great cost -- as in the case of
autonomous vehicles or healthcare applications.
This paper frames the Out Of Distribution (OOD) detection problem in DNNs as a
statistical hypothesis testing problem. Unlike previous OOD detection
heuristics, our framework is guaranteed to maintain the false positive rate
(detecting OOD as in-distribution) for test data. We build on this framework to
suggest a novel OOD procedure based on low-order statistics. Our method
achieves results comparable to or better than the state of the art on
well-accepted OOD benchmarks without retraining the network parameters -- and
at a fraction of the computational cost.
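To make the hypothesis-testing framing concrete, here is a minimal conformal-style sketch in Python. The channel-mean statistic, the distance-to-centroid score, and all names are illustrative assumptions rather than the paper's exact procedure; the property being demonstrated is that thresholding an empirical p-value at level alpha bounds the rate at which in-distribution samples are falsely flagged as OOD.

```python
import numpy as np

def channel_means(features):
    """Low-order statistic: per-channel spatial mean of a conv feature map.
    features: array of shape (N, C, H, W)."""
    return features.mean(axis=(2, 3))  # (N, C)

def empirical_p_values(cal_scores, test_scores):
    """Conformal p-value: rank of each test score among calibration scores
    from held-out in-distribution data. Under the null (test point is
    in-distribution), P(p <= alpha) <= alpha, bounding the false alarms."""
    cal = np.sort(cal_scores)
    n_geq = len(cal) - np.searchsorted(cal, test_scores, side="left")
    return (n_geq + 1) / (len(cal) + 1)

# Toy usage with synthetic features. Score = distance of channel means to
# the in-distribution centroid; higher = more OOD-like. (A rigorous setup
# would use a calibration split disjoint from the centroid estimate.)
rng = np.random.default_rng(0)
cal_feats = rng.normal(size=(500, 16, 8, 8))            # in-distribution
test_feats = rng.normal(1.5, 1.0, size=(10, 16, 8, 8))  # shifted inputs

centroid = channel_means(cal_feats).mean(axis=0)
score = lambda f: np.linalg.norm(channel_means(f) - centroid, axis=1)

p = empirical_p_values(score(cal_feats), score(test_feats))
flag_ood = p <= 0.05  # a level-0.05 test per sample
```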
Related papers
- Detecting Out-of-Distribution Samples via Conditional Distribution
Entropy with Optimal Transport [20.421338676377587]
We argue that empirical probability distributions that incorporate geometric information from both training samples and test inputs can be highly beneficial for OOD detection.
Within the framework of optimal transport, we propose a novel score function, termed conditional distribution entropy, to quantify the uncertainty of a test input being an OOD sample.
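A loose sketch of this idea using the POT library (pip install pot); the uniform marginals, the entropic regularization strength, and the entropy-of-the-plan reading are assumptions, not the paper's exact construction.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def conditional_entropy_scores(train_feats, test_feats, reg=0.1):
    """Entropy of each test point's conditional transport distribution.
    Intuition: an in-distribution point sends its mass mostly to nearby
    training points (low entropy); an OOD point spreads it (high entropy)."""
    n, m = len(train_feats), len(test_feats)
    a = np.full(n, 1.0 / n)                  # uniform mass on training set
    b = np.full(m, 1.0 / m)                  # uniform mass on test batch
    M = ot.dist(train_feats, test_feats)     # squared-Euclidean cost matrix
    M = M / M.max()                          # normalize for stability
    P = ot.sinkhorn(a, b, M, reg)            # entropic OT plan, shape (n, m)
    cond = P / P.sum(axis=0, keepdims=True)  # condition on each test point
    return -(cond * np.log(cond + 1e-12)).sum(axis=0)  # (m,) higher = OOD
```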
arXiv Detail & Related papers (2024-01-22T07:07:32Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
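A hypothetical memory-bank sketch of such test-time adaptation; the k-NN score, the absorption threshold, and the class name are assumptions, not the paper's algorithm.

```python
import numpy as np

class KnnTestTimeAdapter:
    """Non-parametric sketch: score a test feature by its mean distance to
    the k nearest stored features, and absorb samples that look confidently
    in-distribution so the detector adapts to the shifting test stream."""
    def __init__(self, id_feats, k=5, absorb_quantile=0.25):
        self.bank = list(np.asarray(id_feats))
        self.k = k
        # absorb a test point only if it scores among the closest quarter
        # of the in-distribution data's own scores (self-distances included,
        # which is a small bias acceptable for a sketch)
        self.tau = np.quantile(self._scores(np.asarray(id_feats)),
                               absorb_quantile)

    def _scores(self, feats):
        bank = np.asarray(self.bank)
        d = np.linalg.norm(feats[:, None, :] - bank[None, :, :], axis=-1)
        return np.sort(d, axis=1)[:, :self.k].mean(axis=1)

    def score(self, feat):
        s = self._scores(np.asarray(feat)[None])[0]
        if s < self.tau:      # looks in-distribution: adapt online
            self.bank.append(np.asarray(feat))
        return s              # higher = more OOD-like
```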
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
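Under the assumption that the constant value predictions drift toward is close to the training label marginals, a minimal abstention rule might look as follows; this is one reading of the insight, not the paper's method.

```python
import numpy as np

def distance_to_constant(probs, class_marginals):
    """Risk-sensitive sketch: predictions reverting toward the constant
    solution (assumed here to be the training label marginals) signal OOD
    inputs, so abstain when the prediction gets too close to that constant.
    probs: (N, K) predicted class probabilities; class_marginals: (K,)."""
    return np.abs(probs - class_marginals).sum(axis=-1)

# usage: abstain on samples where the distance falls below a threshold
# tuned on held-out in-distribution data, e.g.
# abstain = distance_to_constant(probs, marginals) < 0.3
```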
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
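GNNSafe builds on the standard energy score for OOD detection; below is that score plus a simplified neighbor-averaging propagation step over the graph. The propagation coefficients and the dense adjacency matrix are illustrative assumptions, not GNNSafe's exact scheme.

```python
import torch

def energy_score(logits, T=1.0):
    """Energy OOD score: E(x) = -T * logsumexp(logits / T).
    Higher energy = lower model confidence = more OOD-like."""
    return -T * torch.logsumexp(logits / T, dim=-1)

def propagate_energy(energy, adj, alpha=0.5, iters=2):
    """Simplified consensus step: mix each node's energy with the mean
    energy of its neighbors, so connected (likely same-distribution)
    nodes converge toward similar scores.
    energy: (N,) node energies; adj: (N, N) dense adjacency matrix."""
    deg = adj.sum(dim=1).clamp(min=1)
    for _ in range(iters):
        energy = alpha * energy + (1 - alpha) * (adj @ energy) / deg
    return energy
```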
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection [24.518698391381204]
Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
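A hedged sketch of how in-distribution equivariance can yield a conformal nonconformity score; feature_fn and the transform set are placeholders, and iDECODe's actual score differs.

```python
import numpy as np

def equivariance_nonconformity(feature_fn, transforms, x):
    """Sketch: apply a set of input transformations and measure how much a
    model statistic varies across them. In-distribution inputs behave
    (approximately) equivariantly, so the spread stays small; pairing this
    score with conformal p-values (as in the first sketch above) gives a
    guaranteed false-positive rate.
    feature_fn: input -> 1-D feature vector; transforms: list of callables."""
    stats = np.stack([feature_fn(t(x)) for t in transforms])  # (T, D)
    return stats.std(axis=0).mean()  # scalar: higher = more OOD-like
```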
arXiv Detail & Related papers (2022-01-07T05:21:40Z)
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
The training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When part of the test samples are drawn from a distribution far away from that of the training samples, the trained neural network tends to make high-confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
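The snippet below is not the WOOD training objective; it only illustrates scoring with a Wasserstein distance between the predicted class distribution and the nearest one-hot distribution, using SciPy's 1-D distance (which imposes an arbitrary class ordering on the ground metric).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_ood_score(probs):
    """Illustrative score: W1 distance from the predicted class distribution
    to its nearest one-hot distribution. Confident in-distribution inputs
    sit near a one-hot vertex (low score); diffuse OOD predictions do not.
    probs: (K,) predicted class probabilities."""
    k = len(probs)
    support = np.arange(k, dtype=float)  # classes placed on a line
    best = np.inf
    for c in range(k):
        onehot = np.zeros(k)
        onehot[c] = 1.0
        best = min(best, wasserstein_distance(support, support,
                                              probs, onehot))
    return best
```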
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
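A toy sketch of combining a binary in-distribution discriminator with a standard classifier into one OOD-aware predictor; the blending rule is an assumption, and certifying the discriminator (the paper's key contribution) is beyond this snippet.

```python
import torch

def ood_aware_probs(class_logits, in_dist_logit):
    """Blend the classifier's softmax with the uniform distribution according
    to a separately trained in-distribution discriminator: inputs judged OOD
    are pushed toward uniform (maximally uncertain) predictions.
    class_logits: (N, K); in_dist_logit: (N,) logit of P(x is ID)."""
    p_in = torch.sigmoid(in_dist_logit).unsqueeze(-1)  # (N, 1)
    probs = torch.softmax(class_logits, dim=-1)        # (N, K)
    k = probs.shape[-1]
    return p_in * probs + (1.0 - p_in) / k
```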
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection [72.35532598131176]
We propose an unsupervised method to detect OOD samples using a k-NN density estimate.
We leverage a recent insight about label smoothing, which we call the Label Smoothed Embedding Hypothesis.
We show that our proposal outperforms many OOD baselines and also provide new finite-sample high-probability statistical results.
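A compact sketch of the k-NN density estimate on embeddings; the label-smoothing training that shapes the embedding space is described in the paper and not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_scores(train_emb, test_emb, k=10):
    """k-NN density proxy: distance to the k-th nearest training embedding.
    Larger distance = lower estimated density = more OOD-like."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    dists, _ = nn.kneighbors(test_emb)  # (M, k), sorted ascending
    return dists[:, -1]
```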
arXiv Detail & Related papers (2021-02-09T21:04:44Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
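The detection statistic such an ensemble enables might look like the snippet below; the artificial labeling scheme and the regularization that make members contradict each other only on OOD points are the paper's contribution and are not reproduced here.

```python
import numpy as np

def ensemble_disagreement(member_probs):
    """Transductive detection statistic: per-sample variance of class
    probabilities across ensemble members, summed over classes. After the
    regularized training, members agree on in-distribution points and
    contradict each other on the OOD points in the test batch.
    member_probs: (M, N, K) probabilities from M models."""
    return member_probs.var(axis=0).sum(axis=-1)  # (N,) higher = more OOD
```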
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder [1.7305469511995404]
Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless.
We show that OOD detection methods built on neural network encoders have no theoretical guarantee and are practically breakable by our OOD Attack algorithm.
We also show that Glow likelihood-based OOD detection is breakable as well.
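A generic sketch of the attack idea: optimize an input from noise so a fixed encoder maps it onto an in-distribution feature vector, fooling any detector that inspects only the encoder's feature space. The encoder, input shape, and optimizer settings are placeholders.

```python
import torch
import torch.nn.functional as F

def ood_attack(encoder, target_feature, shape=(1, 3, 32, 32),
               steps=200, lr=0.05):
    """Gradient-descent sketch: find an input whose encoding matches a
    known in-distribution feature. The result can look nothing like the
    training data yet score as in-distribution in feature space."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(encoder(x), target_feature)
        loss.backward()
        opt.step()
    return x.detach()
```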
arXiv Detail & Related papers (2020-09-17T02:10:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.