Understanding Failures in Out-of-Distribution Detection with Deep
Generative Models
- URL: http://arxiv.org/abs/2107.06908v1
- Date: Wed, 14 Jul 2021 18:00:11 GMT
- Title: Understanding Failures in Out-of-Distribution Detection with Deep
Generative Models
- Authors: Lily H. Zhang, Mark Goldstein, Rajesh Ranganath
- Abstract summary: We prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant.
We highlight the consequences implied by assuming support overlap between in- and out-distributions.
Our results suggest that estimation error is a more plausible explanation than the misalignment between likelihood-based OOD detection and out-distributions of interest.
- Score: 22.11487118547924
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep generative models (DGMs) seem a natural fit for detecting
out-of-distribution (OOD) inputs, but such models have been shown to assign
higher probabilities or densities to OOD images than images from the training
distribution. In this work, we explain why this behavior should be attributed
to model misestimation. We first prove that no method can guarantee performance
beyond random chance without assumptions on which out-distributions are
relevant. We then interrogate the typical set hypothesis, the claim that
relevant out-distributions can lie in high likelihood regions of the data
distribution, and that OOD detection should be defined based on the data
distribution's typical set. We highlight the consequences implied by assuming
support overlap between in- and out-distributions, as well as the arbitrariness
of the typical set for OOD detection. Our results suggest that estimation error
is a more plausible explanation than the misalignment between likelihood-based
OOD detection and out-distributions of interest, and we illustrate how even
minimal estimation error can lead to OOD detection failures, yielding
implications for future work in deep generative modeling and OOD detection.
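To make the tension described in the abstract concrete, here is a minimal, purely illustrative sketch (not code from the paper): a multivariate Gaussian fitted to training data stands in for a deep generative model, and a plain low-likelihood threshold is compared against a typical-set-style test on the log-density. The dimensionality, thresholds, and the stand-in model are assumptions made only for illustration.

```python
# Illustrative sketch only: a Gaussian fitted to training data stands in for a
# deep generative model (DGM). With a real DGM, replace `log_prob` with the
# model's log-density.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 32
x_train = rng.normal(size=(5000, d))          # in-distribution training data

model = multivariate_normal(mean=x_train.mean(0), cov=np.cov(x_train.T))

def log_prob(x):
    return model.logpdf(x)

train_ll = log_prob(x_train)

# Rule 1 (likelihood threshold): flag inputs whose log-density falls below a low
# quantile of the training log-densities.
ll_threshold = np.quantile(train_ll, 0.05)

def ood_by_likelihood(x):
    return log_prob(x) < ll_threshold

# Rule 2 (typical-set style): flag inputs whose log-density is far from the mean
# training log-density in either direction, so that unusually high-density inputs
# are also rejected.
mu_ll, sd_ll = train_ll.mean(), train_ll.std()

def ood_by_typicality(x, k=3.0):
    return np.abs(log_prob(x) - mu_ll) > k * sd_ll

# An out-distribution concentrated near the mode has higher density than typical
# training points: the likelihood rule misses it, the typicality rule flags it.
x_ood = rng.normal(scale=0.05, size=(1000, d))
print("likelihood rule flags:", ood_by_likelihood(x_ood).mean())
print("typicality rule flags:", ood_by_typicality(x_ood).mean())
```

Under this toy setup, an out-distribution concentrated near the mode receives higher density than typical training points, so the likelihood rule passes it while the typicality rule rejects it; the paper argues that whether such a rejection is desirable depends on assumptions about which out-distributions are relevant.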
Related papers
- Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z) - The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z) - A Geometric Explanation of the Likelihood OOD Detection Paradox [19.205693812937422]
We show that high-likelihood regions will not be generated if they contain minimal probability mass.
We propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM (a hedged sketch of such a pairing appears after this list).
arXiv Detail & Related papers (2024-03-27T18:02:49Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Robustness to Spurious Correlations Improves Semantic
Out-of-Distribution Detection [24.821151013905865]
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs.
We provide a possible explanation for semantic novelty (SN-OOD) detection failures and propose nuisance-aware OOD detection to address them.
arXiv Detail & Related papers (2023-02-08T15:28:33Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Rethinking Out-of-Distribution Detection From a Human-Centric
Perspective [22.834986963880482]
Out-Of-Distribution (OOD) detection aims to ensure the reliability and safety of deep neural networks (DNNs) in real-world scenarios.
We propose a human-centric evaluation and conduct extensive experiments on 45 classifiers and 8 test datasets.
We find that a simple baseline OOD detection method can achieve performance comparable to, and even better than, recently proposed methods.
arXiv Detail & Related papers (2022-11-30T06:34:50Z) - A statistical theory of out-of-distribution detection [26.928175726673615]
We introduce a principled approach to detecting out-of-distribution data by exploiting a connection to data curation.
In data curation, we exclude ambiguous or difficult-to-classify input points from the dataset, and these excluded points are by definition OOD.
We can therefore obtain the likelihood for OOD points by using a principled generative model of data-curation.
arXiv Detail & Related papers (2021-02-24T12:35:43Z) - Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z) - The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
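For the likelihood/LID pairing mentioned above in "A Geometric Explanation of the Likelihood OOD Detection Paradox", here is a hedged sketch under strong simplifications: a Gaussian again stands in for the pre-trained DGM, local intrinsic dimension (LID) is estimated with the classical Levina-Bickel kNN MLE rather than the DGM-based estimator that paper derives, and the dual-threshold rule is an illustrative assumption rather than the authors' exact decision rule.

```python
# Hedged sketch: likelihood + local intrinsic dimension (LID) for OOD detection.
# The LID estimator and the decision rule below are simplifying assumptions, not
# the referenced paper's implementation.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
d = 16
x_train = rng.normal(size=(4000, d))   # in-distribution training data
x_val = rng.normal(size=(1000, d))     # held-out in-distribution data

# Stand-in "DGM": a Gaussian density fitted to the training data.
density = multivariate_normal(mean=x_train.mean(0), cov=np.cov(x_train.T))

def lid_mle(x, k=20):
    """Levina-Bickel kNN MLE of local intrinsic dimension, estimated within a batch."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(x).kneighbors(x)
    dist = dist[:, 1:]                 # drop the zero distance to the point itself
    return 1.0 / np.mean(np.log(dist[:, -1:] / dist[:, :-1]), axis=1)

# Thresholds calibrated on held-out in-distribution data.
ll_thr = np.quantile(density.logpdf(x_val), 0.05)
lid_thr = np.quantile(lid_mle(x_val), 0.05)

def ood_by_likelihood(x):
    return density.logpdf(x) < ll_thr

def ood_by_likelihood_and_lid(x):
    # Flag as OOD when the likelihood is low OR the local intrinsic dimension is
    # low; the second test targets high-likelihood regions with little probability mass.
    return (density.logpdf(x) < ll_thr) | (lid_mle(x) < lid_thr)

# OOD batch on a 2-D subspace near the mode: high density, low intrinsic dimension.
x_ood = np.zeros((500, d))
x_ood[:, :2] = rng.normal(scale=0.1, size=(500, 2))

print("flagged by likelihood alone:", ood_by_likelihood(x_ood).mean())
print("flagged by likelihood + LID:", ood_by_likelihood_and_lid(x_ood).mean())
```

In this toy setup the likelihood threshold alone accepts the near-mode OOD batch, while the added LID test rejects it.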
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.