Uncertainty-based out-of-distribution detection requires suitable function space priors
- URL: http://arxiv.org/abs/2110.06020v1
- Date: Tue, 12 Oct 2021 14:11:37 GMT
- Title: Uncertainty-based out-of-distribution detection requires suitable function space priors
- Authors: Francesco D'Angelo and Christian Henning
- Abstract summary: We show that proper Bayesian inference with function space priors induced by neural networks does not necessarily lead to good OOD detection.
Desirable function space properties can be encoded in the prior in weight space; however, this currently only applies to a specified subset of the domain.
- Score: 1.90365714903665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need to avoid confident predictions on unfamiliar data has sparked
interest in out-of-distribution (OOD) detection. It is widely assumed that
Bayesian neural networks (BNNs) are well suited for this task, as the endowed
epistemic uncertainty should lead to disagreement in predictions on outliers.
In this paper, we question this assumption and show that proper Bayesian
inference with function space priors induced by neural networks does not
necessarily lead to good OOD detection. To circumvent the use of approximate
inference, we start by studying the infinite-width case, where Bayesian
inference can be exact due to the correspondence with Gaussian processes.
Strikingly, the kernels induced under common architectural choices lead to
uncertainties that do not reflect the underlying data generating process and
are therefore unsuited for OOD detection. Importantly, we find this OOD
behavior to be consistent with the corresponding finite-width networks.
Desirable function space properties can be encoded in the prior in weight
space; however, this currently only applies to a specified subset of the domain
and thus does not inherently extend to OOD data. Finally, we argue that a
trade-off between generalization and OOD capabilities might render the
application of BNNs for OOD detection undesirable in practice. Overall, our
study discloses fundamental problems when naively using BNNs for OOD detection
and opens interesting avenues for future research.
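To make the infinite-width correspondence concrete, below is a minimal sketch (not the authors' code; the architecture, depth, and the hyperparameters sigma_w2/sigma_b2 are illustrative assumptions) of the naive recipe the paper scrutinizes: compute the NNGP kernel of a fully connected ReLU network in closed form, perform exact GP inference, and use the posterior predictive variance as an OOD score.

```python
# Sketch: exact Bayesian inference in the infinite-width limit via the NNGP kernel
# of a fully connected ReLU network, with GP predictive variance as an OOD score.
# Hyperparameters (depth, sigma_w2, sigma_b2, noise) are illustrative assumptions.
import numpy as np

def relu_nngp_kernel(X1, X2, depth=3, sigma_w2=2.0, sigma_b2=0.1):
    """NNGP kernel of a deep ReLU network (arc-cosine recursion over layers)."""
    d = X1.shape[1]
    # Layer-0 (input) covariances
    K12 = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d
    K11 = sigma_b2 + sigma_w2 * np.sum(X1 * X1, axis=1) / d   # diag for X1
    K22 = sigma_b2 + sigma_w2 * np.sum(X2 * X2, axis=1) / d   # diag for X2
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        cos_t = np.clip(K12 / norm, -1.0, 1.0)
        theta = np.arccos(cos_t)
        # E[relu(u) relu(v)] under a bivariate Gaussian (Cho & Saul closed form)
        K12 = sigma_b2 + sigma_w2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * cos_t)
        # E[relu(z)^2] = K/2 for zero-mean Gaussian z
        K11 = sigma_b2 + sigma_w2 / 2.0 * K11
        K22 = sigma_b2 + sigma_w2 / 2.0 * K22
    return K12, K11, K22

def gp_predictive_variance(X_train, X_test, noise=1e-2, **kw):
    """Exact GP posterior variance; the naive OOD score is 'larger variance = more OOD'."""
    K_tt, _, _ = relu_nngp_kernel(X_train, X_train, **kw)
    K_ts, _, diag_s = relu_nngp_kernel(X_train, X_test, **kw)
    L = np.linalg.cholesky(K_tt + noise * np.eye(len(X_train)))
    v = np.linalg.solve(L, K_ts)                 # L^{-1} k_*
    return diag_s - np.sum(v * v, axis=0)        # k_** - k_*^T (K + sI)^{-1} k_*

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))             # in-distribution training inputs
X_in    = rng.normal(size=(50, 10))              # held-out in-distribution inputs
X_out   = 5.0 + rng.normal(size=(50, 10))        # shifted, nominally OOD inputs
print("mean variance (in-dist):", gp_predictive_variance(X_train, X_in).mean())
print("mean variance (OOD):    ", gp_predictive_variance(X_train, X_out).mean())
```

The paper's point is precisely that, for kernels induced by common architectures, this predictive variance need not track distance from the data-generating process, so the score can fail to separate in-distribution from OOD inputs.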
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Are Bayesian neural networks intrinsically good at out-of-distribution detection? [4.297070083645049]
It is widely assumed that Bayesian neural networks (BNN) are well suited for out-of-distribution (OOD) detection.
In this paper, we provide empirical evidence that proper Bayesian inference with common neural network architectures does not necessarily lead to good OOD detection.
arXiv Detail & Related papers (2021-07-26T14:53:14Z)
- Understanding Failures in Out-of-Distribution Detection with Deep Generative Models [22.11487118547924]
We prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant.
We highlight the consequences implied by assuming support overlap between in- and out-distributions.
Our results suggest that estimation error is a more plausible explanation for these failures than a misalignment between likelihood-based OOD detection and the out-distributions of interest.
arXiv Detail & Related papers (2021-07-14T18:00:11Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method where from first principles we combine a certifiable OOD detector with a standard classifier into an OOD aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Statistical Testing for Efficient Out of Distribution Detection in Deep Neural Networks [26.0303701309125]
This paper frames the Out Of Distribution (OOD) detection problem in Deep Neural Networks as a statistical hypothesis testing problem.
We build on this framework to suggest a novel OOD procedure based on low-order statistics.
Our method achieves results comparable to or better than the state of the art on well-accepted OOD benchmarks without retraining the network parameters.
arXiv Detail & Related papers (2021-02-25T16:14:47Z)
- A statistical theory of out-of-distribution detection [26.928175726673615]
We introduce a principled approach to detecting out-of-distribution data by exploiting a connection to data curation.
In data curation, we exclude ambiguous or difficult-to-classify input points from the dataset, and these excluded points are by definition OOD.
We can therefore obtain the likelihood for OOD points by using a principled generative model of data-curation.
arXiv Detail & Related papers (2021-02-24T12:35:43Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- Certifiably Adversarially Robust Detection of Out-of-Distribution Data [111.67388500330273]
We aim for certifiable worst-case guarantees for OOD detection by enforcing low confidence around the OOD point.
We show that non-trivial bounds on the confidence for OOD data generalizing beyond the OOD dataset seen at training time are possible.
arXiv Detail & Related papers (2020-07-16T17:16:47Z)