A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection
- URL: http://arxiv.org/abs/2501.08821v1
- Date: Wed, 15 Jan 2025 14:19:03 GMT
- Title: A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection
- Authors: Konstantin Garov, Kamalika Chaudhuri
- Abstract summary: We characterize under what conditions OOD detection is uniformly and non-uniformly learnable. We show that in several cases, non-uniform learnability turns a number of negative results into positive ones. In all cases where OOD detection is learnable, we provide concrete learning algorithms and a sample-complexity analysis.
- Score: 25.788559173418363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms often encounter different or "out-of-distribution" (OOD) data at deployment time, and OOD detection is frequently employed to detect these examples. While it works reasonably well in practice, existing theoretical results on OOD detection are highly pessimistic. In this work, we take a closer look at this problem and make a distinction between uniform and non-uniform learnability, following PAC learning theory. We characterize under what conditions OOD detection is uniformly and non-uniformly learnable, and we show that in several cases, non-uniform learnability turns a number of negative results into positive ones. In all cases where OOD detection is learnable, we provide concrete learning algorithms and a sample-complexity analysis.
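For orientation, the uniform vs. non-uniform distinction can be phrased in standard PAC terms. The following formalization uses our own notation and sketches the textbook definitions; it is not necessarily the paper's exact setup.

```latex
% Sketch of the standard PAC-style distinction (our notation, not the paper's).
% Uniform learnability: a single sample-complexity function m works for
% every distribution D in the class \mathscr{D}.
\exists\, m : (0,1)^2 \to \mathbb{N}
\;\; \forall \varepsilon, \delta \in (0,1)
\;\; \forall D \in \mathscr{D}
\;\; \forall n \ge m(\varepsilon, \delta):
\quad
\Pr_{S \sim D^n}\!\left[ \mathrm{Risk}_D(A(S)) \le \inf_{h \in \mathcal{H}} \mathrm{Risk}_D(h) + \varepsilon \right] \ge 1 - \delta

% Non-uniform learnability: the required sample size may depend on D,
% so the quantifiers swap.
\forall D \in \mathscr{D}
\;\; \exists\, m_D : (0,1)^2 \to \mathbb{N}
\;\; \forall \varepsilon, \delta \in (0,1)
\;\; \forall n \ge m_D(\varepsilon, \delta):
\quad \text{(same guarantee as above)}
```

The quantifier swap is what lets non-uniform learnability rescue cases that are impossible uniformly: the sample size may grow with the difficulty of the particular distribution, rather than having to be bounded over the whole class at once.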
Related papers
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- On the Learnability of Out-of-distribution Detection [46.9442031620796]
This paper investigates the probably approximately correct (PAC) learning theory of OOD detection.
We prove several impossibility theorems for the learnability of OOD detection under some scenarios.
We then give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
arXiv Detail & Related papers (2024-04-07T08:17:48Z)
- Unified Out-Of-Distribution Detection: A Model-Specific Perspective [31.68704233156108]
Out-of-distribution (OOD) detection aims to identify test examples that do not belong to the training distribution.
We present a novel, unifying framework to study OOD detection in a broader scope.
arXiv Detail & Related papers (2023-04-13T20:31:35Z)
- Plugin estimators for selective classification with out-of-distribution detection [67.28226919253214]
Real-world classifiers can benefit from abstaining from predicting on samples where they have low confidence.
These settings have been the subject of extensive but disjoint study in the selective classification (SC) and out-of-distribution (OOD) detection literature.
Recent work on selective classification with OOD detection has argued for the unified study of these problems.
We propose new plugin estimators for SCOD that are theoretically grounded, effective, and generalise existing approaches.
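As a toy illustration of the plugin idea, one can combine a confidence score with an OOD score and abstain when either flags the input. The rule, function name, and thresholds below are ours for illustration, not the paper's estimator.

```python
import numpy as np

def scod_plugin_decision(softmax_probs, ood_scores, conf_thresh=0.7, ood_thresh=0.0):
    """Toy plugin rule for selective classification with OOD detection (SCOD).

    Abstain when either the classifier's confidence is low (classic selective
    classification) or an OOD score flags the input as out-of-distribution.
    The thresholds are illustrative placeholders, to be tuned in practice.
    """
    confidence = softmax_probs.max(axis=-1)       # max softmax probability
    predictions = softmax_probs.argmax(axis=-1)   # class decision if accepted
    abstain = (confidence < conf_thresh) | (ood_scores > ood_thresh)
    return np.where(abstain, -1, predictions)     # -1 encodes "abstain"

# Example: three samples, two classes; the second has a high OOD score,
# the third has low confidence, so both are rejected.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.55, 0.45]])
scores = np.array([-1.0, 2.0, -0.5])
print(scod_plugin_decision(probs, scores))        # -> [ 0 -1 -1]
```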
arXiv Detail & Related papers (2023-01-29T07:45:17Z)
- Is Out-of-Distribution Detection Learnable? [45.377641783085046]
We investigate the probably approximately correct (PAC) learning theory of OOD detection.
We prove several impossibility theorems for the learnability of OOD detection under some scenarios.
We then give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
arXiv Detail & Related papers (2022-10-26T13:35:19Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
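For reference, the confidence loss used by Outlier Exposure (Hendrycks et al., 2019) is commonly implemented as cross-entropy on in-distribution data plus a term pulling the predictive distribution on auxiliary outliers toward uniform. A minimal PyTorch sketch (the weight `lam` is a hyperparameter; 0.5 is a common choice, treated here as a placeholder):

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Confidence loss in the style of Outlier Exposure.

    Standard cross-entropy on in-distribution data, plus a term pushing
    the model's predictive distribution on auxiliary outliers toward the
    uniform distribution over the K classes.
    """
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy to the uniform distribution: -(1/K) * sum_k log p_k,
    # averaged over the outlier batch.
    log_probs_out = F.log_softmax(logits_out, dim=-1)
    ce_uniform = -log_probs_out.mean(dim=-1).mean()
    return ce_in + lam * ce_uniform
```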
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Contrastive Training for Improved Out-of-Distribution Detection [36.61315534166451]
This paper proposes and investigates the use of contrastive training to boost OOD detection performance.
We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks.
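"Contrastive training" in this line of work is typically of the SimCLR family. Below is a minimal NT-Xent sketch as one common instantiation; it is not necessarily the exact objective used in the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss.

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    batch; row i of z1 and row i of z2 form a positive pair, and all
    other rows act as negatives.
    """
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine sims
    sim.fill_diagonal_(float('-inf'))                    # mask self-similarity
    # The positive for row i is its counterpart in the other view.
    idx = torch.arange(batch, device=z1.device)
    targets = torch.cat([idx + batch, idx])
    return F.cross_entropy(sim, targets)
```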
arXiv Detail & Related papers (2020-07-10T18:40:37Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when in-distribution and OOD inputs are evaluated under small adversarial perturbations.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
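A sketch of what "exposing the model to adversarially crafted inlier and outlier examples" could look like, based only on this abstract; the helper names, loss composition, and hyperparameters below are ours, not ALOE's actual implementation.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, loss_fn, eps=8/255, alpha=2/255, steps=5):
    """Projected-gradient perturbation that *maximizes* loss_fn(model(x)).
    A generic L_inf PGD helper, written here for illustration."""
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)                          # valid pixel range
    return x_adv.detach()

def aloe_style_step(model, x_in, y_in, x_out, lam=0.5):
    """One training step in the spirit of the abstract: adversarially
    perturb both inliers (against cross-entropy) and auxiliary outliers
    (against the push-toward-uniform term), then train on both."""
    uniform_ce = lambda logits: -F.log_softmax(logits, dim=-1).mean()
    x_in_adv = pgd_perturb(model, x_in, lambda lg: F.cross_entropy(lg, y_in))
    x_out_adv = pgd_perturb(model, x_out, uniform_ce)
    return F.cross_entropy(model(x_in_adv), y_in) + lam * uniform_ce(model(x_out_adv))
```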
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.