Is Out-of-Distribution Detection Learnable?
- URL: http://arxiv.org/abs/2210.14707v1
- Date: Wed, 26 Oct 2022 13:35:19 GMT
- Title: Is Out-of-Distribution Detection Learnable?
- Authors: Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu
- Abstract summary: We investigate the probably approximately correct (PAC) learning theory of OOD detection.
We prove several impossibility theorems for the learnability of OOD detection under some scenarios.
We then give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
- Score: 45.377641783085046
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Supervised learning aims to train a classifier under the assumption that
training and test data are drawn from the same distribution. To relax this
assumption, researchers have studied a more realistic setting:
out-of-distribution (OOD) detection, where test data may come from classes that
are unknown during training (i.e., OOD data). Due to the unavailability and
diversity of OOD data, good generalization ability is crucial for effective OOD
detection algorithms. To study the generalization of OOD detection, in this
paper, we investigate the probably approximately correct (PAC) learning theory
of OOD detection, which researchers have posed as an open problem. First,
we find a necessary condition for the learnability of OOD detection. Then,
using this condition, we prove several impossibility theorems for the
learnability of OOD detection under some scenarios. Although the impossibility
theorems are frustrating, we find that some conditions of these impossibility
theorems may not hold in some practical scenarios. Based on this observation,
we next give several necessary and sufficient conditions to characterize the
learnability of OOD detection in some practical scenarios. Lastly, we also
offer theoretical support for several representative OOD detection works based
on our OOD theory.
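For context, the snippet below sketches how PAC-style learnability of OOD detection is typically formalized; the notation (risk R, domain space, algorithm A, rate epsilon) is shorthand chosen for illustration and may differ from the paper's exact definitions.

```latex
% Illustrative sketch (notation assumed, not quoted from the paper).
% D_{XY}: a joint distribution over features X and labels Y \cup {OOD};
% only ID data are observed at training time. Risk of a detector h:
\[
  R_{D}(h) \;=\; \mathbb{E}_{(\mathbf{x},y)\sim D_{XY}}\,
                 \ell\big(h(\mathbf{x}),\, y\big).
\]
% OOD detection is (PAC) learnable over a domain space \mathscr{D} w.r.t. a
% hypothesis space \mathcal{H} if there exist an algorithm A and a rate
% \epsilon(n) \to 0 such that, for every D_{XY} \in \mathscr{D},
\[
  \mathbb{E}_{S \sim D_{X_I Y_I}^{\,n}}
  \Big[ R_{D}\big(\mathrm{A}(S)\big) \Big]
  \;-\; \inf_{h \in \mathcal{H}} R_{D}(h) \;\le\; \epsilon(n),
\]
% where S is a training set of n ID samples. The impossibility theorems state
% that no such A exists for certain domain spaces; the necessary and
% sufficient conditions characterize when one does.
```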
Related papers
- A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection [25.788559173418363]
We characterize under what conditions OOD detection is uniformly and non-uniformly learnable.
We show that in several cases, non-uniform learnability turns a number of negative results into positive ones.
In all cases where OOD detection is learnable, we provide concrete learning algorithms and a sample-complexity analysis.
arXiv Detail & Related papers (2025-01-15T14:19:03Z)
- Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z)
- The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection [75.65876949930258]
Out-of-distribution (OOD) detection is essential for model trustworthiness.
We show that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability.
arXiv Detail & Related papers (2024-10-12T07:02:04Z)
- On the Learnability of Out-of-distribution Detection [46.9442031620796]
This paper investigates the probably approximately correct (PAC) learning theory of OOD detection.
We prove several impossibility theorems for the learnability of OOD detection under some scenarios.
We then give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
arXiv Detail & Related papers (2024-04-07T08:17:48Z)
- Can Pre-trained Networks Detect Familiar Out-of-Distribution Data? [37.36999826208225]
We study the effect of PT-OOD on the OOD detection performance of pre-trained networks.
We find that the low linear separability of PT-OOD in the feature space heavily degrades the PT-OOD detection performance.
We propose a solution tailored to large-scale pre-trained models: leveraging their powerful instance-by-instance discriminative representations, as sketched below.
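To make instance-level scoring concrete, here is a minimal sketch of a k-nearest-neighbour OOD score computed on features from a frozen pre-trained encoder; the function name, the choice of k, and the L2 normalization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def knn_ood_scores(id_features: np.ndarray, test_features: np.ndarray, k: int = 10) -> np.ndarray:
    """Score test samples by distance to their k-th nearest ID neighbour.

    Larger scores mean the sample sits farther from the ID feature manifold,
    i.e. is more likely OOD. Features are L2-normalized first, a common choice
    for embeddings from contrastively pre-trained encoders.
    """
    id_norm = id_features / np.linalg.norm(id_features, axis=1, keepdims=True)
    test_norm = test_features / np.linalg.norm(test_features, axis=1, keepdims=True)
    # Pairwise Euclidean distances between test and ID features.
    dists = np.linalg.norm(test_norm[:, None, :] - id_norm[None, :, :], axis=-1)
    # Distance to the k-th nearest ID neighbour is the OOD score.
    return np.sort(dists, axis=1)[:, k - 1]

# Usage (shapes illustrative): features extracted once from a frozen encoder.
# scores = knn_ood_scores(train_feats, test_feats, k=10)
# predicted_ood = scores > threshold  # threshold chosen on held-out ID data
```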
arXiv Detail & Related papers (2023-10-02T02:01:00Z)
- Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection [28.810524375810736]
Out-of-distribution (OOD) detection is a critical task for reliable predictions over text.
Fine-tuning with pre-trained language models has been a de facto procedure to derive OOD detectors.
We show that using distance-based detection methods, pre-trained language models are near-perfect OOD detectors when the distribution shift involves a domain change.
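As an illustration of such a distance-based detector, the sketch below fits a class-conditional Gaussian with a shared covariance on in-domain embeddings from a pre-trained language model and scores test points by their minimum Mahalanobis distance; the helper names and the use of a pseudo-inverse are assumptions for the sketch, not the paper's exact recipe.

```python
import numpy as np

def fit_mahalanobis(embeddings: np.ndarray, labels: np.ndarray):
    """Fit per-class means and a shared covariance on in-domain embeddings."""
    classes = np.unique(labels)
    means = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([embeddings[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(embeddings)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return means, precision

def mahalanobis_ood_score(x: np.ndarray, means, precision) -> float:
    """OOD score = minimum class-conditional Mahalanobis distance (larger = more OOD)."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return min(dists)

# Usage (illustrative): embeddings come from a frozen pre-trained LM encoder.
# means, precision = fit_mahalanobis(train_embeds, train_labels)
# score = mahalanobis_ood_score(test_embed, means, precision)
```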
arXiv Detail & Related papers (2023-05-22T17:42:44Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
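One generic way a reconstruction signal can be turned into an OOD score is sketched below: mask random patches, reconstruct them, and use the error on the hidden patches as the score. The `reconstruct_fn` hook and the masking scheme are placeholders; MOOD itself uses masked image modeling as a pretext task and may score samples differently.

```python
import numpy as np

def masked_reconstruction_score(image, reconstruct_fn, patch: int = 16,
                                mask_ratio: float = 0.6, seed: int = 0) -> float:
    """Score an image by how poorly a masked-image model reconstructs hidden patches.

    `reconstruct_fn(masked_image, mask)` is a placeholder for any model that
    returns a full reconstruction of the image from its unmasked patches;
    higher reconstruction error on the masked regions is read as more OOD.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < mask_ratio:
                mask[i:i + patch, j:j + patch] = True
    masked = image.copy()
    masked[mask] = 0.0                      # zero out the masked patches
    recon = reconstruct_fn(masked, mask)
    # Mean squared error restricted to the masked patches.
    return float(np.mean((recon[mask] - image[mask]) ** 2))

# Usage (illustrative): score = masked_reconstruction_score(img, model_reconstruct)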
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
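A minimal sketch of the scoring side of this idea: given per-member predicted probabilities from such an ensemble, samples on which members contradict each other are flagged as OOD. The function is illustrative and omits the paper's artificial-labeling and regularization steps used to train the ensemble.

```python
import numpy as np

def ensemble_disagreement_scores(member_probs: np.ndarray) -> np.ndarray:
    """Flag test points on which ensemble members contradict each other.

    `member_probs` has shape (n_members, n_samples, n_classes). The score is
    the fraction of members disagreeing with the majority class; if the
    ensemble was trained to disagree only on OOD data, a high score marks a
    sample as OOD.
    """
    votes = member_probs.argmax(axis=-1)               # (n_members, n_samples)
    n_members, n_samples = votes.shape
    scores = np.empty(n_samples)
    for i in range(n_samples):
        counts = np.bincount(votes[:, i])
        scores[i] = 1.0 - counts.max() / n_members     # 0.0 = full agreement
    return scores

# Usage (illustrative): probs stacked from each ensemble member's softmax output.
# scores = ensemble_disagreement_scores(np.stack(all_member_probs))
```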
arXiv Detail & Related papers (2020-12-10T16:55:13Z)