Out-of-distribution Detection with Deep Nearest Neighbors
- URL: http://arxiv.org/abs/2204.06507v1
- Date: Wed, 13 Apr 2022 16:45:21 GMT
- Title: Out-of-distribution Detection with Deep Nearest Neighbors
- Authors: Yiyou Sun, Yifei Ming, Xiaojin Zhu, Yixuan Li
- Abstract summary: Out-of-distribution (OOD) detection is a critical task for deploying machine learning models in the open world.
In this paper, we explore the efficacy of non-parametric nearest-neighbor distance for OOD detection.
We demonstrate the effectiveness of nearest-neighbor-based OOD detection on several benchmarks and establish superior performance.
- Score: 33.71627349163909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is a critical task for deploying machine
learning models in the open world. Distance-based methods have demonstrated
promise, where testing samples are detected as OOD if they are relatively far
away from in-distribution (ID) data. However, prior methods impose a strong
distributional assumption of the underlying feature space, which may not always
hold. In this paper, we explore the efficacy of non-parametric nearest-neighbor
distance for OOD detection, which has been largely overlooked in the
literature. Unlike prior works, our method does not impose any distributional
assumption, hence providing stronger flexibility and generality. We demonstrate
the effectiveness of nearest-neighbor-based OOD detection on several benchmarks
and establish superior performance. Under the same model trained on
ImageNet-1k, our method substantially reduces the false positive rate
(FPR@TPR95) by 24.77% compared to a strong baseline SSD+, which uses a
parametric approach Mahalanobis distance in detection.
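For readers who want a concrete picture of the non-parametric score described above, the sketch below shows one way to compute a k-th nearest-neighbor OOD score over L2-normalized penultimate features and to evaluate it with FPR@TPR95. It is a minimal illustration under assumed conventions (the feature extractor, the value of k, and the "accept below threshold" rule are assumptions here), not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def l2_normalize(feats, eps=1e-10):
    # Project penultimate-layer features onto the unit hypersphere.
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)

def knn_ood_scores(train_feats, test_feats, k=50):
    # OOD score = distance to the k-th nearest ID training feature;
    # a larger distance means the sample is more likely out-of-distribution.
    index = NearestNeighbors(n_neighbors=k).fit(l2_normalize(train_feats))
    dists, _ = index.kneighbors(l2_normalize(test_feats))  # (n_test, k), ascending
    return dists[:, -1]

def fpr_at_95_tpr(id_scores, ood_scores):
    # Threshold chosen so 95% of ID samples are accepted (score <= threshold);
    # FPR@TPR95 is the fraction of OOD samples that still slip under it.
    threshold = np.percentile(id_scores, 95)
    return float(np.mean(ood_scores <= threshold))

# Toy usage with random vectors standing in for a trained network's embeddings.
rng = np.random.default_rng(0)
id_train = rng.normal(size=(5000, 128))
id_test = rng.normal(size=(1000, 128))
ood_test = rng.normal(loc=1.5, size=(1000, 128))
id_s = knn_ood_scores(id_train, id_test, k=50)
ood_s = knn_ood_scores(id_train, ood_test, k=50)
print(f"FPR@TPR95: {fpr_at_95_tpr(id_s, ood_s):.3f}")
```

The parametric baseline mentioned in the abstract (SSD+) would instead fit a Gaussian model to the ID features and score test points by Mahalanobis distance; a sketch of that contrast appears after the related-papers list below.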
Related papers
- Look Around and Find Out: OOD Detection with Relative Angles [24.369626931550794]
We propose a novel angle-based metric for OOD detection that is computed relative to the in-distribution structure.
Our method achieves state-of-the-art performance on CIFAR-10 and ImageNet benchmarks, reducing FPR95 by 0.88% and 7.74% respectively.
arXiv Detail & Related papers (2024-10-06T15:36:07Z)
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection [63.93728560200819]
Unsupervised out-of-distribution (U-OOD) detection aims to identify OOD samples using a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on deep generative models (DGMs) to move beyond likelihood.
We apply two techniques for each direction, specifically post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
- How to Overcome Curse-of-Dimensionality for Out-of-Distribution Detection? [29.668859994222238]
We propose a novel framework, Subspace Nearest Neighbor (SNN), for OOD detection.
In training, our method regularizes the model and its feature representation by leveraging the most relevant subset of dimensions.
Compared to the current best distance-based method, SNN reduces the average FPR95 by 15.96% on the CIFAR-100 benchmark.
arXiv Detail & Related papers (2023-12-22T06:04:09Z)
- Fast Decision Boundary based Out-of-Distribution Detector [7.04686607977352]
Out-of-Distribution (OOD) detection is essential for the safe deployment of AI systems.
Existing feature space methods, while effective, often incur significant computational overhead.
We propose a computationally efficient OOD detector that does not rely on auxiliary models.
arXiv Detail & Related papers (2023-12-15T19:50:32Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework uses online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z)
- How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [22.519572587827213]
CIDER is a representation learning framework that exploits hyperspherical embeddings for OOD detection.
CIDER establishes superior performance, outperforming the latest rival by 19.36% in FPR95.
arXiv Detail & Related papers (2022-03-08T23:44:01Z)
- No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets [69.725266027309]
Out-of-distribution detection is an important component of reliable ML systems.
In this work, we show that none of the evaluated methods is inherently better at OOD detection than the others across a standardized set of 16 (ID, OOD) pairs.
We also show that a method outperforming another on a certain (ID, OOD) pair may not do so in a low-data regime.
arXiv Detail & Related papers (2021-09-12T16:35:00Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
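As a contrast to the non-parametric score sketched above, here is a hedged sketch of the kind of parametric, Mahalanobis-distance detector the abstract cites as the SSD+ baseline. The class-conditional means, tied covariance, and regularization constant are assumptions of this sketch, not SSD+'s exact recipe.

```python
import numpy as np

def fit_gaussian_head(train_feats, train_labels):
    # Per-class means and a single shared (tied) covariance over ID features.
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])
    centered = train_feats - means[np.searchsorted(classes, train_labels)]
    cov = centered.T @ centered / len(train_feats)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse
    return means, precision

def mahalanobis_ood_scores(test_feats, means, precision):
    # OOD score = smallest squared Mahalanobis distance to any class mean;
    # larger values indicate samples far from every fitted Gaussian.
    diffs = test_feats[:, None, :] - means[None, :, :]           # (n, C, d)
    d2 = np.einsum('ncd,de,nce->nc', diffs, precision, diffs)    # (n, C)
    return d2.min(axis=1)
```

The distributional assumption is visible in the fitted Gaussian: if ID features are not well described by class-conditional Gaussians with a shared covariance, this score degrades, which is precisely the failure mode the nearest-neighbor approach avoids.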
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.