Nearest Neighbor Guidance for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2309.14888v1
- Date: Tue, 26 Sep 2023 12:40:35 GMT
- Title: Nearest Neighbor Guidance for Out-of-Distribution Detection
- Authors: Jaewoo Park, Yoon Gyo Jung, Andrew Beng Jin Teoh
- Abstract summary: We propose Nearest Neighbor Guidance (NNGuide) for detecting out-of-distribution (OOD) samples.
NNGuide reduces the overconfidence of OOD samples while preserving the fine-grained capability of the classifier-based score.
Our results demonstrate that NNGuide provides a significant performance improvement on the base detection scores.
- Score: 18.851275688720108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) samples is crucial for machine learning
models deployed in open-world environments. Classifier-based scores are a
standard approach for OOD detection due to their fine-grained detection
capability. However, these scores often suffer from overconfidence issues,
misclassifying OOD samples distant from the in-distribution region. To address
this challenge, we propose a method called Nearest Neighbor Guidance (NNGuide)
that guides the classifier-based score to respect the boundary geometry of the
data manifold. NNGuide reduces the overconfidence of OOD samples while
preserving the fine-grained capability of the classifier-based score. We
conduct extensive experiments on ImageNet OOD detection benchmarks under
diverse settings, including a scenario where the ID data undergoes natural
distribution shift. Our results demonstrate that NNGuide provides a significant
performance improvement on the base detection scores, achieving
state-of-the-art results on the AUROC, FPR95, and AUPR metrics. The code is
given at \url{https://github.com/roomo7time/nnguide}.
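For concreteness, here is a minimal sketch of the guidance idea: multiply a classifier-based confidence (here the energy score, one common choice) by the mean cosine similarity to the k nearest training features, so that samples far from the data manifold are scored down. The function names, the choice of base score, and k are illustrative assumptions of this sketch, not the authors' exact implementation; see the repository above for the reference code.
```python
# Hedged sketch of a nearest-neighbor-guided OOD score in the spirit of
# NNGuide. `feature_bank`, `k`, and the energy base score are illustrative
# assumptions, not the authors' exact API.
import numpy as np

def nnguide_score(feat, logits, feature_bank, k=10):
    """feat: (d,) test feature; logits: (c,) classifier logits;
    feature_bank: (n, d) L2-normalized in-distribution training features."""
    # Classifier-based base score: a numerically stable logsumexp (energy).
    m = logits.max()
    base_score = m + np.log(np.exp(logits - m).sum())

    # Nearest-neighbor guidance: mean cosine similarity to the k closest
    # training features; low for samples far from the data manifold.
    feat = feat / np.linalg.norm(feat)
    sims = feature_bank @ feat              # cosine similarities
    knn_sim = np.sort(sims)[-k:].mean()     # average over k nearest

    # Guided score: high only when the classifier is confident AND the
    # sample lies near the in-distribution manifold.
    return base_score * knn_sim
```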
Related papers
- Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-09-22T05:40:25Z) - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have semantic content close to that of the in-distribution (ID) samples, which makes determining whether a sample is OOD a Sorites Paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z) - GROOD: GRadient-aware Out-Of-Distribution detection in interpolated
manifolds [12.727088216619386]
Out-of-distribution detection in deep neural networks (DNNs) can pose risks in real-world deployments.
We introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space.
We show that GROOD surpasses the established robustness of state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-22T04:28:43Z) - EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z) - Beyond AUROC & co. for evaluating out-of-distribution detection
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z) - Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Evaluations of out-of-distribution (OOD) detection methods assume access to test ground truths, i.e., labels indicating whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to a 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in such an under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution
- On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection [7.221206118679026]
The ability to detect Out-of-Distribution (OOD) data is important in safety-critical applications of deep learning.
An existing intuition in the literature is that the diversity of Deep Ensemble predictions indicates distributional shift.
We show experimentally that this intuition is not valid on ImageNet-scale OOD detection.
arXiv Detail & Related papers (2022-07-15T15:02:38Z) - Energy-bounded Learning for Robust Models of Code [16.592638312365164]
- Energy-bounded Learning for Robust Models of Code [16.592638312365164]
In programming, learning code representations has a variety of applications, including code classification, code search, comment generation, and bug prediction.
We propose an energy-bounded learning objective that assigns higher scores to in-distribution samples and lower scores to out-of-distribution samples, so that out-of-distribution samples can be incorporated into the training of source code models.
arXiv Detail & Related papers (2021-12-20T06:28:56Z) - Triggering Failures: Out-Of-Distribution detection by learning from
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it achieves top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)