XOOD: Extreme Value Based Out-Of-Distribution Detection For Image
Classification
- URL: http://arxiv.org/abs/2208.00629v1
- Date: Mon, 1 Aug 2022 06:22:33 GMT
- Authors: Frej Berglind, Haron Temam, Supratik Mukhopadhyay, Kamalika Das, Md
Saiful Islam Sajol, Sricharan Kumar, Kumar Kallurupalli
- Abstract summary: We present XOOD: a novel extreme value-based OOD detection framework for image classification.
Both algorithms rely on the signals captured by the extreme values of the data in the activation layers of the neural network.
We show experimentally that both XOOD-M and XOOD-L outperform state-of-the-art OOD detection methods on many benchmark data sets in both efficiency and accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting out-of-distribution (OOD) data at inference time is crucial for
many applications of machine learning. We present XOOD: a novel extreme
value-based OOD detection framework for image classification that consists of
two algorithms. The first, XOOD-M, is completely unsupervised, while the second,
XOOD-L, is self-supervised. Both algorithms rely on the signals captured by the
extreme values of the data in the activation layers of the neural network in
order to distinguish between in-distribution and OOD instances. We show
experimentally that both XOOD-M and XOOD-L outperform state-of-the-art OOD
detection methods on many benchmark data sets in both efficiency and accuracy,
reducing the false-positive rate at 95% true-positive rate (FPR95) by 50% while
improving inference time by an order of magnitude.
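The abstract stops short of implementation details, but the core idea, summarizing each activation layer by its extreme values and scoring distance to in-distribution statistics, can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm; the function names and the z-score distance are assumptions:

```python
import numpy as np

def extreme_value_features(layer_activations):
    """Concatenate the min and max activation of each layer into one feature vector.
    `layer_activations` is a list of 1-D arrays, one per layer."""
    return np.array([f(a) for a in layer_activations for f in (np.min, np.max)])

def fit_id_statistics(id_features):
    """Mean/std of extreme-value features over in-distribution data (rows = samples)."""
    mu = id_features.mean(axis=0)
    sigma = id_features.std(axis=0) + 1e-8  # avoid division by zero
    return mu, sigma

def ood_score(features, mu, sigma):
    """Mean absolute z-score against ID statistics: larger suggests OOD."""
    return float(np.abs((features - mu) / sigma).mean())
```

In practice the per-layer activations would come from a trained classifier's forward pass, and a learned scorer (as the self-supervised XOOD-L presumably uses) could replace the simple z-score distance.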
Related papers
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have semantic content close to that of the in-distribution (ID) samples, which makes deciding whether they are OOD a Sorites paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
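The second idea, overlaying tail-class images onto context-rich OOD images, might be sketched as below. This is a hypothetical illustration of such an augmentation, not EAT's exact recipe; both the alpha-blend and the patch-paste variant are assumptions:

```python
import numpy as np

def overlay_tail_on_ood(tail_img, ood_img, alpha=0.7):
    """Blend a tail-class image onto an OOD background (simple alpha mix).
    Both images are float arrays of the same shape with values in [0, 1]."""
    return alpha * tail_img + (1.0 - alpha) * ood_img

def paste_patch(tail_img, ood_img, top, left, size):
    """CutMix-style variant: paste a square crop of the tail-class image
    onto the OOD image, keeping the OOD context around it."""
    out = ood_img.copy()
    out[top:top + size, left:left + size] = tail_img[top:top + size, left:left + size]
    return out
```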
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement [41.650761556671775]
In this paper, we offer insights and analyses of recent state-of-the-art out-of-distribution (OOD) detection methods.
We demonstrate that activation pruning has a detrimental effect on OOD detection, while activation scaling enhances it.
We achieve AUROC gains of +1.85% for near-OOD and +0.74% for far-OOD datasets on the OpenOOD v1.5 ImageNet-1K benchmark.
arXiv Detail & Related papers (2023-09-30T02:10:54Z)
- HAct: Out-of-Distribution Detection with Neural Net Activation Histograms [7.795929277007233]
We propose a novel descriptor, HAct, for OOD detection: probability distributions (approximated by histograms) of the output values of neural network layers in response to incoming data.
We demonstrate that HAct is significantly more accurate than the state of the art in OOD detection on multiple image classification benchmarks.
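A histogram descriptor of this kind might look like the following sketch, using L1 distance to a reference in-distribution histogram as the score. The bin count, value range, and distance metric here are assumptions, not HAct's published configuration:

```python
import numpy as np

def hact_descriptor(layer_output, bins=64, value_range=(-10.0, 10.0)):
    """Histogram of a layer's output values, normalized to a probability distribution."""
    hist, _ = np.histogram(layer_output, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def descriptor_distance(p, q):
    """L1 distance between two histogram descriptors; a large distance suggests OOD."""
    return float(np.abs(p - q).sum())
```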
arXiv Detail & Related papers (2023-09-09T16:22:18Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Evaluations of out-of-distribution (OOD) detection methods assume access to test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as the pretext task for our OOD detection framework, MOOD.
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probability of an input image being out-of-distribution is, surprisingly, linearly correlated with the features extracted by neural networks.
We also propose an online variant, which achieves promising performance and is more practical for real-world applications.
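The linear-correlation observation suggests a simple calibration: fit a linear map from features to an initial OOD score at test time. The sketch below is a rough illustration under that assumption; the calibration target and function names are hypothetical, not the paper's ETLT procedure:

```python
import numpy as np

def fit_linear_probe(features, scores):
    """Least-squares fit of a linear map (with bias) from per-sample features
    to an initial OOD score, e.g. a softmax-confidence score."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w

def probe_score(features, w):
    """Calibrated OOD scores from the fitted linear probe."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w
```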
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.