A Simple Test-Time Method for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2207.08210v1
- Date: Sun, 17 Jul 2022 16:02:58 GMT
- Title: A Simple Test-Time Method for Out-of-Distribution Detection
- Authors: Ke Fan, Yikai Wang, Qian Yu, Da Li, Yanwei Fu
- Abstract summary: This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated with the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
- Score: 45.11199798139358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are known to produce over-confident predictions on input
images, even when these images are out-of-distribution (OOD) samples. This
limits the applications of neural network models in real-world scenarios, where
OOD samples exist. Many existing approaches identify OOD instances by
exploiting various cues, such as finding irregular patterns in the feature
space, the logits space, the gradient space, or the raw image space. In contrast,
this paper proposes a simple Test-time Linear Training (ETLT) method for OOD
detection. Empirically, we find that the probabilities of input images being
out-of-distribution are surprisingly linearly correlated with the features
extracted by neural networks. To be specific, many state-of-the-art OOD
algorithms, although designed to measure reliability in different ways,
actually lead to OOD scores mostly linearly related to their image features.
Thus, by simply fitting a linear regression model at test time on the paired
image features and inferred OOD scores, we can make a more precise
OOD prediction for the test instances. We further propose an online variant of
the proposed method, which achieves promising performance and is more practical
in real-world applications. Remarkably, we improve FPR95 from $51.37\%$ to
$12.30\%$ on CIFAR-10 with maximum softmax probability as the base OOD
detector. Extensive experiments on several benchmark datasets show the efficacy
of ETLT for the OOD detection task.
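The abstract describes the method concretely enough to sketch. Below is a minimal, hedged illustration of the test-time linear training idea: compute a base OOD score (here the negative maximum softmax probability), then fit a linear regression from image features to those scores and use the regression output as the calibrated score. The use of scikit-learn and all function names are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def msp_score(logits):
    """Base OOD score: negative maximum softmax probability (higher = more OOD)."""
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -p.max(axis=1)

def etlt_scores(features, logits):
    """Fit a linear map from test-image features to the base OOD scores,
    then use the fitted predictions as the refined OOD scores."""
    base = msp_score(logits)
    reg = LinearRegression().fit(features, base)
    return reg.predict(features)
```

In this reading, the regression smooths the noisy per-image base scores through the approximately linear feature-score relationship the authors report, and test instances are thresholded on the predicted score rather than the raw MSP.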
Related papers
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
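The summary does not spell out the mechanics, but one common non-parametric, model-free instantiation is a k-nearest-neighbor score over a feature memory bank that grows with confidently in-distribution test samples. Everything below (threshold, update rule, names) is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

def knn_ood_score(x, memory, k=5):
    """Distance from x to its k-th nearest stored ID feature (higher = more OOD)."""
    d = np.linalg.norm(memory - x, axis=1)
    return np.sort(d)[min(k, len(d)) - 1]

def adapt_online(stream, memory, k=5, tau=1.0):
    """Score each arriving test feature; samples scored as confidently ID
    are appended to the memory bank, so the detector adapts during testing."""
    scores = []
    for x in stream:
        s = knn_ood_score(x, memory, k)
        scores.append(s)
        if s < tau:  # assumed rule: low score => treat as ID and memorize
            memory = np.vstack([memory, x])
    return np.array(scores), memory
```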
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
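The summary names WOOD's two ingredients without their coupling. A hedged sketch of how a binary ID-vs-OOD head and a contrastive term might be combined follows; the margin, weighting, and loss forms are assumptions rather than WOOD's actual objective.

```python
import torch
import torch.nn.functional as F

def wood_style_loss(id_feats, ood_feats, ood_logits, ood_labels,
                    margin=1.0, lam=0.5):
    """Binary classifier loss plus a contrastive term that pulls ID features
    toward their mean and pushes OOD features at least `margin` away."""
    bce = F.binary_cross_entropy_with_logits(ood_logits, ood_labels)
    center = id_feats.mean(dim=0)
    pull = ((id_feats - center) ** 2).sum(dim=1).mean()
    push = F.relu(margin - (ood_feats - center).norm(dim=1)).mean()
    return bce + lam * (pull + push)
```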
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
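Of AUTO's three components, the in-out-aware filter is the easiest to illustrate: split the unlabeled test stream into pseudo-ID, pseudo-OOD, and uncertain samples with two score thresholds. The thresholds and interface below are assumptions, not the paper's values.

```python
import numpy as np

def in_out_aware_filter(scores, low, high):
    """Partition unlabeled test samples by OOD score: confident ID (below
    `low`), confident OOD (above `high`), and uncertain in between."""
    pseudo_id = np.where(scores < low)[0]
    pseudo_ood = np.where(scores > high)[0]
    uncertain = np.where((scores >= low) & (scores <= high))[0]
    return pseudo_id, pseudo_ood, uncertain
```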
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful, and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to $17.0\%$ AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
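The generic energy score this line of work builds on is standard (Liu et al., 2020); GNNSafe's graph-specific propagation of these scores is not reproduced here, so the sketch shows only the per-sample (or per-node) score.

```python
import torch

def energy_score(logits, T=1.0):
    """Energy-based OOD score: -T * logsumexp(logits / T), computed per
    sample (or per node for a GNN); higher energy suggests OOD."""
    return -T * torch.logsumexp(logits / T, dim=-1)
```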
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as the pretext task for our OOD detection framework (MOOD).
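As a reconstruction-based illustration of the summary's claim, one can mask patches, reconstruct them with a masked-image model, and read the error as the OOD cue. The `model(images, mask)` interface is an assumed stand-in, not MOOD's actual pipeline.

```python
import torch

def reconstruction_ood_score(model, images, mask):
    """Mean masked-region reconstruction error per image (higher = more OOD).
    `mask` is 1 on masked pixels, 0 elsewhere, broadcastable to `images`."""
    with torch.no_grad():
        recon = model(images, mask)
    err = ((recon - images) ** 2 * mask).sum(dim=(1, 2, 3))
    return err / mask.sum(dim=(1, 2, 3)).clamp(min=1)
```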
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
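The information-geometric ingredient here is the Fisher-Rao distance; for two categorical distributions it has the closed form 2·arccos of the Bhattacharyya coefficient. How Igeood aggregates such distances into its score is beyond this summary, so only the distance itself is sketched.

```python
import numpy as np

def fisher_rao_distance(p, q, eps=1e-12):
    """Fisher-Rao geodesic distance between categorical distributions p, q."""
    bc = np.sum(np.sqrt(p * q + eps))        # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))
```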
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- EARLIN: Early Out-of-Distribution Detection for Resource-efficient Collaborative Inference [4.826988182025783]
Collaborative inference enables resource-constrained edge devices to make inferences by uploading inputs to a server.
While this setup works cost-effectively for successful inferences, it severely underperforms when the model faces input samples it was not trained on.
We propose a novel lightweight OOD detection approach that mines important features from the shallow layers of a pretrained CNN model.
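A hedged sketch of the early-detection idea: run only the network's first few layers on the device, score the pooled shallow features against ID statistics, and upload only inputs that pass. The Mahalanobis scoring and the stem split are assumptions, not EARLIN's exact feature-mining rule.

```python
import torch

def shallow_layer_score(stem, x, id_mean, id_cov_inv):
    """Mahalanobis distance of globally pooled shallow features to ID
    statistics estimated offline (higher = more OOD; flagged inputs are
    rejected on-device instead of being uploaded to the server)."""
    with torch.no_grad():
        f = stem(x).mean(dim=(2, 3))  # global-average-pool the feature map
    d = f - id_mean
    return torch.einsum('bi,ij,bj->b', d, id_cov_inv, d)
```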
arXiv Detail & Related papers (2021-06-25T18:43:23Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
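Given such an ensemble, the detection rule reduces to measuring disagreement at test time; variance across members is one simple choice of disagreement statistic, not necessarily the paper's.

```python
import numpy as np

def ensemble_disagreement(prob_list):
    """Per-sample disagreement across ensemble members that were regularized
    to contradict each other only on OOD inputs; high values flag OOD."""
    probs = np.stack(prob_list)              # (members, batch, classes)
    return probs.var(axis=0).sum(axis=-1)    # summed class-wise variance
```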
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Out-Of-Distribution Detection With Subspace Techniques And Probabilistic Modeling Of Features [7.219077740523682]
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs).
Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap, method to detect OOD samples in DNNs.
We apply linear statistical dimensionality-reduction techniques and nonlinear manifold-learning techniques to the high-dimensional features in order to capture the true subspace spanned by the features.
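One concrete member of this family is easy to sketch: fit a linear subspace (PCA) to ID training features and score test features by their residual outside that subspace. The component count is an arbitrary assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_residual_score(train_feats, test_feats, n_components=64):
    """Reconstruction residual w.r.t. the ID feature subspace (higher = more OOD)."""
    pca = PCA(n_components=n_components).fit(train_feats)
    recon = pca.inverse_transform(pca.transform(test_feats))
    return np.linalg.norm(test_feats - recon, axis=1)
```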
arXiv Detail & Related papers (2020-12-08T07:07:11Z)