GradOrth: A Simple yet Efficient Out-of-Distribution Detection with
Orthogonal Projection of Gradients
- URL: http://arxiv.org/abs/2308.00310v1
- Date: Tue, 1 Aug 2023 06:12:12 GMT
- Title: GradOrth: A Simple yet Efficient Out-of-Distribution Detection with
Orthogonal Projection of Gradients
- Authors: Sima Behpour, Thang Doan, Xin Li, Wenbin He, Liang Gou, Liu Ren
- Abstract summary: out-of-distribution (OOD) data is crucial for ensuring the safe deployment of machine learning models in real-world applications.
We propose a novel approach called GradOrth to facilitate OOD detection based on one intriguing observation.
This simple yet effective method exhibits outstanding performance, showcasing a notable reduction in the average false positive rate at a 95% true positive rate (FPR95) of up to 8% when compared to the current state-of-the-art methods.
- Score: 24.50445616970387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting out-of-distribution (OOD) data is crucial for ensuring the safe
deployment of machine learning models in real-world applications. However,
existing OOD detection approaches primarily rely on feature maps or on the
full gradient space to derive OOD scores, neglecting the role of the
parameters of the pre-trained network that matter most for in-distribution
(ID) data. In this study, we propose a novel approach called GradOrth that
facilitates OOD detection based on one intriguing observation: the features
important for identifying OOD data lie in the lower-rank subspace of ID data.
In particular, we identify OOD data by computing the norm of the gradient
projection onto the subspaces considered important for the ID data. A large
orthogonal projection value (i.e., a small projection onto the ID subspace)
marks a sample as OOD, since it indicates weak correlation with the ID data. This simple
yet effective method exhibits outstanding performance, showcasing a notable
reduction in the average false positive rate at a 95% true positive rate
(FPR95) of up to 8% when compared to the current state-of-the-art methods.
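To make the scoring recipe concrete, here is a minimal sketch in PyTorch. It assumes the ID-important subspace is taken from an SVD of penultimate-layer ID features and that the gradient is a vector in that same feature space; these choices, and the names `id_subspace` and `gradorth_score`, are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

def id_subspace(id_features: torch.Tensor, energy: float = 0.95) -> torch.Tensor:
    """Orthonormal basis of the dominant ID feature subspace.

    id_features: (n, d) penultimate-layer features from ID data.
    Returns U_k: (d, k) basis keeping `energy` of the spectral mass
    (the fraction is an assumed hyperparameter).
    """
    U, S, _ = torch.linalg.svd(id_features.T @ id_features)
    ratios = torch.cumsum(S, dim=0) / S.sum()
    k = int(torch.searchsorted(ratios, torch.tensor(energy)).item()) + 1
    return U[:, :k]

def gradorth_score(grad: torch.Tensor, U_k: torch.Tensor) -> float:
    """Norm of the gradient's projection onto the ID-important subspace.

    grad: (d,) gradient of a loss w.r.t. the penultimate-layer features
    (an assumed choice). A small value, i.e., a large orthogonal
    component, suggests the sample is OOD.
    """
    proj = U_k @ (U_k.T @ grad)   # component lying inside the ID subspace
    return proj.norm().item()
```

In this sketch, `U_k` would be computed once from ID data; each test sample is then scored by backpropagating a loss (e.g., cross-entropy against the predicted label) to obtain `grad`, and a threshold on the score separates ID from OOD.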
Related papers
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
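As one way to picture idea (2), the following hedged sketch overlays a downscaled tail-class image onto a context-rich OOD image; the CutMix-style patch placement and the `scale` parameter are illustrative assumptions, not the paper's exact augmentation.

```python
import torch

def overlay_on_ood(tail_img: torch.Tensor, ood_img: torch.Tensor,
                   scale: float = 0.5) -> torch.Tensor:
    """Paste a downscaled tail-class image onto an OOD background.

    Both inputs are (C, H, W) tensors; `scale` controls patch size.
    The mixed image would keep the tail-class label, so the OOD image
    supplies extra context for the context-limited tail class.
    """
    _, H, W = ood_img.shape
    ph, pw = int(H * scale), int(W * scale)
    patch = torch.nn.functional.interpolate(
        tail_img.unsqueeze(0), size=(ph, pw), mode="bilinear",
        align_corners=False).squeeze(0)
    top = torch.randint(0, H - ph + 1, (1,)).item()
    left = torch.randint(0, W - pw + 1, (1,)).item()
    out = ood_img.clone()
    out[:, top:top + ph, left:left + pw] = patch  # tail foreground, OOD context
    return out
```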
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
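In this spirit, a minimal non-parametric sketch might score samples with a k-NN distance over a feature bank that grows with confidently in-distribution test samples. The class name `OnlineKNNDetector`, the cosine metric, and the acceptance threshold are all assumptions, not the paper's actual method.

```python
import torch

class OnlineKNNDetector:
    """Non-parametric OOD scoring with an online-updated feature bank."""

    def __init__(self, id_features: torch.Tensor, k: int = 10,
                 accept_threshold: float = 0.5):
        # Bank starts from ID training features; rows are L2-normalized.
        self.bank = torch.nn.functional.normalize(id_features, dim=1)
        self.k = k
        self.accept_threshold = accept_threshold

    def score(self, feat: torch.Tensor) -> float:
        """Mean cosine distance to the k nearest bank entries (high -> OOD)."""
        f = torch.nn.functional.normalize(feat, dim=0)
        dists = 1.0 - self.bank @ f
        return dists.topk(self.k, largest=False).values.mean().item()

    def update(self, feat: torch.Tensor) -> float:
        """Score a test sample; absorb confidently-ID samples into the bank."""
        s = self.score(feat)
        if s < self.accept_threshold:  # assumed ID -> adapt the bank online
            f = torch.nn.functional.normalize(feat, dim=0)
            self.bank = torch.cat([self.bank, f.unsqueeze(0)], dim=0)
        return s
```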
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make valid predictions, from in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data-generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that mitigates mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
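A hedged sketch of the post-hoc idea: keep only the neurons that contribute most to the logits and score from the masked output. The contribution-based importance proxy, the `keep_ratio` default, and the energy-style score are assumptions standing in for the paper's actual importance measure.

```python
import torch

def line_style_score(feat: torch.Tensor, fc_weight: torch.Tensor,
                     fc_bias: torch.Tensor, keep_ratio: float = 0.1) -> float:
    """Score from logits computed over only the most important neurons.

    feat: (d,) penultimate activation; fc_weight: (num_classes, d).
    Importance = magnitude of contribution to the logits (assumed proxy).
    Returns an energy-style score; a low value suggests OOD.
    """
    contrib = feat.abs() * fc_weight.abs().mean(dim=0)  # per-neuron importance
    k = max(1, int(keep_ratio * feat.numel()))
    mask = torch.zeros_like(feat)
    mask[contrib.topk(k).indices] = 1.0                 # keep top-k neurons only
    logits = fc_weight @ (feat * mask) + fc_bias
    return torch.logsumexp(logits, dim=0).item()
```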
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
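A minimal sketch of the general shape of energy-based node scoring on a graph, with neighbor smoothing: the propagation depth `hops`, the mixing weight `alpha`, and the row-normalized adjacency input are illustrative assumptions rather than GNNSafe's exact configuration.

```python
import torch

def gnn_energy_scores(logits: torch.Tensor, adj_norm: torch.Tensor,
                      hops: int = 2, alpha: float = 0.5) -> torch.Tensor:
    """Energy-based node OOD scores smoothed over the graph.

    logits: (n, c) per-node GNN logits; adj_norm: (n, n) row-normalized
    adjacency matrix. Higher energy -> more likely OOD.
    """
    energy = -torch.logsumexp(logits, dim=1)
    for _ in range(hops):
        # Mix each node's energy with its neighbors' average energy.
        energy = alpha * energy + (1 - alpha) * (adj_norm @ energy)
    return energy
```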
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- How Useful are Gradients for OOD Detection Really? [5.459639971144757]
Out-of-distribution (OOD) detection is a critical challenge in deploying highly performant machine learning models in real-life applications.
We provide an in-depth analysis and comparison of gradient based methods for OOD detection.
We propose a general, non-gradient-based method for OOD detection that improves over previous baselines in both performance and computational efficiency.
arXiv Detail & Related papers (2022-05-20T21:10:05Z)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [12.341250124228859]
We propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets.
Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space.
We empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.
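To illustrate scoring against uni-dimensional class subspaces, here is a hedged sketch: each class's embeddings are summarized by their first singular vector, and a test embedding is scored by its best alignment. The per-class SVD summary and cosine scoring are assumptions about how such a 1-D subspace test could look, not RODD's exact recipe.

```python
import torch

def rodd_style_score(feat: torch.Tensor, class_features: list) -> float:
    """Best alignment of a sample with per-class 1-D feature subspaces.

    class_features: list of (n_c, d) ID embedding matrices, one per class,
    assumed to come from an L2-normalized contrastive encoder.
    Low alignment -> likely OOD.
    """
    f = torch.nn.functional.normalize(feat, dim=0)
    best = -1.0
    for feats in class_features:
        # First right-singular vector = principal direction of the class.
        _, _, Vh = torch.linalg.svd(feats, full_matrices=False)
        best = max(best, torch.abs(Vh[0] @ f).item())
    return best
```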
arXiv Detail & Related papers (2022-04-06T03:05:58Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection.
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild [15.548068221414384]
We present GradNorm, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space.
GradNorm demonstrates superior performance, reducing the average FPR95 by up to 10.89% compared to the previous best method.
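A compact sketch of the GradNorm-style score: backpropagate the KL divergence between the softmax prediction and a uniform distribution through the final linear layer, and use the L1 norm of the resulting weight gradient. This follows the published recipe as we understand it, though details here (single-sample input, sum reduction) are our own choices.

```python
import torch
import torch.nn.functional as F

def gradnorm_score(feat: torch.Tensor, fc: torch.nn.Linear) -> float:
    """L1 norm of last-layer gradients of KL(uniform || softmax).

    feat: (d,) penultimate feature for one sample; fc: final linear layer.
    Higher score -> more ID-like.
    """
    fc.zero_grad()
    feat = feat.detach()  # gradients flow only into the last layer
    logits = fc(feat)
    uniform = torch.full_like(logits, 1.0 / logits.numel())
    # F.kl_div(log_probs, probs) computes KL(probs || softmax prediction).
    loss = F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="sum")
    loss.backward()
    return fc.weight.grad.abs().sum().item()
```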
arXiv Detail & Related papers (2021-10-01T05:19:32Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with minimal adversarial perturbations.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples (sketched after this entry).
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
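A hedged sketch of a robust-training objective in this spirit: cross-entropy on adversarially perturbed inliers plus a push-to-uniform term on adversarially perturbed auxiliary outliers. The PGD settings, the `lam` weight, and the assumption of image inputs in [0, 1] are ours, not ALOE's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def aloe_style_loss(model, x_id, y_id, x_out,
                    eps=8 / 255, steps=5, lam=0.5):
    """CE on adversarial inliers + uniformity loss on adversarial outliers."""

    def to_uniform(out):
        # KL(uniform || softmax) up to an additive constant.
        return -F.log_softmax(out, dim=1).mean()

    def pgd(x, loss_fn):
        # Inner maximization: find the perturbation that worsens the loss.
        delta = torch.zeros_like(x)
        for _ in range(steps):
            delta.requires_grad_(True)
            grad, = torch.autograd.grad(loss_fn(model(x + delta)), delta)
            delta = (delta + (eps / steps) * grad.sign()).clamp(-eps, eps).detach()
        return x + delta

    x_id_adv = pgd(x_id, lambda out: F.cross_entropy(out, y_id))
    x_out_adv = pgd(x_out, to_uniform)
    ce = F.cross_entropy(model(x_id_adv), y_id)
    kl = to_uniform(model(x_out_adv))  # keep outlier predictions uniform
    return ce + lam * kl
```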
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.