GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds
- URL: http://arxiv.org/abs/2312.14427v1
- Date: Fri, 22 Dec 2023 04:28:43 GMT
- Title: GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds
- Authors: Mostafa ElAraby, Sabyasachi Sahoo, Yann Pequignot, Paul Novello, Liam Paull
- Abstract summary: Deep neural networks (DNNs) often make over-confident predictions on out-of-distribution (OOD) samples, posing risks in real-world deployments.
We introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of the gradient space.
We show that GROOD surpasses the established robustness of state-of-the-art baselines.
- Score: 12.727088216619386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) often fail silently with over-confident
predictions on out-of-distribution (OOD) samples, posing risks in real-world
deployments. Existing techniques predominantly emphasize either the feature
representation space or the gradient norms computed with respect to DNN
parameters, yet they overlook the intricate gradient distribution and the
topology of classification regions. To address this gap, we introduce
GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD),
a novel framework that relies on the discriminative power of gradient space to
distinguish between in-distribution (ID) and OOD samples. To build this space,
GROOD relies on class prototypes together with a prototype that specifically
captures OOD characteristics. Uniquely, our approach incorporates a targeted
mix-up operation at an early intermediate layer of the DNN to refine the
separation of gradient spaces between ID and OOD samples. We quantify OOD
detection efficacy using the distance to the nearest neighbor gradients derived
from the training set, yielding a robust OOD score. Experimental evaluations
substantiate that the introduction of targeted input mix-up amplifies the
separation between ID and OOD in the gradient space, yielding impressive
results across diverse datasets. Notably, when benchmarked against ImageNet-1k,
GROOD surpasses the established robustness of state-of-the-art baselines.
Through this work, we establish the utility of leveraging gradient spaces and
class prototypes for enhanced OOD detection for DNNs in image classification.
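The recipe above lends itself to a compact illustration. The following is a minimal sketch (toy model and data, hypothetical helper names; not the authors' released implementation): compute a per-sample gradient of a classification loss, using the predicted class as a pseudo-label, and score a test input by its distance to the nearest gradients collected from the training set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for a trained feature extractor and classification head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 10)

def grad_embedding(x):
    """Flattened gradient of the loss w.r.t. the head weights for one sample,
    using the predicted class as a pseudo-label (no true label needed)."""
    logits = head(backbone(x.unsqueeze(0)))
    pseudo = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pseudo)
    (g,) = torch.autograd.grad(loss, head.weight)
    return g.flatten()

# Bank of gradient embeddings computed from (here: random) training inputs.
train_x = torch.randn(200, 32)
bank = torch.stack([grad_embedding(x) for x in train_x])

def ood_score(x, k=5):
    """Distance to the k-th nearest training gradient: larger => more OOD-like."""
    dists = torch.cdist(grad_embedding(x).unsqueeze(0), bank).squeeze(0)
    return dists.topk(k, largest=False).values[-1].item()

print(ood_score(torch.randn(32)))       # in-distribution-like input
print(ood_score(10 * torch.randn(32)))  # far-away input should score higher
```

Note that GROOD additionally shapes the gradient space with class prototypes, a dedicated OOD prototype, and an early-layer mix-up, none of which is modeled in this sketch.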
Related papers
- Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection [15.184096796229115]
We propose a post-hoc method, Perturbation-Rectified OOD detection (PRO), based on the insight that prediction confidence for OOD inputs is more susceptible to reduction under perturbation than in-distribution (IND) inputs.
On a CIFAR-10 model with adversarial training, PRO effectively detects near-OOD inputs, achieving a reduction of more than 10% on FPR@95 compared to state-of-the-art methods.
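A minimal sketch of that insight (toy classifier, random rather than rectified perturbations; not the paper's method):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))  # toy classifier

@torch.no_grad()
def confidence(x):
    """Maximum softmax probability."""
    return model(x).softmax(dim=-1).max(dim=-1).values

@torch.no_grad()
def confidence_drop(x, eps=0.05, n=8):
    """Average drop in confidence under small random input perturbations.
    A larger drop suggests a less stable, more OOD-like input."""
    noisy = x + eps * torch.randn(n, *x.shape)
    return (confidence(x) - confidence(noisy).mean()).item()

print(confidence_drop(torch.randn(32)))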
arXiv Detail & Related papers (2025-03-24T15:32:33Z)
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks treat samples with novel labels as OOD data.
Some marginal OOD samples are actually semantically close to in-distribution (ID) samples, which makes deciding whether a sample is OOD a Sorites paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, which creates a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
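A rough sketch of the mechanism (toy model; the actual method uses structured weight perturbations and density estimates computed on the resulting representation):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # toy feature extractor
fc = nn.Linear(64, 10)                                  # class projections

@torch.no_grad()
def perturbed_logits(x, n=16, sigma=0.1):
    """Logits from n randomly perturbed copies of the final layer, stacked into
    an (n, num_classes) representation instead of a single logit vector."""
    feats = backbone(x)
    outs = [feats @ (fc.weight + sigma * torch.randn_like(fc.weight)).t() + fc.bias
            for _ in range(n)]
    return torch.stack(outs)

@torch.no_grad()
def score(x):
    """Toy score on the richer representation: mean max-logit (higher => more ID-like)."""
    return perturbed_logits(x).max(dim=-1).values.mean().item()

print(score(torch.randn(32)))
```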
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
- Optimizing OOD Detection in Molecular Graphs: A Novel Approach with Diffusion Models [71.39421638547164]
We propose to detect OOD molecules by adopting an auxiliary diffusion model-based framework, which compares similarities between input molecules and reconstructed graphs.
Because the generator is biased towards reconstructing ID training samples, OOD molecules receive much lower similarity scores, which facilitates detection.
Our research pioneers Prototypical Graph Reconstruction for Molecular OOD Detection, dubbed PGR-MOOD, which hinges on three innovations.
arXiv Detail & Related papers (2024-04-24T03:25:53Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
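Idea (2) can be illustrated with a small sketch (a CutMix-style overlay with a fixed box; the paper's augmentation policy is more involved):

```python
import torch

torch.manual_seed(0)

def overlay(tail_img, ood_img, box=(8, 8, 24, 24)):
    """Paste a crop of a context-limited tail-class image onto a context-rich
    OOD image, giving the tail class new background context."""
    x1, y1, x2, y2 = box
    out = ood_img.clone()
    out[:, y1:y2, x1:x2] = tail_img[:, y1:y2, x1:x2]
    return out

tail = torch.rand(3, 32, 32)  # toy tail-class image (C, H, W)
ood = torch.rand(3, 32, 32)   # toy OOD image
print(overlay(tail, ood).shape)  # torch.Size([3, 32, 32])
```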
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Nearest Neighbor Guidance for Out-of-Distribution Detection [18.851275688720108]
We propose Nearest Neighbor Guidance (NNGuide) for detecting out-of-distribution (OOD) samples.
NNGuide reduces the overconfidence of OOD samples while preserving the fine-grained capability of the classifier-based score.
Our results demonstrate that NNGuide provides a significant performance improvement on the base detection scores.
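A simplified sketch of the guidance idea (toy backbone and random feature bank; details such as confidence-scaled bank features are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 10)

# Bank of (here: random) ID training features, L2-normalized for cosine similarity.
with torch.no_grad():
    bank = F.normalize(backbone(torch.randn(500, 32)), dim=1)

@torch.no_grad()
def nn_guided_score(x, k=10):
    """Energy-based base score scaled by mean cosine similarity to the k nearest
    ID features: OOD inputs get a lower base score and weaker neighbor support."""
    f = backbone(x.unsqueeze(0))
    energy = torch.logsumexp(head(f), dim=1)   # base confidence score
    sims = F.normalize(f, dim=1) @ bank.t()    # cosine similarities to the bank
    guidance = sims.topk(k, dim=1).values.mean()
    return (energy * guidance).item()

print(nn_guided_score(torch.randn(32)))
```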
arXiv Detail & Related papers (2023-09-26T12:40:35Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis [21.023001428704085]
We propose a novel feature-space OOD detection score based on class-specific and class-agnostic information.
The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark.
arXiv Detail & Related papers (2023-03-14T00:13:57Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
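The underlying energy score, plus a crude neighbor-averaging step standing in for GNNSafe's propagation, can be sketched as follows (toy logits and graph):

```python
import torch

torch.manual_seed(0)

# Toy setup: classifier logits for 6 graph nodes and a symmetric adjacency matrix.
logits = torch.randn(6, 4)
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(1.0)

def energy_scores(logits):
    """Negative free energy per node: higher => more ID-like."""
    return torch.logsumexp(logits, dim=1)

def propagate(scores, adj, steps=2, alpha=0.5):
    """Blend each node's score with the mean of its neighbors' scores, so that
    confident ID regions reinforce their surroundings."""
    deg = adj.sum(dim=1)
    for _ in range(steps):
        scores = alpha * scores + (1 - alpha) * (adj @ scores) / deg
    return scores

print(propagate(energy_scores(logits), adj))
```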
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- How Useful are Gradients for OOD Detection Really? [5.459639971144757]
Out-of-distribution (OOD) detection is a critical challenge in deploying highly performant machine learning models in real-life applications.
We provide an in-depth analysis and comparison of gradient based methods for OOD detection.
We propose a general, non-gradient-based OOD detection method that improves over previous baselines in both performance and computational efficiency.
arXiv Detail & Related papers (2022-05-20T21:10:05Z)
- How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [22.519572587827213]
CIDER is a representation learning framework that exploits hyperspherical embeddings for OOD detection.
CIDER establishes superior performance, outperforming the latest rival by 19.36% in FPR95.
arXiv Detail & Related papers (2022-03-08T23:44:01Z)
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When some test samples are drawn from a distribution far away from that of the training samples, the trained network tends to make high-confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Joint Distribution across Representation Space for Out-of-Distribution Detection [16.96466730536722]
We present a novel generative outlook on in-distribution data, treating the latent features produced at each hidden layer as a joint distribution across representation spaces.
We first fit a Gaussian Mixture Model (GMM) to the in-distribution latent features of each hidden layer, and then connect the GMMs via the transition probabilities of the inference traces.
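A per-layer sketch of the first step (synthetic features, scikit-learn GMM; the cross-layer transition modeling is omitted):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for one hidden layer's ID features.
id_feats = rng.normal(size=(500, 16))

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(id_feats)

def id_score(feats):
    """Per-sample log-likelihood under the ID feature GMM: lower => more OOD-like."""
    return gmm.score_samples(feats)

print(id_score(rng.normal(size=(2, 16))))           # ID-like features
print(id_score(rng.normal(loc=8.0, size=(2, 16))))  # shifted features score lower
```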
arXiv Detail & Related papers (2021-03-23T06:39:29Z)
- Out-Of-Distribution Detection With Subspace Techniques And Probabilistic Modeling Of Features [7.219077740523682]
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs).
Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap, method to detect OOD samples in DNNs.
We apply linear statistical dimensionality reduction techniques and nonlinear manifold-learning techniques on the high-dimensional features in order to capture the true subspace spanned by the features.
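A linear instance of this idea is easy to sketch (synthetic features, PCA as the dimensionality reduction; the paper also considers nonlinear manifold learning):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic high-dimensional features lying near an 8-dimensional subspace.
basis = rng.normal(size=(8, 64))
id_feats = rng.normal(size=(500, 8)) @ basis

pca = PCA(n_components=8).fit(id_feats)

def subspace_distance(feats):
    """Reconstruction error after projecting onto the ID subspace:
    larger => farther from the training manifold, more OOD-like."""
    recon = pca.inverse_transform(pca.transform(feats))
    return np.linalg.norm(feats - recon, axis=1)

print(subspace_distance(id_feats[:2]))              # near zero
print(subspace_distance(rng.normal(size=(2, 64))))  # off-subspace, larger
```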
arXiv Detail & Related papers (2020-12-08T07:07:11Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)