WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis
- URL: http://arxiv.org/abs/2303.07543v4
- Date: Wed, 30 Aug 2023 03:12:34 GMT
- Title: WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis
- Authors: Yiye Chen, Yunzhi Lin, Ruinian Xu, Patricio A. Vela
- Abstract summary: We propose a novel feature-space OOD detection score based on class-specific and class-agnostic information.
The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark.
- Score: 21.023001428704085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are susceptible to generating overconfident yet
erroneous predictions when presented with data beyond known concepts. This
challenge underscores the importance of detecting out-of-distribution (OOD)
samples in the open world. In this work, we propose a novel feature-space OOD
detection score based on class-specific and class-agnostic information.
Specifically, the approach utilizes Whitened Linear Discriminant Analysis to
project features into two subspaces - the discriminative and residual subspaces
- for which the in-distribution (ID) classes are maximally separated and
closely clustered, respectively. The OOD score is then determined by combining
the deviations of the input data from the ID patterns in both subspaces. The
efficacy of our method, named WDiscOOD, is verified on the large-scale
ImageNet-1k benchmark, with six OOD datasets that cover a variety of
distribution shifts. WDiscOOD demonstrates superior performance on deep
classifiers with diverse backbone architectures, including CNNs and vision
transformers. Furthermore, we show that WDiscOOD more effectively detects
novel concepts in representation spaces trained with contrastive objectives,
including supervised contrastive loss and multi-modality contrastive loss.
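For intuition, here is a minimal sketch of the two-subspace scoring idea described above. It is not the authors' released implementation: the whitening via within-class scatter, the number of discriminative directions `k`, and the additive combination weight `alpha` are illustrative assumptions.

```python
# Minimal sketch of the WDiscOOD idea (not the authors' code).
# Assumption: `k` discriminative directions are kept; at most C - 1 are
# informative, since the between-class scatter has rank <= C - 1.
import numpy as np

def fit_wdisc(feats, y, k=100, eps=1e-6):
    classes = np.unique(y)
    mu = feats.mean(0)
    # Within-class scatter, regularized, used to whiten the feature space.
    Sw = sum(np.cov(feats[y == c].T) for c in classes)
    evals, evecs = np.linalg.eigh(Sw + eps * np.eye(feats.shape[1]))
    W = evecs / np.sqrt(evals)                    # whitening transform (D, D)
    Z = (feats - mu) @ W
    # Top eigenvectors of the between-class scatter span the discriminative
    # subspace; the remaining directions form the residual subspace.
    means = np.stack([Z[y == c].mean(0) for c in classes])
    _, V = np.linalg.eigh(np.cov(means.T))
    V_disc, V_res = V[:, -k:], V[:, :-k]
    centroids = means @ V_disc                    # class centroids, (C, k)
    return mu, W, V_disc, V_res, centroids

def wdisc_score(x, mu, W, V_disc, V_res, centroids, alpha=1.0):
    z = (x - mu) @ W
    d_disc = np.linalg.norm(z @ V_disc - centroids, axis=1).min()  # nearest class
    d_res = np.linalg.norm(z @ V_res)             # deviation from the ID cluster
    return d_disc + alpha * d_res                 # higher = more likely OOD
```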
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., applying different common corruptions in the input space.
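A hedged sketch of how an input might be "expanded" with common corruptions before scoring; the specific corruptions and the mean aggregation over views are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def corrupted_views(x):
    # x: (1, C, H, W) image tensor; three simple corruptions for illustration.
    noisy = x + 0.05 * torch.randn_like(x)
    blurred = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    dimmed = 0.7 * x
    return [x, noisy, blurred, dimmed]

@torch.no_grad()
def expanded_msp_score(model, x):
    # Maximum softmax probability averaged over the original and its corruptions.
    msp = [F.softmax(model(v), dim=1).max(dim=1).values for v in corrupted_views(x)]
    return torch.stack(msp).mean()                # lower = more likely OOD
```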
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection [21.357620914949624]
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
We propose a simple but effective loss called OrthLoss, which binds the features of OOD data to a subspace orthogonal to the principal subspace of ID features formed by neural collapse (NC).
Our detection achieves SOTA performance on CIFAR benchmarks without any additional data augmentation or sampling.
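One plausible reading of this separation, sketched under assumptions: penalize the energy of outlier features inside the ID principal subspace, pushing them toward its orthogonal complement. The basis `P` and the normalized squared-projection penalty are illustrative, not the paper's exact OrthLoss.

```python
import torch
import torch.nn.functional as F

def orth_loss(outlier_feats, P):
    # P: (D, C) orthonormal basis of the ID principal subspace, e.g. the top
    # eigenvectors of the ID feature covariance. Penalizing the squared
    # projection pushes outlier features toward the orthogonal complement.
    f = F.normalize(outlier_feats, dim=1)         # (N, D)
    return (f @ P).pow(2).sum(dim=1).mean()
```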
arXiv Detail & Related papers (2024-05-28T04:24:38Z)
- WeiPer: OOD Detection using Weight Perturbations of Class Projections [11.130659240045544]
We introduce perturbations of the class projections in the final fully connected layer, creating a richer representation of the input.
We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework.
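A rough sketch of scoring with weight-perturbed class projections; the Gaussian perturbation scale and the averaged maximum-softmax readout are assumptions rather than the paper's exact construction.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def weiper_score(feats, W, b, n_perturb=10, scale=0.1):
    # feats: (N, D) penultimate features; W: (K, D), b: (K,) from the final
    # linear layer. Each pass uses a randomly perturbed copy of the weights.
    scores = []
    for _ in range(n_perturb):
        W_p = W + scale * W.std() * torch.randn_like(W)   # perturbed projections
        logits = feats @ W_p.t() + b                      # (N, K)
        scores.append(F.softmax(logits, dim=1).max(dim=1).values)
    return torch.stack(scores).mean(dim=0)                # lower = more likely OOD
```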
arXiv Detail & Related papers (2024-05-27T13:38:28Z)
- Out-of-distribution detection based on subspace projection of high-dimensional features output by the last convolutional layer [5.902332693463877]
This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features.
Our key idea is to project these high-dimensional features into two specific feature subspaces, trained with Predefined Evenly-Distribution Class Centroids (PEDCC)-Loss.
Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning.
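A loose sketch of distance-to-centroid scoring with predefined class centroids; here the evenly-distributed centroids are approximated by a random orthonormal set, and the paper's two trained subspaces are reduced to a single projection for brevity.

```python
import numpy as np

def predefined_centroids(num_classes, dim, seed=0):
    # Approximation: a random orthonormal set stands in for the evenly
    # distributed centroids of PEDCC (requires dim >= num_classes here).
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, num_classes)))
    return Q.T                                    # (K, dim), unit-norm rows

def pedcc_score(feat, centroids):
    f = feat / np.linalg.norm(feat)
    return np.linalg.norm(f - centroids, axis=1).min()   # higher = more likely OOD
```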
arXiv Detail & Related papers (2024-05-02T18:33:02Z)
- GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds [12.727088216619386]
Out-of-distribution (OOD) inputs to deep neural networks (DNNs) can pose risks in real-world deployments.
We introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space.
We show that GROOD surpasses the established robustness of state-of-the-art baselines.
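A hedged sketch of gradient-space evidence in this spirit: take the gradient of the loss at the model's own prediction and use its norm as a score. The pseudo-label choice and the plain norm readout are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def gradient_score(model, x, last_layer):
    # Gradient of the loss at the model's own prediction, w.r.t. the final
    # layer's weights; its norm is used as OOD evidence here (an assumption).
    logits = model(x)                             # (1, K)
    pseudo = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pseudo)
    grad, = torch.autograd.grad(loss, last_layer.weight)
    return grad.norm()
```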
arXiv Detail & Related papers (2023-12-22T04:28:43Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
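The second idea can be sketched as a simple paste operation; the patch scale and random placement below are assumed details, not the paper's exact augmentation.

```python
import torch
import torch.nn.functional as F

def overlay_tail_on_ood(tail_img, ood_img, scale=0.5):
    # Both images: (C, H, W). Shrink the tail-class image and paste it onto a
    # random location of the context-rich OOD image; the tail label is kept.
    C, H, W = ood_img.shape
    h, w = int(H * scale), int(W * scale)
    patch = F.interpolate(tail_img.unsqueeze(0), size=(h, w),
                          mode="bilinear", align_corners=False).squeeze(0)
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()
    out = ood_img.clone()
    out[:, top:top + h, left:left + w] = patch
    return out
```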
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
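A coarse sketch of combining global and local evidence, assuming class prototypes computed from ID training features and an equal weighting of the two distances (both are illustrative choices, not MODE's exact scheme).

```python
import torch

def multiscale_score(feat_map, global_protos, local_protos):
    # feat_map: (C, H, W) from a backbone; prototypes: (K, C) each.
    g = feat_map.mean(dim=(1, 2))                           # global descriptor
    d_global = torch.cdist(g[None], global_protos).min()
    regions = feat_map.flatten(1).t()                       # (H*W, C) local descriptors
    d_local = torch.cdist(regions, local_protos).min()      # best local match
    return d_global + d_local                               # higher = more likely OOD
```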
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in such an under-developed area.
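The core score is the standard energy E(x) = -logsumexp over the logits; a minimal sketch with an optional neighbor-propagation step follows (the step count and mixing weight are assumptions).

```python
import torch

def energy_scores(logits, adj=None, steps=2, alpha=0.5):
    # logits: (N, K) node logits; adj: (N, N) row-normalized adjacency.
    e = -torch.logsumexp(logits, dim=1)           # node-wise energy, higher = more OOD
    if adj is not None:
        for _ in range(steps):
            e = alpha * e + (1 - alpha) * adj @ e # smooth energy along edges
    return e
```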
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
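A hedged sketch of a local adversarial attack used to synthesize failure cases for training an observer: an FGSM-style perturbation restricted to a random patch. Epsilon and patch size are arbitrary here; the ObsNet observer itself is a separate error-prediction network.

```python
import torch
import torch.nn.functional as F

def local_adversarial_attack(model, x, target, eps=0.03, patch=64):
    # x: (1, C, H, W) input image; target: (1, H, W) segmentation labels.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)      # model(x): (1, K, H, W)
    loss.backward()
    mask = torch.zeros_like(x)                    # restrict the attack to a patch
    _, _, H, W = x.shape
    top = torch.randint(0, H - patch + 1, (1,)).item()
    left = torch.randint(0, W - patch + 1, (1,)).item()
    mask[:, :, top:top + patch, left:left + patch] = 1.0
    return (x + eps * mask * x.grad.sign()).detach()
```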
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
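Schematically, such a contrastive instance pair can be scored by a bilinear discriminator that measures agreement between a node and its pooled neighborhood embedding; the shapes and the mean-pool readout below are illustrative assumptions, not the paper's exact model.

```python
import torch

class PairDiscriminator(torch.nn.Module):
    # Scores agreement between a target node and its local neighborhood;
    # persistently low agreement across sampled pairs suggests an anomaly.
    def __init__(self, dim):
        super().__init__()
        self.bilinear = torch.nn.Bilinear(dim, dim, 1)

    def forward(self, node_emb, neighbor_embs):
        # node_emb: (1, dim); neighbor_embs: (M, dim) from a sampled subgraph.
        ctx = neighbor_embs.mean(dim=0, keepdim=True)       # pooled local structure
        return torch.sigmoid(self.bilinear(node_emb, ctx))  # agreement in [0, 1]
```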
arXiv Detail & Related papers (2021-02-27T03:17:20Z)