Out-of-distribution detection based on subspace projection of high-dimensional features output by the last convolutional layer
- URL: http://arxiv.org/abs/2405.01662v1
- Date: Thu, 2 May 2024 18:33:02 GMT
- Title: Out-of-distribution detection based on subspace projection of high-dimensional features output by the last convolutional layer
- Authors: Qiuyu Zhu, Yiwei He
- Abstract summary: This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features.
Our key idea is to project these high-dimensional features into two specific feature subspaces, trained with Predefined Evenly-Distributed Class Centroids (PEDCC)-Loss.
Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning.
- Score: 5.902332693463877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection, crucial for reliable pattern classification, discerns whether a sample originates outside the training distribution. This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features. Our key idea is to project these high-dimensional features into two specific feature subspaces, leveraging the dimensionality reduction capacity of the network's linear layers, trained with Predefined Evenly-Distributed Class Centroids (PEDCC)-Loss. This involves calculating the cosines of three projection angles and the norm values of features, thereby identifying distinctive information for in-distribution (ID) and OOD data, which assists in OOD detection. Building upon this, we have modified the batch normalization (BN) and ReLU layer preceding the fully connected layer, diminishing their impact on the output feature distributions and thereby widening the distribution gap between ID and OOD data features. Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning. Extensive experiments on several benchmark datasets demonstrate that our approach delivers state-of-the-art performance. Our code is available at https://github.com/Hewell0/ProjOOD.
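The abstract names the ingredients of the score (cosines of three projection angles plus feature norms) but not the exact combination rule. The sketch below is a minimal, hypothetical rendering of the flavor in PyTorch: pooled last-conv features are projected through a trained linear layer and scored by their best cosine against a PEDCC centroid, weighted by the feature norm. The names `linear` and `centroids` and the single cosine-times-norm rule are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def ood_score(features, linear, centroids):
    """Hypothetical projection-based OOD score, not the paper's exact rule.

    features:  (B, D) pooled outputs of the last convolutional layer
    linear:    torch.nn.Linear mapping D -> d (the trained subspace projection)
    centroids: (C, d) predefined evenly-distributed class centroids
    """
    z = linear(features)                               # project into the PEDCC subspace
    cos = F.cosine_similarity(z.unsqueeze(1),          # cosine to every class centroid
                              centroids.unsqueeze(0), dim=-1)
    max_cos = cos.max(dim=1).values                    # alignment with the closest centroid
    norm = features.norm(dim=1)                        # ID features tend to have larger norms
    return max_cos * norm                              # higher score -> more likely ID
```

At test time a threshold on this score separates ID from OOD samples; the paper's BN/ReLU modification before the fully connected layer would additionally widen the gap this kind of score measures.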
Related papers
- Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection [21.357620914949624]
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
We propose a simple but effective loss called OrthLoss, which binds the features of OOD data to a subspace orthogonal to the principal subspace of ID features formed by neural collapse (NC).
Our detection achieves SOTA performance on CIFAR benchmarks without any additional data augmentation or sampling.
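A minimal sketch of the orthogonality idea, assuming the principal subspace is estimated by an SVD of ID features and that the loss penalizes OOD feature energy inside it; the function names and the squared-projection penalty are illustrative, not the paper's definition.

```python
import torch

def principal_subspace(id_features, k):
    """Top-k principal directions of centered ID features via SVD."""
    centered = id_features - id_features.mean(dim=0, keepdim=True)
    _, _, vt = torch.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                                    # (D, k) orthonormal basis

def orth_loss(ood_features, basis):
    """Illustrative penalty on OOD feature energy inside the ID subspace."""
    proj = ood_features @ basis                        # coordinates in the ID subspace
    return (proj ** 2).sum(dim=1).mean()               # push projections toward zero
```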
arXiv Detail & Related papers (2024-05-28T04:24:38Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
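Idea (2) can be pictured as compositing a tail-class image over an OOD background so the tail object inherits richer context. The sketch below is a guessed minimal alpha-blend version; the paper's actual overlay scheme may differ.

```python
import torch

def overlay_tail_on_ood(tail_img, ood_img, alpha=0.7):
    """Illustrative overlay augmentation: blend a context-limited tail-class
    image (C, H, W, values in [0, 1]) over a context-rich OOD image of the
    same shape; alpha keeps the tail-class object dominant."""
    return alpha * tail_img + (1.0 - alpha) * ood_img
```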
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Classifier-head Informed Feature Masking and Prototype-based Logit Smoothing for Out-of-Distribution Detection [27.062465089674763]
Out-of-distribution (OOD) detection is essential when deploying neural networks in the real world.
One main challenge is that neural networks often make overconfident predictions on OOD data.
We propose an effective post-hoc OOD detection method based on a new feature masking strategy and a novel logit smoothing strategy.
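One plausible reading of classifier-head-informed masking, sketched under assumptions: keep only the feature dimensions the head weights most strongly for the predicted class, then recompute the logits. The keep ratio and the thresholding rule below are hypothetical.

```python
import torch

def masked_logits(features, head_weight, keep_ratio=0.5):
    """Illustrative feature masking guided by classifier-head weights.

    features:    (B, D) penultimate-layer features
    head_weight: (C, D) fully connected classification weights
    """
    logits = features @ head_weight.T
    pred = logits.argmax(dim=1)                        # predicted class per sample
    w = head_weight[pred]                              # (B, D) class-specific weights
    k = int(features.shape[1] * keep_ratio)
    thresh = w.topk(k, dim=1).values[:, -1:]           # per-sample weight cutoff
    mask = (w >= thresh).float()                       # keep strongly weighted dims
    return (features * mask) @ head_weight.T           # logits from masked features
```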
arXiv Detail & Related papers (2023-10-27T12:42:17Z)
- Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models [68.12229916000584]
We develop an out-of-distribution (OOD) benchmark termed Do-GOOD for fine-grained analysis of document image-related tasks.
We then evaluate the robustness of, and perform a fine-grained analysis on, 5 recent VDU pre-trained models and 2 typical OOD generalization algorithms.
arXiv Detail & Related papers (2023-06-05T06:50:42Z)
- WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis [21.023001428704085]
We propose a novel feature-space OOD detection score based on class-specific and class-agnostic information.
The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark.
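A simplified stand-in for a whitened discriminant score, under the assumption that features are whitened with a shared within-class covariance and scored by distance to the nearest class mean; WDiscOOD itself combines discriminative and residual subspaces, which this sketch omits.

```python
import torch

def whitened_score(features, class_means, within_cov, eps=1e-5):
    """Negative nearest-class-mean distance in whitened feature space.

    features: (B, D), class_means: (C, D), within_cov: (D, D)
    """
    d = within_cov.shape[0]
    cov = within_cov + eps * torch.eye(d)              # regularize before inverting
    w = torch.linalg.cholesky(torch.linalg.inv(cov))   # whitening transform
    zf, zm = features @ w, class_means @ w             # whitened features and means
    dist = torch.cdist(zf, zm)                         # (B, C) Mahalanobis-style distances
    return -dist.min(dim=1).values                     # higher -> more ID-like
```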
arXiv Detail & Related papers (2023-03-14T00:13:57Z)
- Beyond Mahalanobis-Based Scores for Textual OOD Detection [32.721317681946246]
We introduce TRUSTED, a new OOD detector for classifiers based on Transformer architectures that meets operational requirements.
The efficiency of TRUSTED relies on the fruitful idea that all hidden layers carry relevant information to detect OOD examples.
Our experiments involve 51k model configurations, including various checkpoints, seeds, and datasets, and demonstrate that TRUSTED achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-24T10:51:58Z)
- Exploring Covariate and Concept Shift for Detection and Calibration of Out-of-Distribution Data [77.27338842609153]
Our characterization reveals that sensitivity to each type of shift is important to the detection and confidence calibration of OOD data.
We propose a geometrically-inspired method to improve OOD detection under both shifts with only in-distribution data.
We are the first to propose a method that works well for both OOD detection and calibration under different types of shift.
arXiv Detail & Related papers (2021-10-28T15:42:55Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature space of each layer.
Using an attentive set encoder, we propose to meta-learn either diagonal or diagonal-plus-low-rank factors to efficiently construct task-specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
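The diagonal-plus-low-rank construction is what keeps a task-specific covariance cheap to build and invert; a minimal sketch, with shapes and parameter names as assumptions:

```python
import torch

def build_covariance(log_diag, factors):
    """Task-specific covariance as diagonal plus low rank (illustrative).

    log_diag: (D,) unconstrained diagonal parameters (exp keeps them positive)
    factors:  (D, r) meta-learned low-rank factors with r << D

    The Woodbury identity inverts this form in O(D * r^2) rather than O(D^3),
    which is the usual reason to choose it.
    """
    return torch.diag(log_diag.exp()) + factors @ factors.T  # (D, D), positive definite
```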
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z)
- Out-Of-Distribution Detection With Subspace Techniques And Probabilistic Modeling Of Features [7.219077740523682]
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs).
Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap, method to detect OOD samples in DNNs.
We apply linear statistical dimensionality reduction techniques and nonlinear manifold-learning techniques to the high-dimensional features in order to capture the true subspace spanned by the features.
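A common concrete instance of the linear case is PCA on ID features with reconstruction error as the OOD score; the sketch below shows that instance and is not necessarily the paper's exact pipeline.

```python
import torch

def fit_subspace(id_features, k=64):
    """PCA of ID features: mean and top-k principal directions via SVD."""
    mean = id_features.mean(dim=0)
    _, _, vt = torch.linalg.svd(id_features - mean, full_matrices=False)
    return mean, vt[:k].T                              # (D,), (D, k) orthonormal basis

def reconstruction_error(features, mean, basis):
    """Distance from each feature to its projection onto the ID subspace;
    larger error suggests the sample is OOD."""
    centered = features - mean
    recon = (centered @ basis) @ basis.T               # project, then reconstruct
    return (centered - recon).norm(dim=1)
```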
arXiv Detail & Related papers (2020-12-08T07:07:11Z)