Frustratingly Easy Feature Reconstruction for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2509.06988v1
- Date: Tue, 02 Sep 2025 13:24:40 GMT
- Title: Frustratingly Easy Feature Reconstruction for Out-of-Distribution Detection
- Authors: Yingsheng Wang, Shuo Lu, Jian Liang, Aihua Zheng, Ran He
- Abstract summary: Out-of-distribution (OOD) detection helps models identify data outside the training categories, which is crucial for security applications. While feature-based post-hoc methods address this by evaluating data differences in the feature space without changing network parameters, they often require access to training data. We propose a simple yet effective post-hoc method, termed Classifier-based Feature Reconstruction (ClaFR), from the perspective of subspace projection.
- Score: 39.00123727894414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection helps models identify data outside the training categories, which is crucial for security applications. While feature-based post-hoc methods address this by evaluating data differences in the feature space without changing network parameters, they often require access to training data, which may not be feasible in scenarios where data privacy is a concern. In this paper, we propose a simple yet effective post-hoc method, termed Classifier-based Feature Reconstruction (ClaFR), from the perspective of subspace projection. It first performs an orthogonal decomposition of the classifier's weights to extract the class-known subspace, then maps the original data features into this subspace to obtain new data representations. Subsequently, the OOD score is determined by calculating the feature reconstruction error of the data within the subspace. Compared to existing OOD detection algorithms, our method does not require access to training data while achieving leading performance on multiple OOD benchmarks. Our code is released at https://github.com/Aie0923/ClaFR.
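The abstract describes a three-step recipe: orthogonally decompose the classifier's weights to obtain a class-known subspace, project test features onto that subspace, and score each sample by its reconstruction error. Below is a minimal NumPy sketch of that recipe; the function name, the use of an SVD for the decomposition, and the optional rank truncation are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Hedged sketch of the ClaFR recipe described in the abstract; details such as
# the SVD-based decomposition and the rank truncation are assumptions.
import numpy as np

def clafr_score(features, classifier_weights, rank=None):
    """OOD score = reconstruction error after projecting features onto the
    subspace spanned by the classifier's weight vectors (higher -> more OOD)."""
    # Orthogonal decomposition of the C x D weight matrix: the right singular
    # vectors give an orthonormal basis of the "class-known" subspace.
    _, _, vt = np.linalg.svd(classifier_weights, full_matrices=False)
    if rank is not None:
        vt = vt[:rank]
    basis = vt.T                              # D x r orthonormal basis
    # Map features into the subspace and reconstruct them in the original space.
    reconstructed = features @ basis @ basis.T
    # Reconstruction error: in-distribution features should lie close to the subspace.
    return np.linalg.norm(features - reconstructed, axis=1)

# Hypothetical usage with random stand-in data.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 512))                # classifier weights: 10 classes, 512-d features
z = rng.normal(size=(4, 512))                 # penultimate-layer features of 4 test inputs
scores = clafr_score(z, W)                    # threshold these to flag OOD inputs
```

Note that nothing in this sketch touches the training data: the only inputs are the frozen classifier weights and the test-time features, which matches the privacy motivation in the abstract.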
Related papers
- Improving Out-of-Distribution Detection via Dynamic Covariance Calibration [12.001290283557466]
Out-of-Distribution (OOD) detection is essential for the trustworthiness of AI systems. We argue that the influence of ill-distributed samples can be corrected by dynamically adjusting the prior geometry. Our approach significantly enhances OOD detection across various models.
arXiv Detail & Related papers (2025-06-11T05:05:26Z) - Buffer-free Class-Incremental Learning with Out-of-Distribution Detection [17.67144692440415]
Class-incremental learning (CIL) poses significant challenges in open-world scenarios. We present an in-depth analysis of post-hoc OOD detection methods and investigate their potential to eliminate the need for a memory buffer. We show that this buffer-free approach achieves comparable or superior performance to buffer-based methods, both in terms of class-incremental learning and the rejection of unknown samples.
arXiv Detail & Related papers (2025-05-29T13:01:00Z) - PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks [64.90981115460937]
This paper explores inference-time data leakage risks of deep neural networks (NNs). We propose a novel backward feature inversion method, PEEL, which can effectively recover block-wise input features from the intermediate output of residual NNs. Our results show that PEEL outperforms state-of-the-art recovery methods by an order of magnitude when evaluated by mean squared error (MSE).
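The summary above only states that block-wise inputs are recovered from a residual network's intermediate outputs. One elementary way to invert a residual block y = x + F(x), assuming the branch F is contractive, is fixed-point iteration; the sketch below illustrates that generic idea and is not PEEL's actual algorithm.

```python
# Generic residual-block inversion by fixed-point iteration (illustrative only;
# the branch, iteration count, and initialisation are assumptions).
import numpy as np

def invert_residual_block(branch_fn, y, n_iters=50):
    """Recover x from y = x + branch_fn(x) by iterating x <- y - branch_fn(x)."""
    x = y.copy()
    for _ in range(n_iters):
        x = y - branch_fn(x)
    return x

# Toy usage with a contractive branch so the iteration converges.
rng = np.random.default_rng(0)
x_true = rng.normal(size=(4, 16))
branch = lambda x: 0.5 * np.tanh(x)
y = x_true + branch(x_true)
x_rec = invert_residual_block(branch, y)      # x_rec approximates x_true
```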
arXiv Detail & Related papers (2025-04-08T20:11:05Z) - EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
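As a rough illustration of idea (2), the sketch below pastes a downscaled tail-class image onto an OOD image so the sample keeps its tail-class label while borrowing context from the OOD background; the patch scale, random placement, and nearest-neighbour downscaling are assumptions made to keep it dependency-free.

```python
# Illustrative tail-class-onto-OOD overlay augmentation (details are assumptions).
import numpy as np

def overlay_on_ood(tail_img, ood_img, scale=0.5, rng=None):
    """Paste a downscaled tail-class image at a random position on an OOD image.
    Both images are H x W x C float arrays; the OOD image is assumed to be at
    least as large as the downscaled patch."""
    rng = rng or np.random.default_rng()
    step = max(1, int(round(1.0 / scale)))
    patch = tail_img[::step, ::step]          # naive nearest-neighbour downscaling
    ph, pw = patch.shape[:2]
    out = ood_img.copy()
    top = rng.integers(0, out.shape[0] - ph + 1)
    left = rng.integers(0, out.shape[1] - pw + 1)
    out[top:top + ph, left:left + pw] = patch
    return out                                # the training label stays the tail class
```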
arXiv Detail & Related papers (2023-12-14T13:47:13Z) - DST-Det: Simple Dynamic Self-Training for Open-Vocabulary Object Detection [72.25697820290502]
This work introduces a straightforward and efficient strategy to identify potential novel classes through zero-shot classification.
We refer to this approach as the self-training strategy, which enhances recall and accuracy for novel classes without requiring extra annotations, datasets, or re-training.
Empirical evaluations on three datasets, including LVIS, V3Det, and COCO, demonstrate significant improvements over the baseline performance.
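One plausible reading of the zero-shot mining step: compare region-proposal embeddings against text embeddings of candidate novel class names and keep only confident matches as pseudo labels for self-training. The cosine-similarity scorer and the threshold in this sketch are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of mining novel-class pseudo labels with a zero-shot classifier.
import numpy as np

def mine_novel_pseudo_labels(proposal_feats, novel_text_embeds, threshold=0.3):
    """proposal_feats: N x D, novel_text_embeds: K x D (both L2-normalised).
    Returns (proposal index, novel class index, similarity) for confident matches."""
    sims = proposal_feats @ novel_text_embeds.T      # N x K cosine similarities
    best_cls = sims.argmax(axis=1)
    best_sim = sims.max(axis=1)
    keep = best_sim >= threshold                     # only confident zero-shot matches
    return [(int(i), int(best_cls[i]), float(best_sim[i])) for i in np.flatnonzero(keep)]
```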
arXiv Detail & Related papers (2023-10-02T17:52:24Z) - LORD: Leveraging Open-Set Recognition with Unknown Data [10.200937444995944]
LORD is a framework to Leverage Open-set Recognition by exploiting unknown data.
We identify three model-agnostic training strategies that exploit background data and apply them to well-established classifiers.
arXiv Detail & Related papers (2023-08-24T06:12:41Z) - Detecting Out-of-distribution Examples via Class-conditional Impressions Reappearing [30.938412222724608]
Out-of-distribution (OOD) detection aims at enhancing standard deep neural networks to distinguish anomalous inputs from original training data.
Due to privacy and security concerns, auxiliary data tends to be impractical in real-world scenarios.
We propose a data-free method that requires no training on natural data, called Class-Conditional Impressions Reappearing (C2IR).
arXiv Detail & Related papers (2023-03-17T02:55:08Z) - READ: Aggregating Reconstruction Error into Out-of-distribution Detection [5.069442437365223]
Deep neural networks are known to be overconfident on abnormal data.
We propose READ (Reconstruction Error Aggregated Detector) to unify inconsistencies from the classifier and the autoencoder.
Our method reduces the average FPR@95TPR by up to 9.8% compared with previous state-of-the-art OOD detection algorithms.
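A minimal sketch of aggregating a classifier-side score with an autoencoder's reconstruction error into one OOD score; the negative maximum-softmax-probability term, the min-max normalisation, and the convex weighting are illustrative assumptions rather than READ's exact formulation.

```python
# Hedged sketch of combining classifier and autoencoder evidence into one score.
import numpy as np

def aggregated_ood_score(logits, inputs, reconstructions, alpha=0.5):
    """Higher score -> more likely OOD."""
    # Classifier-side evidence: one minus the maximum softmax probability.
    shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
    msp = (shifted / shifted.sum(axis=1, keepdims=True)).max(axis=1)
    cls_score = 1.0 - msp
    # Autoencoder-side evidence: per-sample mean squared reconstruction error.
    rec_score = ((inputs - reconstructions) ** 2).reshape(len(inputs), -1).mean(axis=1)
    # Normalise each score to [0, 1] and take a convex combination.
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-8)
    return alpha * norm(cls_score) + (1 - alpha) * norm(rec_score)
```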
arXiv Detail & Related papers (2022-06-15T11:30:41Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
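As a rough illustration of a class-aware alignment objective, the sketch below measures how far each test feature sits from precomputed source statistics of its pseudo-class; the squared Mahalanobis form and the single shared precision matrix are assumptions about the loss, not the paper's definitive formulation (in practice this would be computed with an autodiff framework so its gradient can drive adaptation).

```python
# Hedged sketch of a class-aware feature alignment objective (NumPy for brevity).
import numpy as np

def class_aware_alignment_loss(features, pseudo_labels, class_means, precision):
    """features: N x D test features, pseudo_labels: length-N class indices,
    class_means: C x D source class means, precision: D x D shared inverse covariance."""
    diffs = features - class_means[pseudo_labels]              # N x D deviations
    # Squared Mahalanobis distance of each feature to its pseudo-class mean.
    maha = np.einsum('nd,de,ne->n', diffs, precision, diffs)
    return maha.mean()                                         # quantity a TTA step would minimise
```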
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
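A toy sketch of the "relabel rather than discard" idea: training instances flagged as OOD are kept, and those on which the current model is confident receive its predicted class as a new label. The confidence threshold and the use of hard labels are assumptions; the paper's actual procedure may differ.

```python
# Illustrative relabelling of flagged OOD training instances (assumed details).
import numpy as np

def relabel_flagged_instances(labels, probs, ood_flags, conf_threshold=0.9):
    """labels: length-N int array, probs: N x C predicted probabilities,
    ood_flags: length-N boolean array from any OOD detector."""
    new_labels = labels.copy()
    for i in np.flatnonzero(ood_flags):
        if probs[i].max() >= conf_threshold:
            new_labels[i] = int(probs[i].argmax())   # keep the sample, relabel it
    return new_labels
```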
arXiv Detail & Related papers (2020-02-11T21:08:06Z)