Local Background Features Matter in Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2510.12259v1
- Date: Tue, 14 Oct 2025 08:12:49 GMT
- Title: Local Background Features Matter in Out-of-Distribution Detection
- Authors: Jinlun Ye, Zhuohao Sun, Yiqiao Qiu, Qiu Li, Zhijun Tan, Ruixuan Wang
- Abstract summary: Out-of-distribution (OOD) detection is crucial when deploying deep neural networks in the real world to ensure the reliability and safety of their applications. One main challenge in OOD detection is that neural network models often produce overconfident predictions on OOD data. We propose a novel and effective OOD detection method that utilizes local background features as fake OOD features for model training.
- Score: 7.9113572982737494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is crucial when deploying deep neural networks in the real world to ensure the reliability and safety of their applications. One main challenge in OOD detection is that neural network models often produce overconfident predictions on OOD data. While some methods using auxiliary OOD datasets or generating fake OOD images have shown promising OOD detection performance, they are limited by the high costs of data collection and training. In this study, we propose a novel and effective OOD detection method that utilizes local background features as fake OOD features for model training. Inspired by the observation that OOD images generally share similar background regions with ID images, background features are extracted from ID images as simulated OOD visual representations during training, exploiting the local invariance of convolution. By optimizing the model to reduce the $L_2$-norm of these background features, the network alleviates the overconfidence issue on OOD data. Extensive experiments on multiple standard OOD detection benchmarks confirm the effectiveness of our method and its broad compatibility with existing post-hoc methods, achieving new state-of-the-art performance.
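The core training signal described in the abstract, a penalty on the $L_2$-norm of background ("fake OOD") feature vectors, can be sketched as follows. This is a minimal illustration under assumptions: the abstract does not specify how background locations are identified, so the background mask is taken as given, and the function name and data layout are hypothetical.

```python
def background_l2_loss(feature_map, background_mask):
    """Auxiliary loss (sketch): mean squared L2-norm of the local feature
    vectors at background spatial locations.

    feature_map: nested list [H][W][C] of local features from a conv backbone.
    background_mask: nested list [H][W] of bools, True where background.
    """
    norms = []
    for row_feats, row_mask in zip(feature_map, background_mask):
        for vec, is_bg in zip(row_feats, row_mask):
            if is_bg:
                # Squared L2-norm of one background ("fake OOD") feature vector.
                norms.append(sum(x * x for x in vec))
    # Mean over background locations; training drives this toward zero,
    # which suppresses confidence on background-like (OOD) inputs.
    return sum(norms) / len(norms) if norms else 0.0
```

In training, this term would be added to the standard classification loss with a weighting coefficient, so only the foreground features carry class evidence.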
Related papers
- Logits-Based Finetuning [48.18151583153572]
We propose a logits-based fine-tuning framework that integrates the strengths of supervised learning and knowledge distillation. Our approach constructs enriched training targets by combining teacher logits with ground truth labels, preserving both correctness and linguistic diversity.
arXiv Detail & Related papers (2025-05-30T10:57:09Z)
- Unsupervised Out-of-Distribution Detection in Medical Imaging Using Multi-Exit Class Activation Maps and Feature Masking [15.899277292315995]
Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models in medical imaging applications. This work is motivated by the observation that class activation maps (CAMs) for in-distribution (ID) data typically emphasize regions that are highly relevant to the model's predictions. We introduce a novel unsupervised OOD detection framework, Multi-Exit Class Activation Map (MECAM), which leverages multi-exit CAMs and feature masking.
arXiv Detail & Related papers (2025-05-13T14:18:58Z)
- Advancing Out-of-Distribution Detection via Local Neuroplasticity [60.53625435889467]
This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs). Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples. We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
arXiv Detail & Related papers (2025-02-20T11:13:41Z)
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Skeleton-OOD: An End-to-End Skeleton-Based Model for Robust Out-of-Distribution Human Action Detection [17.85872085904999]
We propose a novel end-to-end skeleton-based model called Skeleton-OOD. Skeleton-OOD aims to improve OOD detection effectiveness while preserving the accuracy of ID recognition. Our findings underscore the effectiveness of classic OOD detection techniques in the context of skeleton-based action recognition tasks.
arXiv Detail & Related papers (2024-05-31T05:49:37Z)
- Classifier-head Informed Feature Masking and Prototype-based Logit Smoothing for Out-of-Distribution Detection [27.062465089674763]
Out-of-distribution (OOD) detection is essential when deploying neural networks in the real world.
One main challenge is that neural networks often make overconfident predictions on OOD data.
We propose an effective post-hoc OOD detection method based on a new feature masking strategy and a novel logit smoothing strategy.
arXiv Detail & Related papers (2023-10-27T12:42:17Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find surprisingly that simply using reconstruction-based methods could boost the performance of OOD detection significantly.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated to the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
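The linear-correlation observation above suggests a simple calibration step. The sketch below is an assumption-laden simplification of the ETLT idea, not the paper's exact procedure: it fits a one-dimensional linear map from a scalar feature to an initial OOD score by ordinary least squares over a test batch, then returns the fitted scores as the refined ones. The function name and scalar-feature setup are illustrative choices.

```python
def fit_linear_calibration(scalar_feats, init_scores):
    """Fit score ~ w * feature + b by ordinary least squares over the test
    batch, then return the refined (fitted) scores."""
    n = len(scalar_feats)
    mx = sum(scalar_feats) / n
    my = sum(init_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(scalar_feats, init_scores))
    var = sum((x - mx) ** 2 for x in scalar_feats)
    w = cov / var if var else 0.0   # slope of the least-squares line
    b = my - w * mx                 # intercept
    return [w * x + b for x in scalar_feats]
```

An online variant, as the summary notes, would update `w` and `b` incrementally as test samples arrive instead of refitting on a full batch.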
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique for reducing model overconfidence on OOD data.
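The rectification idea is concrete enough to sketch: clamp the penultimate-layer activations at an upper threshold before computing logits, then score with the energy (log-sum-exp of logits). This is a simplified illustration; in the ReAct paper the threshold is estimated as a high percentile of ID activations, whereas here it is passed in directly, and the plain-list matrix layout is a convenience.

```python
import math

def react_energy_score(penultimate, W, b, clip=1.0):
    """Clamp penultimate activations at `clip`, form logits, and return the
    energy score (higher = more ID-like).

    penultimate: list of activations, length D.
    W: per-class weight rows, shape [num_classes][D]; b: per-class biases.
    """
    h = [min(a, clip) for a in penultimate]          # rectified activations
    logits = [sum(hi * wij for hi, wij in zip(h, row)) + bj
              for row, bj in zip(W, b)]
    # Energy score = logsumexp of logits (computed stably).
    m = max(logits)
    return m + math.log(sum(math.exp(l - m) for l in logits))
```

Truncating unusually large activations removes the abnormal spikes that OOD inputs tend to trigger, which is what pulls their scores down relative to ID inputs.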
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method that, starting from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.