Perturbations in the Orthogonal Complement Subspace for Efficient Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2511.00849v1
- Date: Sun, 02 Nov 2025 08:21:13 GMT
- Title: Perturbations in the Orthogonal Complement Subspace for Efficient Out-of-Distribution Detection
- Authors: Zhexiao Huang, Weihao He, Shutao Deng, Junzhe Chen, Chao Yuan, Hongxin Wang, Changsheng Zhou
- Abstract summary: Out-of-distribution (OOD) detection is essential for deploying deep learning models in open-world environments. We introduce P-OCS, a lightweight and theoretically grounded method that operates in the orthogonal complement of the principal subspace defined by ID features. We show that a one-step update is sufficient in the small-perturbation regime and provide convergence guarantees for the resulting detection score.
- Score: 5.986846311786858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection is essential for deploying deep learning models in open-world environments. Existing approaches, such as energy-based scoring and gradient-projection methods, typically rely on high-dimensional representations to separate in-distribution (ID) and OOD samples. We introduce P-OCS (Perturbations in the Orthogonal Complement Subspace), a lightweight and theoretically grounded method that operates in the orthogonal complement of the principal subspace defined by ID features. P-OCS applies a single projected perturbation restricted to this complementary subspace, enhancing subtle ID-OOD distinctions while preserving the geometry of ID representations. We show that a one-step update is sufficient in the small-perturbation regime and provide convergence guarantees for the resulting detection score. Experiments across multiple architectures and datasets demonstrate that P-OCS achieves state-of-the-art OOD detection with negligible computational cost and without requiring model retraining, access to OOD data, or changes to model architecture.
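The abstract describes the core mechanism: compute the principal subspace of ID features, then apply a single perturbation restricted to its orthogonal complement and score the result. A minimal sketch of that idea, using numpy, is below. The function names, the residual-energy score, and the perturbation step size `eps` are illustrative assumptions; the paper's exact score and guarantees are not reproduced here.

```python
import numpy as np

def fit_principal_subspace(id_features, k):
    """Top-k principal directions of centered ID features.
    Returns the feature mean and a (d, k) orthonormal basis V."""
    mu = id_features.mean(axis=0)
    _, _, vt = np.linalg.svd(id_features - mu, full_matrices=False)
    return mu, vt[:k].T

def pocs_score(z, mu, V, eps=0.1):
    """Hypothetical P-OCS-style score (a sketch, not the paper's formula):
    perturb z only within the orthogonal complement of the ID principal
    subspace, then measure the remaining energy in that complement.
    ID features lie near the subspace, so their score stays near zero."""
    zc = z - mu
    residual = zc - V @ (V.T @ zc)   # component in the orthogonal complement
    z_pert = zc + eps * residual     # one-step perturbation, complement only
    return np.linalg.norm(z_pert - V @ (V.T @ z_pert))
```

Under this sketch, an ID sample close to the principal subspace has a near-zero complement component, so the perturbation barely moves it and the score stays small, while an OOD sample with substantial energy outside the subspace receives an amplified score.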
Related papers
- Learning to Explore: Policy-Guided Outlier Synthesis for Graph Out-of-Distribution Detection [51.93878677594561]
In unsupervised graph-level OOD detection, models are typically trained using only in-distribution (ID) data. We propose a Policy-Guided Outlier Synthesis framework that replaces static synthesis heuristics with a learned exploration strategy.
arXiv Detail & Related papers (2026-02-28T11:40:18Z) - Predictive Sample Assignment for Semantically Coherent Out-of-Distribution Detection [62.1052001316508]
Semantically coherent out-of-distribution detection (SCOOD) is a recently proposed realistic OOD detection setting. We propose a concise SCOOD framework based on predictive sample assignment (PSA). Our approach outperforms the state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2025-12-15T01:18:38Z) - SupLID: Geometrical Guidance for Out-of-Distribution Detection in Semantic Segmentation [6.1937472685875]
Out-of-Distribution (OOD) detection in semantic segmentation aims to localize anomalous regions at the pixel level. Recent literature has successfully explored the adaptation of commonly used image-level OOD methods. We introduce SupLID, a novel framework that effectively guides classifier-derived OOD scores by exploiting the geometrical structure of the underlying semantic space.
arXiv Detail & Related papers (2025-11-24T06:49:54Z) - Revisiting Logit Distributions for Reliable Out-of-Distribution Detection [73.9121001113687]
Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning models in open-world applications. LogitGap is a novel post-hoc OOD detection method that exploits the relationship between the maximum logit and the remaining logits. We show that LogitGap consistently achieves state-of-the-art performance across diverse OOD detection scenarios and benchmarks.
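The summary says LogitGap scores samples using the relationship between the maximum logit and the rest. One plausible gap-style score is the margin between the top logit and the mean of the others; this sketch is an assumption for illustration, not LogitGap's exact formulation.

```python
import numpy as np

def logit_gap_score(logits):
    """Hypothetical gap-style OOD score: margin between the maximum logit
    and the mean of the remaining logits. A confident ID prediction yields
    a large gap; a flat (uncertain) logit vector yields a gap near zero."""
    logits = np.asarray(logits, dtype=float)
    top_idx = logits.argmax()
    rest = np.delete(logits, top_idx)
    return logits[top_idx] - rest.mean()
```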
arXiv Detail & Related papers (2025-10-23T02:16:45Z) - GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection [61.96025941146103]
GOOD is a novel framework that guides sampling trajectories towards OOD regions using off-the-shelf in-distribution (ID) classifiers. GOOD incorporates dual-level guidance: image-level guidance, based on the gradient of the log-partition function, reduces input likelihood and drives samples toward low-density regions in pixel space. We introduce a unified OOD score that adaptively combines image and feature discrepancies, enhancing detection robustness.
arXiv Detail & Related papers (2025-10-20T03:58:46Z) - Mysteries of the Deep: Role of Intermediate Representations in Out of Distribution Detection [0.0]
Existing methods treat large pre-trained models as monolithic encoders and rely solely on their final-layer representations for detection. We reveal that the intermediate layers of pre-trained models, shaped by residual connections, subtly transform input projections. We show that selectively incorporating these intermediate representations can increase the accuracy of OOD detection by up to 10% on far-OOD and over 7% on near-OOD benchmarks.
arXiv Detail & Related papers (2025-10-07T10:55:47Z) - Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection [6.5679810906772325]
We propose a novel OOD detection framework based on a pseudo-label-induced subspace representation. In addition, we introduce a simple yet effective learning criterion that integrates a cross-entropy-based ID classification loss with a subspace distance-based regularization loss to enhance ID-OOD separability.
arXiv Detail & Related papers (2025-08-05T05:38:00Z) - Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection [30.02748131967826]
Unsupervised out-of-distribution (OOD) detection aims to identify out-of-domain data by learning only from unlabeled In-Distribution (ID) training samples.
Current reconstruction-based methods provide a good alternative approach by measuring the reconstruction error between the input and its corresponding generative counterpart in the pixel/feature space.
We propose the diffusion-based layer-wise semantic reconstruction approach for unsupervised OOD detection.
arXiv Detail & Related papers (2024-11-16T04:54:07Z) - Optimizing OOD Detection in Molecular Graphs: A Novel Approach with Diffusion Models [71.39421638547164]
We propose to detect OOD molecules by adopting an auxiliary diffusion model-based framework, which compares similarities between input molecules and reconstructed graphs.
Due to the generative bias towards reconstructing ID training samples, the similarity scores of OOD molecules will be much lower, which facilitates detection.
Our research pioneers an approach of Prototypical Graph Reconstruction for Molecular OOD Detection, dubbed PGR-MOOD, which hinges on three innovations.
arXiv Detail & Related papers (2024-04-24T03:25:53Z) - GROOD: GRadient-Aware Out-of-Distribution Detection [11.511906612904255]
Out-of-distribution (OOD) detection is crucial for ensuring the reliability of deep learning models in real-world applications. We propose GRadient-aware Out-Of-Distribution detection (GROOD), a method that derives an OOD prototype from synthetic samples and computes class prototypes directly from In-distribution (ID) training data.
arXiv Detail & Related papers (2023-12-22T04:28:43Z) - Improving Out-of-Distribution Detection with Disentangled Foreground and Background Features [23.266183020469065]
We propose a novel framework that disentangles foreground and background features from ID training samples via a dense prediction approach.
It is a generic framework that allows for a seamless combination with various existing OOD detection methods.
arXiv Detail & Related papers (2023-03-15T16:12:14Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
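GNNSafe is described as an energy-based OOD detector for GNNs. The standard per-sample energy score that this family of methods builds on can be sketched as follows; the propagation scheme GNNSafe adds on top of it for graph nodes is not reproduced here.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Standard energy-based OOD score: E(x) = -T * logsumexp(logits / T).
    Lower energy indicates a more ID-like input; OOD inputs tend to
    receive higher (less negative) energy. Computed with a stabilized
    logsumexp to avoid overflow."""
    logits = np.asarray(logits, dtype=float)
    m = logits.max()
    return -(m + T * np.log(np.exp((logits - m) / T).sum()))
```

In practice, a threshold on this score (chosen on a validation set) separates ID from OOD inputs.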
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.