Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2508.03108v1
- Date: Tue, 05 Aug 2025 05:38:00 GMT
- Title: Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection
- Authors: Tarhib Al Azad, Faizul Rakib Sayem, Shahana Ibrahim
- Abstract summary: We propose a novel OOD detection framework based on a pseudo-label-induced subspace representation. In addition, we introduce a simple yet effective learning criterion that integrates a cross-entropy-based ID classification loss with a subspace distance-based regularization loss to enhance ID-OOD separability.
- Score: 6.5679810906772325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection lies at the heart of robust artificial intelligence (AI), aiming to identify samples from novel distributions beyond the training set. Recent approaches have exploited feature representations as distinguishing signatures for OOD detection. However, most existing methods rely on restrictive assumptions on the feature space that limit the separability between in-distribution (ID) and OOD samples. In this work, we propose a novel OOD detection framework based on a pseudo-label-induced subspace representation, that works under more relaxed and natural assumptions compared to existing feature-based techniques. In addition, we introduce a simple yet effective learning criterion that integrates a cross-entropy-based ID classification loss with a subspace distance-based regularization loss to enhance ID-OOD separability. Extensive experiments validate the effectiveness of our framework.
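The abstract does not spell out the exact formulation, but a minimal sketch of the described learning criterion (a cross-entropy ID classification loss combined with a subspace distance-based regularizer) could look as follows. The per-pseudo-label subspace bases and the weight `lambda_reg` are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a cross-entropy ID classification loss combined
# with a subspace distance-based regularizer, as described in the abstract.
# The per-pseudo-label subspace bases and the weight lambda_reg are
# assumptions for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F


def subspace_distance(features: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of feature vectors to the subspace spanned by `basis`.

    features: (N, D) feature vectors
    basis:    (D, r) matrix with orthonormal columns spanning an r-dim subspace
    """
    projected = features @ basis @ basis.T  # orthogonal projection onto the subspace
    return ((features - projected) ** 2).sum(dim=1).mean()


def training_loss(logits, features, labels, pseudo_labels, bases, lambda_reg=0.1):
    """Cross-entropy ID loss plus subspace distance-based regularization.

    bases: dict mapping each pseudo-label k to an orthonormal basis of shape (D, r).
    """
    ce_loss = F.cross_entropy(logits, labels)

    reg_loss = features.new_zeros(())
    for k, basis in bases.items():
        mask = pseudo_labels == k
        if mask.any():
            reg_loss = reg_loss + subspace_distance(features[mask], basis)
    reg_loss = reg_loss / max(len(bases), 1)

    return ce_loss + lambda_reg * reg_loss
```

The bases themselves could be estimated, for instance, from an SVD of the features grouped by pseudo-label and refreshed periodically during training; the abstract does not specify this detail.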
Related papers
- Enclosing Prototypical Variational Autoencoder for Explainable Out-of-Distribution Detection [0.013888374577155822]
We extend self-explainable Prototypical Variational models with autoencoder-based out-of-distribution (OOD) detection. A Variational Autoencoder is applied to learn a meaningful latent space which can be used for distance-based classification. A novel restriction loss is introduced that promotes a compact ID region in the latent space without collapsing it into single points.
arXiv Detail & Related papers (2025-06-17T10:38:29Z) - Advancing Out-of-Distribution Detection via Local Neuroplasticity [60.53625435889467]
This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs). Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples. We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
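A toy sketch of the stated idea, using an ordinary two-layer network as a stand-in for a KAN: the OOD score is derived from how little the trained network's activations deviate from those of its untrained (randomly initialized) counterpart. The architecture and the L2 deviation measure are assumptions for illustration, not the paper's exact criterion.

```python
# Toy illustration of the stated idea: compare the activation pattern of a
# trained network against its untrained (randomly initialized) counterpart.
# An ordinary two-layer MLP stands in for a KAN here, and the L2 deviation
# is an assumed score, not the paper's exact criterion. The intuition is
# that training reshapes activations only where ID data lies, so a small
# deviation suggests an OOD input.
import torch
import torch.nn as nn


class TwoLayerNet(nn.Module):
    def __init__(self, d_in=32, d_hidden=64, d_out=10):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = self.hidden(x)  # hidden activation pattern
        return self.head(h), h


def activation_deviation_score(x, trained, untrained):
    """Per-sample L2 deviation between trained and untrained hidden activations."""
    with torch.no_grad():
        _, h_trained = trained(x)
        _, h_untrained = untrained(x)
    return (h_trained - h_untrained).norm(dim=1)  # low value -> likely OOD
```

In use, a frozen copy of the network at initialization would be kept alongside the trained model, and test inputs whose deviation falls below a validation-chosen threshold flagged as OOD.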
arXiv Detail & Related papers (2025-02-20T11:13:41Z) - ARES: Auxiliary Range Expansion for Outlier Synthesis [1.7306463705863946]
We propose a novel methodology for OOD detection named Auxiliary Range Expansion for Outlier Synthesis (ARES). ARES consists of several stages that ultimately generate valuable OOD-like virtual instances. An energy score-based discriminator is then trained to effectively separate in-distribution data from outlier data.
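The energy score referenced here is commonly computed from classifier logits as a temperature-scaled negative log-sum-exp, with lower energy expected for in-distribution data; a minimal sketch (the default temperature is an assumption):

```python
# Minimal sketch of the standard energy score: E(x) = -T * logsumexp(f(x) / T)
# over the class logits, with lower energy expected for ID samples.
# The temperature T = 1.0 is an illustrative default.
import torch


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """logits: (N, K) classifier outputs; returns (N,) energy scores."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)
```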
arXiv Detail & Related papers (2025-01-11T05:44:33Z) - Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z) - Margin-bounded Confidence Scores for Out-of-Distribution Detection [2.373572816573706]
We propose a novel method called Margin-bounded Confidence Scores (MaCS) to address the nontrivial OOD detection problem.
MaCS enlarges the disparity between ID and OOD scores, which in turn makes the decision boundary more compact.
Experiments on various benchmark datasets for image classification tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-09-22T05:40:25Z) - Diffusion based Semantic Outlier Generation via Nuisance Awareness for Out-of-Distribution Detection [9.936136347796413]
Out-of-distribution (OOD) detection has recently shown promising results through training with synthetic OOD datasets. We propose a novel framework, Semantic Outlier generation via Nuisance Awareness (SONA), which notably produces challenging outliers. Our approach incorporates SONA guidance, providing separate control over semantic and nuisance regions of ID samples.
arXiv Detail & Related papers (2024-08-27T07:52:44Z) - Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test-Time Adaptation framework for Out-of-Distribution Detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z) - Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z) - Improving Out-of-Distribution Detection with Disentangled Foreground and Background Features [23.266183020469065]
We propose a novel framework that disentangles foreground and background features from ID training samples via a dense prediction approach.
It is a generic framework that allows for a seamless combination with various existing OOD detection methods.
arXiv Detail & Related papers (2023-03-15T16:12:14Z) - Plugin estimators for selective classification with out-of-distribution detection [67.28226919253214]
Real-world classifiers can benefit from abstaining from predicting on samples where they have low confidence.
These settings have been the subject of extensive but disjoint study in the selective classification (SC) and out-of-distribution (OOD) detection literature.
Recent work on selective classification with OOD detection has argued for the unified study of these problems.
We propose new plugin estimators for SCOD that are theoretically grounded, effective, and generalise existing approaches.
arXiv Detail & Related papers (2023-01-29T07:45:17Z) - How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [22.519572587827213]
CIDER is a representation learning framework that exploits hyperspherical embeddings for OOD detection.
CIDER establishes superior performance, outperforming the latest rival by 19.36% in FPR95.
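FPR95 is the false positive rate on OOD samples at the threshold where 95% of ID samples are accepted; a minimal sketch, assuming higher scores indicate in-distribution:

```python
# Minimal sketch of FPR95: the fraction of OOD samples still accepted as ID
# when the threshold is set so that 95% of ID samples are accepted.
# Assumes higher scores mean "more in-distribution".
import numpy as np


def fpr_at_95_tpr(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    threshold = np.percentile(id_scores, 5)  # accept the top 95% of ID scores
    return float(np.mean(ood_scores >= threshold))
```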
arXiv Detail & Related papers (2022-03-08T23:44:01Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
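A minimal sketch of a robust outlier-exposure style objective matching this description: cross-entropy on (possibly adversarially perturbed) inlier examples plus a term pushing predictions on (perturbed) outlier examples toward the uniform distribution. The uniform-target term and the weight `lambda_oe` are illustrative assumptions, not necessarily ALOE's exact objective.

```python
# Illustrative sketch of a robust outlier-exposure style objective:
# cross-entropy on (possibly adversarially perturbed) inlier examples plus
# a term pushing predictions on (perturbed) outlier examples toward the
# uniform distribution. The uniform-target term and the weight lambda_oe
# are assumptions, not necessarily ALOE's exact objective.
import torch.nn.functional as F


def robust_outlier_exposure_loss(inlier_logits, inlier_labels, outlier_logits, lambda_oe=0.5):
    ce_inlier = F.cross_entropy(inlier_logits, inlier_labels)
    # Cross-entropy against the uniform distribution over the K classes.
    log_probs = F.log_softmax(outlier_logits, dim=1)
    ce_uniform = -log_probs.mean(dim=1).mean()
    return ce_inlier + lambda_oe * ce_uniform
```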
arXiv Detail & Related papers (2020-03-21T17:46:28Z)