Network Inversion for Uncertainty-Aware Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2505.23448v1
- Date: Thu, 29 May 2025 13:53:52 GMT
- Title: Network Inversion for Uncertainty-Aware Out-of-Distribution Detection
- Authors: Pirzada Suhail, Rehna Afroz, Amit Sethi
- Abstract summary: Out-of-distribution (OOD) detection and uncertainty estimation are critical components for building safe machine learning systems. We propose a novel framework that combines network inversion with classifier training to address both OOD detection and uncertainty estimation. Our approach is scalable, interpretable, and does not require access to external OOD datasets or post-hoc calibration techniques.
- Score: 2.6733991338938026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) detection and uncertainty estimation (UE) are critical components for building safe machine learning systems, especially in real-world scenarios where unexpected inputs are inevitable. In this work, we propose a novel framework that combines network inversion with classifier training to simultaneously address both OOD detection and uncertainty estimation. For a standard n-class classification task, we extend the classifier to an (n+1)-class model by introducing a "garbage" class, initially populated with random Gaussian noise to represent outlier inputs. After each training epoch, we use network inversion to reconstruct input images corresponding to all output classes; these reconstructions initially appear noisy and incoherent and are therefore relegated to the garbage class for retraining the classifier. This cycle of training, inversion, and exclusion continues iteratively until the inverted samples begin to resemble the in-distribution data more closely, suggesting that the classifier has learned to carve out meaningful decision boundaries while sanitising the class manifolds by pushing OOD content into the garbage class. During inference, this training scheme enables the model to effectively detect and reject OOD samples by classifying them into the garbage class. Furthermore, the confidence scores associated with each prediction can be used to estimate uncertainty for both in-distribution and OOD inputs. Our approach is scalable, interpretable, and does not require access to external OOD datasets or post-hoc calibration techniques, while providing a unified solution to the dual challenges of OOD detection and uncertainty estimation.
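To make the loop concrete, here is a minimal PyTorch-style sketch of the train-invert-exclude cycle described above. It is a sketch under stated assumptions, not the paper's implementation: `invert`, the image shape, the pool size, and `N_CLASSES` are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

N_CLASSES = 10            # the original n classes (MNIST-like, an assumption)
GARBAGE = N_CLASSES       # index of the extra (n+1)-th "garbage" class

def train_invert_exclude(model, optimizer, loader, invert, epochs):
    """One rendition of the train -> invert -> exclude cycle.

    `invert(model, c)` is a placeholder for the paper's network inversion
    step: it should return a batch of inputs that the current model maps
    to output class `c`. Shapes assume 1x28x28 images.
    """
    garbage_pool = torch.randn(512, 1, 28, 28)   # start from Gaussian noise
    for _ in range(epochs):
        # (1) Train the (n+1)-class classifier on real data plus the pool.
        for x, y in loader:
            idx = torch.randint(len(garbage_pool), (len(x),))
            gx, gy = garbage_pool[idx], torch.full((len(x),), GARBAGE)
            loss = F.cross_entropy(model(torch.cat([x, gx])),
                                   torch.cat([y, gy]))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # (2) Invert every output class, then (3) relegate the reconstructions
        # to the garbage pool so the next epoch pushes OOD content out of the
        # in-distribution class manifolds.
        inverted = torch.cat([invert(model, c) for c in range(N_CLASSES)])
        garbage_pool = torch.cat([garbage_pool, inverted.detach()])
    return model
```

At inference, an input would be rejected as OOD whenever the argmax falls on the garbage index, with the softmax confidence doubling as the uncertainty estimate.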
Related papers
- Outlier detection by ensembling uncertainty with negative objectness [0.0]
Outlier detection is an essential capability in safety-critical applications of supervised visual recognition.
We reconsider direct prediction of K+1 logits that correspond to K ground-truth classes and one outlier class.
We embed our method into a dense prediction architecture with mask-level recognition over K+2 classes.
arXiv Detail & Related papers (2024-02-23T15:19:37Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Unified Classification and Rejection: A One-versus-All Framework [47.58109235690227]
We present a unified framework for building open-set classifiers that handle both classification and OOD rejection.
By decomposing the $K$-class problem into $K$ one-versus-all (OVA) binary classification tasks, we show that combining the scores of OVA classifiers can give $(K+1)$-class posterior probabilities.
Experiments on popular OSR and OOD detection datasets demonstrate that the proposed framework, using a single multi-class classifier, yields competitive performance.
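One standard way to realize such a combination is sketched below, assuming independent, sigmoid-calibrated OVA outputs; the paper's exact construction may differ, and the function name is illustrative.

```python
import numpy as np

def ova_to_posteriors(ova_probs, eps=1e-12):
    """Turn K one-versus-all probabilities p_k = P(class k | x) into
    (K+1)-class posteriors, where the extra class K means "reject"."""
    p = np.asarray(ova_probs, dtype=float)
    none = np.prod(1.0 - p)                  # every OVA head says "not me"
    per_class = p / (1.0 - p + eps) * none   # = p_k * prod_{j != k} (1 - p_j)
    scores = np.append(per_class, none)
    return scores / scores.sum()

print(ova_to_posteriors([0.9, 0.05, 0.1]))
# -> most mass on class 0, a little on the (K+1)-th reject class
```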
arXiv Detail & Related papers (2023-11-22T12:47:12Z)
- Large Class Separation is not what you need for Relational Reasoning-based OOD Detection [12.578844450586]
Out-Of-Distribution (OOD) detection methods provide a solution by identifying semantic novelty.
Most of these methods leverage a learning stage on the known data, which means training (or fine-tuning) a model to capture the concept of normality.
A viable alternative is to evaluate similarities in the embedding space produced by large pre-trained models, without any further learning effort.
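A minimal sketch of that training-free alternative, scoring normality as cosine similarity to the nearest known-data embedding (the frozen feature extractor and array shapes are assumptions):

```python
import numpy as np

def normality_score(query_emb, known_embs):
    """Cosine similarity to the nearest known-class embedding; a low score
    suggests semantic novelty. `known_embs` are features extracted once
    from a frozen pre-trained model, so no training stage is needed."""
    q = query_emb / np.linalg.norm(query_emb)
    k = known_embs / np.linalg.norm(known_embs, axis=1, keepdims=True)
    return float((k @ q).max())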
arXiv Detail & Related papers (2023-07-12T14:10:15Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- READ: Aggregating Reconstruction Error into Out-of-distribution Detection [5.069442437365223]
Deep neural networks are known to be overconfident for abnormal data.
We propose READ (Reconstruction Error Aggregated Detector) to unify inconsistencies from classifier and autoencoder.
Our method reduces the average FPR@95TPR by up to 9.8% compared with previous state-of-the-art OOD detection algorithms.
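The aggregation idea can be sketched as follows; the linear weighting and the stand-in `classifier` and `autoencoder` modules are illustrative assumptions, not READ's exact recipe.

```python
import torch
import torch.nn.functional as F

def ood_score(x, classifier, autoencoder, alpha=0.5):
    """Higher score = more likely OOD. Fuses two inconsistency signals:
    low max-softmax confidence and high per-sample reconstruction error."""
    with torch.no_grad():
        conf = F.softmax(classifier(x), dim=1).amax(dim=1)
        err = F.mse_loss(autoencoder(x), x, reduction="none").flatten(1).mean(1)
    return alpha * (1.0 - conf) + (1.0 - alpha) * err
```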
arXiv Detail & Related papers (2022-06-15T11:30:41Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that combines, from first principles, a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance on non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder [1.7305469511995404]
Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless.
We show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm.
We also show that Glow likelihood-based OOD detection is breakable as well.
arXiv Detail & Related papers (2020-09-17T02:10:36Z)
- Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles the naturally non-uniformly distributed data of real-world scenarios.
While modern classifiers perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.