Optimal Transport-Induced Samples against Out-of-Distribution Overconfidence
- URL: http://arxiv.org/abs/2601.21320v1
- Date: Thu, 29 Jan 2026 06:29:36 GMT
- Title: Optimal Transport-Induced Samples against Out-of-Distribution Overconfidence
- Authors: Keke Tang, Ziyong Du, Xiaofei Wang, Weilong Peng, Peican Zhu, Zhihong Tian,
- Abstract summary: Singularities in semi-discrete optimal transport (OT) mark regions of semantic ambiguity. We propose a principled framework to mitigate OOD overconfidence by leveraging the geometry of OT-induced singular boundaries. Our method significantly alleviates OOD overconfidence and outperforms state-of-the-art methods.
- Score: 36.624406746797085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) often produce overconfident predictions on out-of-distribution (OOD) inputs, undermining their reliability in open-world environments. Singularities in semi-discrete optimal transport (OT) mark regions of semantic ambiguity, where classifiers are particularly prone to unwarranted high-confidence predictions. Motivated by this observation, we propose a principled framework to mitigate OOD overconfidence by leveraging the geometry of OT-induced singular boundaries. Specifically, we formulate an OT problem between a continuous base distribution and the latent embeddings of training data, and identify the resulting singular boundaries. By sampling near these boundaries, we construct a class of OOD inputs, termed optimal transport-induced OOD samples (OTIS), which are geometrically grounded and inherently semantically ambiguous. During training, a confidence suppression loss is applied to OTIS to guide the model toward more calibrated predictions in structurally uncertain regions. Extensive experiments show that our method significantly alleviates OOD overconfidence and outperforms state-of-the-art methods.
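The following is a minimal, hypothetical sketch of how the pipeline described above could be instantiated: draw points from a continuous base distribution, assign them to latent anchors via semi-discrete OT (Laguerre-cell) costs, keep points whose two cheapest assignments are nearly tied as a proxy for the singular boundaries, and apply a confidence suppression loss on the resulting samples. The near-tie criterion, the `sample_otis` helper, and the KL-to-uniform loss form are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of OT-induced OOD samples (OTIS) + confidence suppression.
# The boundary criterion and loss below are assumptions for illustration only.
import torch
import torch.nn.functional as F

def semi_discrete_assignment_costs(x, anchors, potentials):
    """Semi-discrete OT dual costs c(x, y_j) - phi_j with squared Euclidean cost."""
    sq_dist = torch.cdist(x, anchors) ** 2          # (B, K) pairwise squared distances
    return sq_dist - potentials.unsqueeze(0)        # subtract dual potential per anchor

def sample_otis(anchors, potentials, n_draws=4096, frac=0.05):
    """Draw base samples and keep those near a singular boundary, i.e. where the
    two cheapest Laguerre-cell assignments are nearly tied (heuristic proxy)."""
    d = anchors.shape[1]
    x = torch.randn(n_draws, d)                     # continuous base distribution (Gaussian)
    costs = semi_discrete_assignment_costs(x, anchors, potentials)
    two_smallest = torch.topk(costs, k=2, dim=1, largest=False).values
    gap = two_smallest[:, 1] - two_smallest[:, 0]   # near-zero gap ~ ambiguous region
    keep = torch.topk(gap, k=max(1, int(frac * n_draws)), largest=False).indices
    return x[keep]

def confidence_suppression_loss(logits):
    """Push predictions on OTIS toward uniform (cross-entropy against uniform targets)."""
    return -F.log_softmax(logits, dim=1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    anchors = torch.randn(10, 32)                   # latent embeddings of training data (toy)
    potentials = torch.zeros(10)                    # dual potentials (toy: untrained)
    otis = sample_otis(anchors, potentials)
    head = torch.nn.Linear(32, 10)                  # stand-in classifier head
    loss = confidence_suppression_loss(head(otis))
    print(otis.shape, loss.item())
```

In practice the dual potentials would be obtained by solving the semi-discrete OT problem, and the suppression loss would be added to the standard classification objective during training.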
Related papers
- TULiP: Test-time Uncertainty Estimation via Linearization and Weight Perturbation [11.334867025651233]
We propose TULiP, a theoretically-driven uncertainty estimator for OOD detection. Our approach considers a hypothetical perturbation applied to the network before convergence. Our method exhibits state-of-the-art performance, particularly for near-distribution samples.
arXiv Detail & Related papers (2025-05-22T17:16:41Z) - OT Score: An OT based Confidence Score for Source Free Unsupervised Domain Adaptation [2.6912673131004468]
We introduce the Optimal Transport (OT) score, a confidence metric derived from a novel theoretical analysis. The OT score is intuitively interpretable and theoretically rigorous. It provides principled uncertainty estimates for any given set of target pseudo-labels. It improves SFUDA performance through training-time reweighting and provides a reliable, label-free proxy for model performance.
arXiv Detail & Related papers (2025-05-16T20:09:05Z) - Network Inversion for Generating Confidently Classified Counterfeits [11.599035626374409]
In vision classification, generating inputs that elicit confident predictions is key to understanding model behavior and reliability. We extend network inversion techniques to generate Confidently Classified Counterfeits (CCCs). CCCs offer a model-centric perspective on confidence, revealing that models can assign high confidence to entirely synthetic, out-of-distribution inputs.
arXiv Detail & Related papers (2025-03-26T03:26:49Z) - Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset [23.155946032377052]
We introduce a novel instance-wise calibration method based on an energy model.
Our method incorporates energy scores instead of softmax confidence scores, allowing for adaptive consideration of uncertainty.
In experiments, we show that the proposed method consistently maintains robust performance across the spectrum.
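A small, hedged sketch of the energy-score idea summarized above: the energy score E(x) = -T * logsumexp(f(x)/T) replaces the max-softmax confidence, and the per-sample temperature mapping in `instance_wise_scaling` is only a plausible illustration of instance-wise scaling, not the paper's exact scheme.

```python
# Illustrative sketch: energy scores instead of softmax confidence, plus a
# hypothetical instance-wise temperature derived from each sample's energy.
import torch
import torch.nn.functional as F

def energy_score(logits, T=1.0):
    """Energy score E(x) = -T * logsumexp(logits / T); lower energy ~ more in-distribution."""
    return -T * torch.logsumexp(logits / T, dim=1)

def instance_wise_scaling(logits, t_min=1.0, t_max=5.0):
    """Map each sample's energy to its own temperature (assumed monotone mapping),
    so uncertain (high-energy) samples receive softer probabilities."""
    e = energy_score(logits)
    w = torch.sigmoid((e - e.mean()) / (e.std() + 1e-8))   # normalize energies to (0, 1)
    temps = t_min + (t_max - t_min) * w                    # per-sample temperature
    return F.softmax(logits / temps.unsqueeze(1), dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(8, 10) * 3
    print(energy_score(logits))
    print(instance_wise_scaling(logits).max(dim=1).values)  # per-sample confidences
```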
arXiv Detail & Related papers (2024-07-17T06:14:55Z) - Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z) - Free Lunch for Generating Effective Outlier Supervision [46.37464572099351]
We propose an ultra-effective method to generate near-realistic outlier supervision.
Our proposed BayesAug significantly reduces the false positive rate by over 12.50% compared with previous schemes.
arXiv Detail & Related papers (2023-01-17T01:46:45Z) - The Open-World Lottery Ticket Hypothesis for OOD Intent Classification [68.93357975024773]
We shed light on the fundamental cause of model overconfidence on OOD inputs.
We also extend the Lottery Ticket Hypothesis to open-world scenarios.
arXiv Detail & Related papers (2022-10-13T14:58:35Z) - CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue [22.900378003745196]
Overconfident predictions on out-of-distribution (OOD) samples are a thorny issue for deep neural networks.
This paper proposes the Chamfer OOD examples (CODEs), whose distribution is close to that of in-distribution samples.
We show that CODEs can be utilized to effectively alleviate the OOD overconfidence issue by suppressing predictions on them.
arXiv Detail & Related papers (2021-08-13T01:56:10Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way, we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks; a minimal sketch of the density-ratio idea appears after this list.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
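Below is the minimal sketch of the density-ratio idea from the last entry above: a domain discriminator separates source from target features, its odds give an importance weight for each source sample, and a calibration temperature is fit by importance-weighted NLL. The discriminator, the weighting, and the grid search are common-practice assumptions, not necessarily the authors' exact formulation.

```python
# Hedged sketch: density-ratio weighting for calibration under domain shift.
import torch
import torch.nn.functional as F

def fit_domain_discriminator(src_feats, tgt_feats, epochs=200, lr=1e-2):
    """Logistic regression separating source (label 1) from target (label 0) features."""
    x = torch.cat([src_feats, tgt_feats])
    y = torch.cat([torch.ones(len(src_feats)), torch.zeros(len(tgt_feats))])
    clf = torch.nn.Linear(x.shape[1], 1)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(clf(x).squeeze(1), y)
        loss.backward()
        opt.step()
    return clf

def density_ratio(clf, feats, eps=1e-6):
    """r(x) ~= p_target(x) / p_source(x) = (1 - D(x)) / D(x), with D = P(source | x)."""
    d = torch.sigmoid(clf(feats).squeeze(1)).clamp(eps, 1 - eps)
    return (1 - d) / d

def weighted_temperature(logits, labels, weights, grid=torch.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimizing the importance-weighted NLL on held-out source
    data, so calibration emphasizes source samples that look like the target domain."""
    best_t, best_nll = 1.0, float("inf")
    for t in grid:
        nll = (weights * F.cross_entropy(logits / t, labels, reduction="none")).mean()
        if nll < best_nll:
            best_t, best_nll = float(t), float(nll)
    return best_t

if __name__ == "__main__":
    torch.manual_seed(0)
    src, tgt = torch.randn(200, 16), torch.randn(200, 16) + 0.5   # toy shifted features
    clf = fit_domain_discriminator(src, tgt)
    w = density_ratio(clf, src)                                   # target-likeness of source samples
    logits, labels = torch.randn(200, 5), torch.randint(0, 5, (200,))
    print(weighted_temperature(logits, labels, w))
```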