Reject option models comprising out-of-distribution detection
- URL: http://arxiv.org/abs/2307.05199v1
- Date: Tue, 11 Jul 2023 12:09:14 GMT
- Title: Reject option models comprising out-of-distribution detection
- Authors: Vojtech Franc, Daniel Prusa, Jakub Paplham
- Abstract summary: The optimal prediction strategy for out-of-distribution setups is a fundamental question in machine learning.
We propose three reject option models for OOD setups.
We establish that all the proposed models, despite their different formulations, share a common class of optimal strategies.
- Score: 6.746400031322727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimal prediction strategy for out-of-distribution (OOD) setups is a
fundamental question in machine learning. In this paper, we address this
question and present several contributions. We propose three reject option
models for OOD setups: the Cost-based model, the Bounded TPR-FPR model, and the
Bounded Precision-Recall model. These models extend the standard reject option
models used in non-OOD setups and define the notion of an optimal OOD selective
classifier. We establish that all the proposed models, despite their different
formulations, share a common class of optimal strategies. Motivated by the
optimal strategy, we introduce double-score OOD methods that leverage
uncertainty scores from two chosen OOD detectors: one focused on OOD/ID
discrimination and the other on misclassification detection. The experimental
results consistently demonstrate the superior performance of this simple
strategy compared to state-of-the-art methods. Additionally, we propose novel
evaluation metrics derived from the definition of the optimal strategy under
the proposed OOD rejection models. These new metrics provide a comprehensive
and reliable assessment of OOD methods without the deficiencies observed in
existing evaluation approaches.
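To make the double-score idea above concrete, the sketch below combines an OOD/ID discrimination score with a misclassification-detection score and rejects a prediction whenever either score falls below its threshold. The energy and max-softmax scores, the threshold values, and the conjunctive acceptance rule are illustrative assumptions, not the exact optimal strategy derived in the paper.

```python
import numpy as np

def energy_score(logits):
    """OOD/ID score: log-sum-exp of the logits (higher = more ID-like)."""
    m = logits.max(axis=-1)
    return m + np.log(np.exp(logits - m[..., None]).sum(axis=-1))

def msp_score(logits):
    """Misclassification score: maximum softmax probability (higher = more confident)."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def double_score_predict(logits, tau_ood, tau_mis):
    """Predict the argmax class, or reject (-1) when either score falls below
    its threshold: one threshold guards against OOD inputs, the other against
    likely misclassifications of ID inputs."""
    s_ood = energy_score(logits)   # OOD/ID discrimination score
    s_mis = msp_score(logits)      # misclassification-detection score
    accept = (s_ood >= tau_ood) & (s_mis >= tau_mis)
    return np.where(accept, logits.argmax(axis=-1), -1)

# Toy example: 4 samples, 3 classes; the thresholds are placeholders.
logits = np.array([[4.0, 0.5, 0.2],
                   [0.1, 0.2, 0.15],
                   [2.0, 1.9, 1.8],
                   [5.0, -1.0, -2.0]])
print(double_score_predict(logits, tau_ood=1.5, tau_mis=0.6))
```

Under the Bounded TPR-FPR and Bounded Precision-Recall models, such thresholds would presumably be tuned on validation data so that the respective constraints are satisfied.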
Related papers
- Harnessing Large Language and Vision-Language Models for Robust Out-of-Distribution Detection [11.277049921075026]
Out-of-distribution (OOD) detection has seen significant advancements with zero-shot approaches.
We propose a novel strategy to enhance zero-shot OOD detection performance for both Far-OOD and Near-OOD scenarios.
We introduce novel few-shot prompt tuning and visual prompt tuning to adapt the proposed framework to better align with the target distribution.
arXiv Detail & Related papers (2025-01-09T13:36:37Z)
- Scalable Ensemble Diversification for OOD Generalization and Detection [68.8982448081223]
SED identifies hard training samples on the fly and encourages the ensemble members to disagree on these.
We show how to avoid the expensive computation of exhaustive pairwise disagreements across models that existing methods require.
For OOD generalization, we observe large benefits from diversification in multiple settings, including output-space (classical) ensembles and weight-space ensembles (model soups).
arXiv Detail & Related papers (2024-09-25T10:30:24Z)
- SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation [5.590633742488972]
Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks.
We propose SeTAR, a training-free OOD detection method.
SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm.
Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.
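As a rough, generic illustration of the low-rank weight modification mentioned in the SeTAR entry above, the sketch below replaces a single weight matrix by a truncated-SVD reconstruction. The keep_ratio value, the random stand-in matrix, and the choice of which matrices to modify are assumptions for illustration; SeTAR's actual greedy search over layers and ranks is not reproduced here.

```python
import numpy as np

def low_rank_approx(weight, keep_ratio=0.75):
    """Replace a weight matrix by a truncated-SVD reconstruction.

    Keeps only the leading singular components; the rest are discarded.
    """
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    k = max(1, int(round(keep_ratio * len(s))))
    return (u[:, :k] * s[:k]) @ vt[:k, :]

# Post-hoc modification of a (random, stand-in) weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512))
w_mod = low_rank_approx(w, keep_ratio=0.5)
print(np.linalg.matrix_rank(w_mod))  # at most 256
```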
arXiv Detail & Related papers (2024-06-18T13:55:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of the framework through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Towards Realistic Out-of-Distribution Detection: A Novel Evaluation Framework for Improving Generalization in OOD Detection [14.541761912174799]
This paper presents a novel evaluation framework for Out-of-Distribution (OOD) detection.
It aims to assess the performance of machine learning models in more realistic settings.
arXiv Detail & Related papers (2022-11-20T07:30:15Z)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [12.341250124228859]
We propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets.
Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space.
We empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.
arXiv Detail & Related papers (2022-04-06T03:05:58Z)
- ReAct: Out-of-distribution Detection With Rectified Activations [20.792140933660075]
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance.
One of the primary challenges is that models often produce highly confident predictions on OOD data.
We propose ReAct, a simple and effective technique for reducing model overconfidence on OOD data.
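A minimal sketch of the rectified-activation idea: penultimate-layer activations are clipped at an upper threshold (ReAct uses a high percentile of ID activations) before computing the logits and, here, an energy-style OOD score as one common choice. The gamma-distributed stand-in features, the 90th-percentile setting, and the random classifier weights are illustrative assumptions.

```python
import numpy as np

def react_energy_score(features, w, b, clip_threshold):
    """Energy-style OOD score computed on clipped (rectified) activations.

    features:       penultimate-layer activations, shape (n, d)
    w, b:           weights/bias of the final linear classifier, shapes (d, c), (c,)
    clip_threshold: upper clipping value, e.g. a high percentile of ID activations
    """
    rectified = np.minimum(features, clip_threshold)   # truncate unusually large activations
    logits = rectified @ w + b
    m = logits.max(axis=-1, keepdims=True)              # stabilized log-sum-exp
    return (m + np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))).squeeze(-1)

# Stand-in setup: the threshold would normally be estimated as, e.g., the 90th
# percentile of penultimate activations on in-distribution training data.
rng = np.random.default_rng(0)
id_features = rng.gamma(shape=2.0, scale=1.0, size=(1000, 128))
clip_threshold = np.percentile(id_features, 90)

w = rng.normal(size=(128, 10))
b = np.zeros(10)
test_features = rng.gamma(shape=2.0, scale=1.0, size=(5, 128))
print(react_energy_score(test_features, w, b, clip_threshold))  # higher = more ID-like
```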
arXiv Detail & Related papers (2021-11-24T21:02:07Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- Providing reliability in Recommender Systems through Bernoulli Matrix Factorization [63.732639864601914]
This paper proposes Bernoulli Matrix Factorization (BeMF) to provide both prediction values and reliability values.
BeMF acts on model-based collaborative filtering rather than on memory-based filtering.
The more reliable a prediction is, the less liable it is to be wrong.
arXiv Detail & Related papers (2020-06-05T14:24:27Z)
- Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder [6.767885381740952]
Probabilistic generative models can assign higher likelihoods to certain types of out-of-distribution samples.
We propose Likelihood Regret, an efficient OOD score for VAEs.
arXiv Detail & Related papers (2020-03-06T00:30:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.