Revisiting Likelihood-Based Out-of-Distribution Detection by Modeling Representations
- URL: http://arxiv.org/abs/2504.07793v1
- Date: Thu, 10 Apr 2025 14:30:41 GMT
- Title: Revisiting Likelihood-Based Out-of-Distribution Detection by Modeling Representations
- Authors: Yifan Ding, Arturas Aleksandrauskas, Amirhossein Ahmadian, Jonas Unger, Fredrik Lindsten, Gabriel Eilertsen
- Abstract summary: Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning systems. Likelihood-based deep generative models have historically faced criticism for their unsatisfactory performance in OOD detection. We show that likelihood-based methods can still perform on par with state-of-the-art methods when applied in the representation space of pre-trained encoders.
- Score: 16.317861186815364
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning systems, particularly in safety-critical applications. Likelihood-based deep generative models have historically faced criticism for their unsatisfactory performance in OOD detection, often assigning higher likelihood to OOD data than to in-distribution samples when applied to image data. In this work, we demonstrate that likelihood is not inherently flawed. Rather, several properties of image space prevent likelihood from serving as a valid detection score. Given a sufficiently good likelihood estimator, specifically the probability flow formulation of a diffusion model, we show that likelihood-based methods can still perform on par with state-of-the-art methods when applied in the representation space of pre-trained encoders. The code for our work can be found at https://github.com/limchaos/Likelihood-OOD.git.
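The paper's recipe is to fit a likelihood model to the representations of a frozen pre-trained encoder and flag low-likelihood inputs. A minimal sketch of that recipe follows; a full-covariance Gaussian stands in for the paper's probability-flow likelihood estimator, and the synthetic features and 5%-quantile threshold are illustrative assumptions, not the authors' protocol.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Stand-ins for features from a frozen pre-trained encoder; in practice these
# would come from, e.g., a ViT applied to the training and test images.
feats_train = rng.normal(0.0, 1.0, size=(5000, 64))   # in-distribution (ID)
feats_test  = rng.normal(2.0, 1.5, size=(100, 64))    # candidate OOD

# Fit a density model to the ID representations. The paper uses the
# probability-flow likelihood of a diffusion model; a full-covariance
# Gaussian is the simplest stand-in for such an estimator.
mu = feats_train.mean(axis=0)
cov = np.cov(feats_train, rowvar=False) + 1e-4 * np.eye(64)  # ridge for stability
density = multivariate_normal(mean=mu, cov=cov)

# Likelihood-based OOD score: low log-likelihood under the ID density = OOD.
threshold = np.quantile(density.logpdf(feats_train), 0.05)   # flag lowest 5%
is_ood = density.logpdf(feats_test) < threshold
print(f"flagged {is_ood.mean():.0%} of test features as OOD")
```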
Related papers
- Going Beyond Conventional OOD Detection [0.0]
Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications. We present a unified Approach to Spurious, fine-grained, and Conventional OOD Detection (ASCOOD). Our approach effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes.
arXiv Detail & Related papers (2024-11-16T13:04:52Z)
- Your Classifier Can Be Secretly a Likelihood-Based OOD Detector [17.420727709895736]
We propose Intrinsic Likelihood (INK), which offers rigorous likelihood interpretation to modern discriminative-based classifiers.
INK establishes a new state-of-the-art in a variety of OOD detection setups, including both far-OOD and near-OOD.
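The summary above does not spell out INK's construction. As a flavor of what a likelihood-style score derived from a discriminative classifier looks like, the sketch below uses the classic energy score, a well-known relative of this family, not INK itself.

```python
import torch

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    """Energy-based OOD score: negative logsumexp over class logits.
    Higher values mean less ID-like. INK refines this family with a
    rigorous likelihood interpretation; this is only the classic baseline."""
    return -torch.logsumexp(logits, dim=-1)

# Toy usage: random logits stand in for a trained classifier's outputs.
confident = torch.randn(8, 10) + 5 * torch.eye(10)[torch.randint(10, (8,))]
diffuse = torch.randn(8, 10)
print(energy_score(confident).mean())  # lower (more ID-like)
print(energy_score(diffuse).mean())    # higher (more OOD-like)
```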
arXiv Detail & Related papers (2024-08-09T04:00:53Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test-Time Adaptation framework for Out-of-Distribution Detection (abbr.).
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks; a sketch of the general pattern follows.
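The summary leaves the adaptation rule unspecified. The sketch below shows one hypothetical model-free pattern in this spirit: score test features by distance to a bank of ID features and grow the bank online with confidently in-distribution test samples. The class name, the k-NN scoring, and the fixed acceptance threshold are all assumptions for illustration, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

class OnlineKNNDetector:
    """Model-free sketch: score by distance to a bank of normalized ID
    features, and grow the bank with confidently-ID test samples online."""

    def __init__(self, id_feats: torch.Tensor, k: int = 10, accept: float = 0.3):
        self.bank = F.normalize(id_feats, dim=-1)
        self.k, self.accept = k, accept

    def score(self, feat: torch.Tensor) -> float:
        f = F.normalize(feat, dim=-1)
        d = 1.0 - self.bank @ f                  # cosine distance to the bank
        knn = d.topk(self.k, largest=False).values.mean().item()
        if knn < self.accept:                    # confident ID: adapt the bank
            self.bank = torch.cat([self.bank, f.unsqueeze(0)])
        return knn                               # higher = more OOD-like

det = OnlineKNNDetector(torch.randn(1000, 32))
print(det.score(torch.randn(32)))
```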
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
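A hedged sketch of the forgetting step follows: per-sample loss is used to flag likely memorized atypical training points, which a subsequent fine-tuning pass would exclude. Quantile-based selection stands in for the paper's learned mask, and the pruning variant is omitted.

```python
import torch
import torch.nn.functional as F

def keep_mask(model, loader, quantile=0.95, device="cpu"):
    """Flag the highest-loss ID training samples as candidates for the
    'memorized atypical' set. `model` and `loader` are assumed to be a
    trained classifier and its ID training loader (hypothetical here)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            losses.append(F.cross_entropy(logits, y.to(device), reduction="none"))
    losses = torch.cat(losses)
    # Keep everything below the loss quantile; fine-tuning on only these
    # samples lets the model "forget" the atypical ones.
    return losses <= losses.quantile(quantile)
```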
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
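A minimal sketch of the masking step, assuming zero-filled square patches and a fixed mask ratio; the paper studies several masking and filling strategies, so treat these choices as placeholders.

```python
import torch

def mask_patches(images: torch.Tensor, patch: int = 16, ratio: float = 0.5) -> torch.Tensor:
    """Zero out a random subset of patches, producing 'counterfactual'
    variants of each image for robust fine-tuning."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = torch.rand(b, 1, gh, gw, device=images.device) > ratio
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask

imgs = torch.randn(4, 3, 224, 224)
print(mask_patches(imgs).shape)   # torch.Size([4, 3, 224, 224])
```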
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Free Lunch for Generating Effective Outlier Supervision [46.37464572099351]
We propose an ultra-effective method to generate near-realistic outlier supervision.
Our proposed BayesAug significantly reduces the false positive rate by over 12.50% compared with previous schemes.
arXiv Detail & Related papers (2023-01-17T01:46:45Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
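A hedged sketch of that idea: optimize a single additive pattern so that watermarked ID inputs receive low energy (high logsumexp of logits) under a trained classifier. The energy objective, step count, and learning rate here are illustrative assumptions, not the paper's actual training recipe.

```python
import torch

def learn_watermark(model, id_loader, steps=100, lr=1e-2, device="cpu"):
    """Learn one additive pattern `w` shared across all inputs. `model` is a
    trained classifier and `id_loader` its ID data (hypothetical names)."""
    x0, _ = next(iter(id_loader))
    w = torch.zeros_like(x0[0], device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    batches = iter(id_loader)
    for _ in range(steps):
        try:
            x, _ = next(batches)
        except StopIteration:
            batches = iter(id_loader)
            x, _ = next(batches)
        logits = model(x.to(device) + w)              # watermarked inputs
        energy = -torch.logsumexp(logits, dim=-1)     # lower = more ID-like
        opt.zero_grad()
        energy.mean().backward()                      # push ID energy down
        opt.step()
    return w.detach()
```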
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Understanding Failures in Out-of-Distribution Detection with Deep Generative Models [22.11487118547924]
We prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant.
We highlight the consequences implied by assuming support overlap between in- and out-distributions.
Our results suggest that estimation error is a more plausible explanation than the misalignment between likelihood-based OOD detection and out-distributions of interest.
arXiv Detail & Related papers (2021-07-14T18:00:11Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
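Only the scoring side of this idea is easy to sketch. Assuming an ensemble has already been fit with the paper's artificial-labeling regularization, the snippet below scores each test sample by how much the members contradict one another; the training procedure itself is omitted.

```python
import torch
import torch.nn.functional as F

def disagreement_score(models, x_test):
    """Score each test sample by how much ensemble members contradict each
    other: average KL divergence of each member from the ensemble mean.
    High disagreement marks the samples the ensemble was regularized to
    contradict on, i.e., the OOD ones."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x_test), dim=-1) for m in models])
    mean = probs.mean(dim=0)
    logratio = probs.clamp_min(1e-8).log() - mean.clamp_min(1e-8).log()
    kl = (probs * logratio).sum(dim=-1)   # (members, batch)
    return kl.mean(dim=0)                 # one score per sample; higher = OOD
```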
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
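To make the role of the coupling-layer design concrete, here is a minimal RealNVP-style affine coupling layer in which the binary mask is the knob the paper is pointing at: the mask decides which inputs condition the transform, and therefore which correlations the flow can exploit. The network width and alternating mask below are illustrative.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal RealNVP-style coupling layer. The binary `mask` decides which
    inputs condition the transform of the rest; local (e.g., checkerboard)
    masks push flows toward pixel-level correlations, so coarser masks
    change what the flow learns."""

    def __init__(self, dim: int, mask: torch.Tensor, hidden: int = 64):
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * dim)
        )

    def forward(self, x):
        x_cond = x * self.mask                   # visible half conditions...
        s, t = self.net(x_cond).chunk(2, dim=-1)
        s = torch.tanh(s) * (1 - self.mask)      # ...the hidden half
        t = t * (1 - self.mask)
        y = x * torch.exp(s) + t
        log_det = s.sum(dim=-1)                  # log|det J| for the likelihood
        return y, log_det

dim = 8
layer = AffineCoupling(dim, mask=(torch.arange(dim) % 2))  # alternating mask
y, log_det = layer(torch.randn(4, dim))
print(y.shape, log_det.shape)
```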
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
- Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder [6.767885381740952]
Probabilistic generative models can assign higher likelihoods to certain types of out-of-distribution samples.
We propose Likelihood Regret, an efficient OOD score for VAEs.
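Likelihood Regret compares the fit of a sample under the trained VAE against its fit after the approximate posterior is re-optimized for that single sample. A hedged sketch follows, assuming a VAE object exposing an encoder and a scalar elbo(x) method; adapt the names to your implementation.

```python
import copy
import torch

def likelihood_regret(vae, x, steps=50, lr=1e-3):
    """Regret = ELBO after re-optimizing the encoder for this one sample,
    minus the ELBO of the trained model. Assumes `vae.encoder` and a
    scalar-valued `vae.elbo(x)` exist (hypothetical interface)."""
    base = vae.elbo(x).item()
    adapted = copy.deepcopy(vae)
    opt = torch.optim.Adam(adapted.encoder.parameters(), lr=lr)
    for _ in range(steps):
        loss = -adapted.elbo(x)    # maximize the ELBO for this single input
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted.elbo(x).item() - base   # higher regret = more OOD-like
```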
arXiv Detail & Related papers (2020-03-06T00:30:38Z)