Rethinking Test-time Likelihood: The Likelihood Path Principle and Its
Application to OOD Detection
- URL: http://arxiv.org/abs/2401.04933v1
- Date: Wed, 10 Jan 2024 05:07:14 GMT
- Title: Rethinking Test-time Likelihood: The Likelihood Path Principle and Its
Application to OOD Detection
- Authors: Sicong Huang, Jiawei He, Kry Yik Chau Lui
- Abstract summary: We introduce the likelihood path (LPath) principle, generalizing the likelihood principle.
This narrows the search for informative summary statistics down to the minimal sufficient statistics of VAEs' conditional likelihoods.
The corresponding LPath algorithm demonstrates SOTA performance, even when using simple and small VAEs with poor likelihood estimates.
- Score: 5.747789057967598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While likelihood is attractive in theory, its estimates by deep generative
models (DGMs) are often broken in practice and perform poorly for
out-of-distribution (OOD) detection. Various recent works have started to consider
alternative scores and achieved better performance. However, such recipes do
not come with provable guarantees, nor is it clear that their choices extract
sufficient information.
We attempt to change this by conducting a case study on variational
autoencoders (VAEs). First, we introduce the likelihood path (LPath) principle,
generalizing the likelihood principle. This narrows the search for informative
summary statistics down to the minimal sufficient statistics of VAEs'
conditional likelihoods. Second, introducing new theoretical tools such as nearly
essential support, essential distance and co-Lipschitzness, we obtain
non-asymptotic, provable OOD detection guarantees for a certain distillation of
the minimal sufficient statistics. The corresponding LPath algorithm
demonstrates SOTA performance, even when using simple and small VAEs with poor
likelihood estimates. To the best of our knowledge, this is the first provable
unsupervised OOD method that delivers excellent empirical results, better than
any other VAE-based technique. We use the same model as Xiao et al. (2020),
open sourced from: https://github.com/XavierXiao/Likelihood-Regret
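To convey the flavor of such an algorithm, here is a minimal sketch, assuming (these are this sketch's assumptions, not the paper's stated design) that the summary statistics are the encoder's posterior mean and scale together with the reconstruction error, and that the distilled score is a k-nearest-neighbor distance in that statistic space. The names `lpath_features`, `ood_score`, and the stub `encode`/`decode` are hypothetical stand-ins for a trained VAE:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(64, 8))  # stand-in weights; replace with a trained VAE

def encode(x):
    # Hypothetical encoder returning the posterior mean and log-variance.
    return x @ W_enc, np.zeros((x.shape[0], W_enc.shape[1]))

def decode(z):
    # Hypothetical decoder returning a reconstruction.
    return z @ W_enc.T

def lpath_features(x):
    # Summary statistics of the VAE's conditional likelihoods:
    # posterior mean, posterior scale, and reconstruction error norm.
    mu, logvar = encode(x)
    recon = decode(mu)
    err = np.linalg.norm(x - recon, axis=1, keepdims=True)
    return np.concatenate([mu, np.exp(0.5 * logvar), err], axis=1)

# Fit a k-NN scorer on statistics of in-distribution training data.
x_train = rng.normal(size=(1000, 64))
knn = NearestNeighbors(n_neighbors=5).fit(lpath_features(x_train))

def ood_score(x):
    # Larger mean distance to the training statistics => more OOD.
    dist, _ = knn.kneighbors(lpath_features(x))
    return dist.mean(axis=1)

print(ood_score(rng.normal(size=(3, 64))))           # ID-like inputs
print(ood_score(rng.normal(loc=5.0, size=(3, 64))))  # shifted inputs
```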
Related papers
- Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection [63.93728560200819]
Unsupervised out-of-distribution (U-OOD) detection aims to identify anomalous data samples with a detector trained solely on unlabeled in-distribution (ID) data.
Recent studies have developed various detectors based on DGMs to move beyond likelihood.
Two techniques are applied, specifically a post-hoc prior and dataset entropy-mutual calibration.
Experimental results demonstrate that the Resultant could be a new state-of-the-art U-OOD detector.
arXiv Detail & Related papers (2024-09-05T02:58:13Z)
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206]
We propose a Non-Parametric Test-Time Adaptation framework for Out-Of-Distribution Detection (abbr).
abbr utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate the effectiveness of abbr through comprehensive experiments on multiple OOD detection benchmarks.
arXiv Detail & Related papers (2023-11-28T02:00:47Z)
- FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score [2.9914612342004503]
FLatS is a principled solution for OOD detection based on a likelihood ratio computed in feature space.
We demonstrate that FLatS can serve as a general framework capable of enhancing other OOD detection methods.
Experiments show that FLatS establishes a new SOTA on popular benchmarks; a sketch of a likelihood-ratio score in this style follows this entry.
arXiv Detail & Related papers (2023-10-08T09:16:46Z)
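Purely as a sketch of what a feature-based likelihood-ratio score looks like: in-distribution and reference densities are both approximated with kernel density estimates over synthetic features. The choice of estimator and the uniform stand-in for the out-distribution reference are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
feat_in = rng.normal(size=(500, 16))           # ID features, e.g. penultimate layer
feat_ref = rng.uniform(-4, 4, size=(500, 16))  # stand-in "out" reference features

kde_in = KernelDensity(bandwidth=0.5).fit(feat_in)
kde_ref = KernelDensity(bandwidth=0.5).fit(feat_ref)

def likelihood_ratio_score(feats):
    # log p_in(feat) - log p_out(feat): higher => more in-distribution.
    return kde_in.score_samples(feats) - kde_ref.score_samples(feats)

print(likelihood_ratio_score(rng.normal(size=(3, 16))))          # ID-like
print(likelihood_ratio_score(rng.uniform(-4, 4, size=(3, 16))))  # reference-like
```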
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new way of analyzing the difference in model outputs between in-distribution data and OOD data.
We propose Leveraging Important Neurons (LINe), a novel method for post-hoc out-of-distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Detecting Out-of-distribution Examples via Class-conditional Impressions Reappearing [30.938412222724608]
Out-of-distribution (OOD) detection aims at enhancing standard deep neural networks to distinguish anomalous inputs from original training data.
Due to privacy and security concerns, auxiliary data tends to be impractical in real-world scenarios.
We propose a data-free method that requires no training on natural data, called Class-Conditional Impressions Reappearing (C2IR).
arXiv Detail & Related papers (2023-03-17T02:55:08Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold; a sketch follows this entry.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
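The ATC recipe is simple enough to sketch. Assuming the score is the model's confidence (other scores work too), the threshold is chosen so that the fraction of source points below it matches the source error rate, and target accuracy is predicted as the fraction of target points above it. The confidences and correctness labels below are synthetic:

```python
import numpy as np

def learn_atc_threshold(conf_src, correct_src):
    # Choose t so that the fraction of source points with confidence
    # below t equals the observed source error rate.
    err = 1.0 - correct_src.mean()
    return np.quantile(conf_src, err)

def predict_target_accuracy(conf_tgt, t):
    # ATC estimate: fraction of unlabeled target points above threshold.
    return (conf_tgt >= t).mean()

rng = np.random.default_rng(0)
conf_src = rng.beta(5, 2, size=2000)       # synthetic source confidences
correct_src = rng.random(2000) < conf_src  # toy correctness labels
t = learn_atc_threshold(conf_src, correct_src)
conf_tgt = rng.beta(4, 3, size=2000)       # shifted target confidences
print(predict_target_accuracy(conf_tgt, t))
```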
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection [72.35532598131176]
We propose an unsupervised method to detect OOD samples using a k-NN density estimate.
We leverage a recent insight about label smoothing, which we call the Label Smoothed Embedding Hypothesis.
We show that our proposal outperforms many OOD baselines and also provide new finite-sample high-probability statistical results; a k-NN scoring sketch follows this entry.
arXiv Detail & Related papers (2021-02-09T21:04:44Z)
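A minimal sketch of k-NN density-estimate scoring: the synthetic embeddings below stand in for features of a network trained with label smoothing, and the use of mean neighbor distance as the density proxy is this sketch's choice, not the paper's exact estimator.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
emb_train = rng.normal(size=(2000, 32))  # stand-in embeddings of ID training data

knn = NearestNeighbors(n_neighbors=10).fit(emb_train)

def knn_ood_score(emb):
    # k-NN density proxy: mean distance to the k nearest training
    # embeddings; larger => lower density => more likely OOD.
    dist, _ = knn.kneighbors(emb)
    return dist.mean(axis=1)

print(knn_ood_score(rng.normal(size=(3, 32))))           # ID-like
print(knn_ood_score(rng.normal(loc=6.0, size=(3, 32))))  # far from training set
```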
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Density of States Estimation for Out-of-Distribution Detection [69.90130863160384]
DoSE is the density of states estimator: it scores an input by how typical the model's summary statistics for that input are relative to the training data.
We demonstrate DoSE's state-of-the-art performance against other unsupervised OOD detectors; a sketch of this style of scoring follows this entry.
arXiv Detail & Related papers (2020-06-16T16:06:25Z)
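As a sketch of density-of-states-style scoring, assuming the summary statistics are scalar quantities computed per sample by a trained DGM (e.g., a log-likelihood and a KL term; the numbers below are synthetic) and that each statistic gets its own one-dimensional kernel density estimate:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Columns = per-sample statistics of a trained DGM on its training data,
# e.g. (log-likelihood, KL term). The numbers here are synthetic.
stats_train = np.column_stack([rng.normal(-100, 5, 4000),
                               rng.normal(20, 2, 4000)])

# One 1-D density estimator per statistic ("density of states" style).
kdes = [KernelDensity(bandwidth=1.0).fit(col[:, None]) for col in stats_train.T]

def typicality_score(stats):
    # Sum of per-statistic log-densities; low => atypical => likely OOD.
    return sum(kde.score_samples(col[:, None]) for kde, col in zip(kdes, stats.T))

print(typicality_score(np.array([[-100.0, 20.0],    # typical statistics
                                 [-60.0, 35.0]])))  # atypical statistics
```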
- Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder [6.767885381740952]
Probabilistic generative models can assign higher likelihoods to certain types of out-of-distribution samples than to in-distribution data.
We propose Likelihood Regret, an efficient OOD score for VAEs; a sketch follows this entry.
arXiv Detail & Related papers (2020-03-06T00:30:38Z)
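A minimal sketch of a likelihood-regret-style score for a VAE: the regret is the gain in estimated ELBO from adapting the approximate posterior to a single test input. The original method optimizes the encoder network per input; this sketch optimizes the per-sample posterior parameters directly, and the linear encoder/decoder are stand-ins for a trained model:

```python
import torch

D, Z = 64, 8
dec = torch.nn.Linear(Z, D)      # stand-in decoder; replace with a trained VAE
enc = torch.nn.Linear(D, 2 * Z)  # stand-in encoder -> (mu, logvar)

def elbo(x, mu, logvar, n=8):
    # Monte Carlo ELBO with a Gaussian likelihood (up to an additive constant).
    std = (0.5 * logvar).exp()
    z = mu + std * torch.randn(n, Z)               # reparameterized samples
    recon = dec(z)
    log_px_z = -((recon - x) ** 2).sum(-1).mean()  # reconstruction term
    kl = 0.5 * (mu ** 2 + std ** 2 - logvar - 1).sum()
    return log_px_z - kl

def likelihood_regret(x, steps=100, lr=1e-2):
    mu0, logvar0 = enc(x).detach().chunk(2, -1)
    base = elbo(x, mu0, logvar0)                # ELBO under the trained encoder
    mu = mu0.clone().requires_grad_(True)
    logvar = logvar0.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-elbo(x, mu, logvar)).backward()       # maximize the per-sample ELBO
        opt.step()
    return (elbo(x, mu, logvar) - base).item()  # large regret => likely OOD

print(likelihood_regret(torch.randn(D)))
```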