Likelihood Regret: An Out-of-Distribution Detection Score For
Variational Auto-encoder
- URL: http://arxiv.org/abs/2003.02977v3
- Date: Sat, 10 Oct 2020 21:58:14 GMT
- Title: Likelihood Regret: An Out-of-Distribution Detection Score For
Variational Auto-encoder
- Authors: Zhisheng Xiao, Qing Yan, Yali Amit
- Abstract summary: Probabilistic generative models can assign higher likelihoods to certain types of out-of-distribution samples.
We propose Likelihood Regret, an efficient OOD score for VAEs.
- Score: 6.767885381740952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep probabilistic generative models enable modeling the likelihoods of very
high-dimensional data. An important application of generative modeling should
be the ability to detect out-of-distribution (OOD) samples by setting a
threshold on the likelihood. However, some recent studies show that
probabilistic generative models can, in some cases, assign higher likelihoods
to certain types of OOD samples, making OOD detection rules based on a
likelihood threshold problematic. To address this issue, several OOD detection
methods have been proposed for deep generative models. In this paper, we make
the observation that many of these methods fail when applied to generative
models based on Variational Auto-encoders (VAEs). As an alternative, we propose
Likelihood Regret, an efficient OOD score for VAEs. We benchmark our proposed
method against existing approaches, and empirical results suggest that our
method achieves the best overall OOD detection performance when applied to VAEs.
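To make the idea concrete, below is a minimal PyTorch sketch of the Likelihood Regret computation: the score for a test input is the gain in ELBO obtained by re-optimizing the encoder for that single input while the decoder stays frozen, on the intuition that the trained encoder is already near-optimal for in-distribution data but not for OOD data. The model class, hyperparameters, and helper names are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the Likelihood Regret idea, assuming a small
# fully-connected VAE with a Bernoulli decoder. All names are illustrative.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 400), nn.ReLU(),
                                 nn.Linear(400, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def elbo(model, x):
    """Single-sample ELBO with a Bernoulli decoder (higher is better)."""
    logits, mu, logvar = model(x)
    rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec - kl

def likelihood_regret(trained_vae, x, steps=100, lr=1e-3):
    """LR(x) = ELBO after re-optimizing the encoder for this one input,
    minus the ELBO under the trained encoder. ID inputs gain little from
    re-optimization; OOD inputs gain a lot, so a large LR flags OOD."""
    with torch.no_grad():
        base = elbo(trained_vae, x)
    per_sample = copy.deepcopy(trained_vae)
    for p in per_sample.dec.parameters():   # the decoder stays frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(per_sample.enc.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -elbo(per_sample, x)         # maximize the ELBO for this input
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (elbo(per_sample, x) - base).item()

# Usage: score = likelihood_regret(vae, x.view(1, 784)); threshold on score.
```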
Related papers
- Out-of-Distribution Detection with a Single Unconditional Diffusion Model [54.15132801131365]
Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
arXiv Detail & Related papers (2024-05-20T08:54:03Z)
- Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders [56.47577824219207]
In this paper, we unveil the hidden costs associated with intrusive fine-tuning techniques.
We introduce a new model reprogramming approach for fine-tuning, which we name Reprogrammer.
Our empirical evidence reveals that Reprogrammer is less intrusive and yields superior downstream models.
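The entry does not spell out the mechanism, but model reprogramming in general keeps the pretrained network frozen and learns only a small input transformation plus an output mapping; a hedged sketch of that generic recipe follows. Reprogrammer's exact design may differ, and all names here are illustrative.

```python
# A sketch of the generic model-reprogramming recipe: the backbone is frozen
# ("less intrusive" than fine-tuning), and only an additive input pattern and
# a linear label mapping are trained.
import torch
import torch.nn as nn

class Reprogram(nn.Module):
    def __init__(self, backbone, in_shape=(3, 224, 224),
                 n_backbone_out=1000, n_classes=10):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():   # no backbone updates
            p.requires_grad_(False)
        self.delta = nn.Parameter(torch.zeros(1, *in_shape))  # input pattern
        self.head = nn.Linear(n_backbone_out, n_classes)      # label mapping

    def forward(self, x):
        return self.head(self.backbone(x + self.delta))
```

Only `delta` and `head` receive gradients, so the pretrained representation (and whatever OOD behavior it carries) is left intact.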
arXiv Detail & Related papers (2024-03-16T04:19:48Z)
- Unsupervised Out-of-Distribution Detection by Restoring Lossy Inputs with Variational Autoencoder [3.498694457257263]
We propose a novel VAE-based score called Error Reduction (ER) for OOD detection.
ER is based on a VAE that takes a lossy version of the training set as inputs and the original set as targets.
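A minimal sketch of the restoration setup described above, assuming a simple downsample-and-upsample degradation and a score equal to the drop in reconstruction error; the paper's actual lossy transform, model, and score definition may differ.

```python
# A sketch of scoring by error reduction: a model trained to map a lossy view
# of x back to x should restore ID inputs well (large error reduction) and
# OOD inputs poorly (small reduction). The transform and the sign convention
# of the score are assumptions made for illustration.
import torch
import torch.nn.functional as F

def lossy(x):
    """Degrade a (N, C, H, W) batch by 4x downsampling then upsampling."""
    small = F.avg_pool2d(x, kernel_size=4)
    return F.interpolate(small, scale_factor=4, mode="nearest")

def error_reduction_score(restore, x):
    """Per-sample drop in error from the lossy input to its restoration."""
    x_l = lossy(x)
    with torch.no_grad():
        x_hat = restore(x_l)  # restore: any trained lossy->clean VAE
    per_sample = lambda a, b: F.mse_loss(a, b, reduction="none").flatten(1).mean(1)
    return per_sample(x, x_l) - per_sample(x, x_hat)
```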
arXiv Detail & Related papers (2023-09-05T09:42:15Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
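A hedged sketch of the masking step described above: atypical memorized samples are flagged by their loss, then excluded during fine-tuning so the model forgets them. The loss-quantile threshold is an assumption made for illustration.

```python
# Flag high-loss (atypical, memorized) training samples, then fine-tune with
# them masked out. A minimal sketch, not the authors' implementation.
import torch
import torch.nn.functional as F

def atypical_mask(model, x, y, quantile=0.95):
    """Boolean mask, True for samples whose loss is in the top tail."""
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return losses > torch.quantile(losses, quantile)

def masked_finetune_step(model, opt, x, y, mask):
    """One fine-tuning step that ignores the masked (atypical) samples."""
    keep = ~mask
    opt.zero_grad()
    loss = F.cross_entropy(model(x[keep]), y[keep])
    loss.backward()
    opt.step()
    return loss.item()
```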
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of the original data, and the model's detection capability is substantially improved after watermarking.
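One way to read this is as learning a single static pattern, added to every input, that makes a chosen score function more separable on in-distribution data. The sketch below uses the negative free energy as that score and a simplified objective that only raises ID scores; the paper's full objective is richer, so treat this as an assumption-laden illustration.

```python
# Learn one static watermark w so that energy_score(model(x + w)) rises on
# ID data; at test time, score x + w and threshold. Simplified objective.
import torch

def energy_score(logits):
    """Negative free energy; tends to be higher on in-distribution inputs."""
    return torch.logsumexp(logits, dim=-1)

def learn_watermark(model, loader, in_shape=(3, 32, 32), steps=1000, lr=1e-2):
    for p in model.parameters():       # the classifier itself is not updated
        p.requires_grad_(False)
    w = torch.zeros(1, *in_shape, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(it)
        except StopIteration:
            it = iter(loader)
            x, _ = next(it)
        opt.zero_grad()
        loss = -energy_score(model(x + w)).mean()   # raise ID scores
        loss.backward()
        opt.step()
    return w.detach()
```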
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
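A hedged sketch of this transductive recipe: each ensemble member fits the training data while also fitting its own arbitrary labels on the unlabeled test batch, so members can only agree where the training data pins down the answer; per-point disagreement then flags OOD samples. The loss weighting and artificial labeling scheme here are assumptions.

```python
# Fit K members, each with a different random labeling of the test batch,
# then score test points by cross-member disagreement. A minimal sketch.
import copy
import torch
import torch.nn.functional as F

def fit_member(base, x_tr, y_tr, x_te, seed, steps=200, lam=0.5, n_classes=10):
    torch.manual_seed(seed)
    model = copy.deepcopy(base)
    y_art = torch.randint(0, n_classes, (len(x_te),))  # member-specific labels
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = (F.cross_entropy(model(x_tr), y_tr)
                + lam * F.cross_entropy(model(x_te), y_art))
        loss.backward()
        opt.step()
    return model

def disagreement(models, x_te):
    """Variance of class probabilities across members; high values flag OOD."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x_te), dim=-1) for m in models])
    return probs.var(dim=0).sum(dim=-1)
```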
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices [8.611328447624679]
Deep neural networks yield confident but incorrect predictions when presented with out-of-distribution examples.
In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and the predicted class.
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates.
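A minimal sketch of the Gram-matrix statistics described above: channel-wise Gram matrices summarize a layer's activity pattern, min/max bounds on their entries are recorded on in-distribution data, and a test input is scored by how far its Gram values fall outside those bounds. For brevity this uses one layer, first-order Gram matrices only, and class-agnostic bounds, which simplifies the method (the paper keeps per-class, per-layer statistics).

```python
# Gram-matrix deviation scoring, simplified to one layer and global bounds.
import torch

def gram(feat):
    """Channel Gram matrix of a (N, C, H, W) feature map, one row per sample."""
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return (f @ f.transpose(1, 2)).reshape(n, -1)   # (N, C*C)

def fit_bounds(grams):
    """Entry-wise min/max over in-distribution training features."""
    return grams.min(dim=0).values, grams.max(dim=0).values

def deviation(g, lo, hi, eps=1e-8):
    """Total relative overshoot of the recorded ranges; near 0 for ID inputs."""
    below = torch.clamp(lo - g, min=0) / (lo.abs() + eps)
    above = torch.clamp(g - hi, min=0) / (hi.abs() + eps)
    return (below + above).sum(dim=1)
```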
arXiv Detail & Related papers (2019-12-28T19:44:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.