Detecting Out-of-distribution Samples via Variational Auto-encoder with
Reliable Uncertainty Estimation
- URL: http://arxiv.org/abs/2007.08128v3
- Date: Mon, 1 Nov 2021 05:50:15 GMT
- Title: Detecting Out-of-distribution Samples via Variational Auto-encoder with
Reliable Uncertainty Estimation
- Authors: Xuming Ran, Mingkun Xu, Lingrui Mei, Qi Xu, Quanying Liu
- Abstract summary: Variational autoencoders (VAEs) are influential generative models with rich representation capabilities.
VAE models have a weakness: they assign a higher likelihood to out-of-distribution (OOD) inputs than to in-distribution (ID) inputs.
In this study, we propose an improved noise contrastive prior (INCP) that can be integrated into the encoder of a VAE; the resulting model is called INCPVAE.
- Score: 5.430048915427229
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational autoencoders (VAEs) are influential generative models with rich
representation capabilities stemming from their deep neural network architecture
and Bayesian method. However, VAE models have a weakness: they assign a higher
likelihood to out-of-distribution (OOD) inputs than to in-distribution (ID)
inputs. To address this problem, reliable uncertainty estimation is considered
critical for an in-depth understanding of OOD inputs. In this study, we propose
an improved noise contrastive prior (INCP) that can be integrated into the
encoder of a VAE, yielding a model we call INCPVAE. INCP is scalable, trainable,
and compatible with VAEs, and it inherits the merits of the noise contrastive
prior for uncertainty estimation. Experiments on various datasets demonstrate
that, compared to standard VAEs, our model provides superior uncertainty
estimates for OOD data and is robust in anomaly detection tasks. The INCPVAE
model obtains reliable uncertainty estimation for OOD inputs and solves the OOD
problem in VAE models.
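Below is a minimal, illustrative PyTorch sketch of the idea described in the abstract: a VAE encoder regularized with a noise-contrastive term so that noise-perturbed inputs receive a broad (high-uncertainty) posterior, and that uncertainty is then used as an OOD score at test time. The architecture sizes, the noise scale `sigma_noise`, the weight `lambda_ncp`, and the standard-normal target for the noisy posterior are assumptions made for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Encoder q(z|x) outputting the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)

def incp_style_loss(encoder, decoder, x, sigma_noise=0.3, lambda_ncp=1.0):
    # Standard ELBO terms on the in-distribution batch x (values in [0, 1]).
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = decoder(z)  # decoder is assumed to map z to Bernoulli means in [0, 1]
    rec_loss = F.binary_cross_entropy(recon, x, reduction="none").sum(dim=1)
    kl_id = kl_to_standard_normal(mu, logvar)

    # Noise-contrastive term: noise-perturbed inputs should map to a broad,
    # uninformative posterior, i.e. high encoder uncertainty.
    x_noisy = (x + sigma_noise * torch.randn_like(x)).clamp(0.0, 1.0)
    mu_n, logvar_n = encoder(x_noisy)
    kl_ood = kl_to_standard_normal(mu_n, logvar_n)

    return (rec_loss + kl_id + lambda_ncp * kl_ood).mean()

@torch.no_grad()
def uncertainty_score(encoder, x):
    # OOD score: total posterior variance; larger values mean the encoder is
    # less certain about x, suggesting it is out-of-distribution.
    _, logvar = encoder(x)
    return logvar.exp().sum(dim=1)
```

At test time, inputs whose `uncertainty_score` exceeds a threshold calibrated on held-out ID data would be flagged as OOD.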
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
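As a toy illustration of that corrupted-input perspective (not the paper's exact recipe), one could score an input by its confidence averaged over the clean input and a few corrupted copies; the particular corruptions, the maximum-softmax-probability score, and the averaging below are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def corruptions(x, sigma=0.1):
    # Two simple corruptions of an image batch x (N, C, H, W) with values in [0, 1]:
    # additive Gaussian noise and a mild blur.
    noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
    blurred = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    return [noisy, blurred]

@torch.no_grad()
def expanded_msp_score(classifier, x):
    # Maximum-softmax-probability confidence averaged over the clean input and its
    # corrupted copies; ID inputs tend to keep high confidence under mild
    # corruption, while OOD inputs often do not.
    views = [x] + corruptions(x)
    confidences = [F.softmax(classifier(v), dim=1).max(dim=1).values for v in views]
    return torch.stack(confidences, dim=0).mean(dim=0)
```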
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Rethinking Out-of-Distribution Detection on Imbalanced Data Distribution [38.844580833635725]
We present a training-time regularization technique to mitigate the bias and boost imbalanced OOD detectors across architecture designs.
Our method translates into consistent improvements on the representative CIFAR10-LT, CIFAR100-LT, and ImageNet-LT benchmarks.
arXiv Detail & Related papers (2024-07-23T12:28:59Z)
- Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained.
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available.
arXiv Detail & Related papers (2023-11-14T08:05:02Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method in which, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
- Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Bigeminal Priors Variational auto-encoder [5.430048915427229]
Variational auto-encoders (VAEs) are an influential and widely used class of likelihood-based generative models in unsupervised learning.
We introduce a new model, namely Bigeminal Priors Variational auto-encoder (BPVAE), to address this phenomenon.
BPVAE learns the features of two datasets, assigning a higher likelihood to the training dataset than to the simple dataset.
arXiv Detail & Related papers (2020-10-05T07:10:52Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with small adversarial perturbations.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
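A rough sketch of the general recipe that summary suggests (adversarial outlier exposure), using single-step perturbations; the perturbation size `epsilon`, the uniform-target loss, and the weight `lambda_oe` are illustrative assumptions rather than ALOE's exact training procedure.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, loss_fn, epsilon=8.0 / 255.0):
    # Single-step (FGSM-style) perturbation that increases loss_fn(model(x)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv))
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def aloe_style_loss(model, x_in, y_in, x_out, lambda_oe=0.5):
    num_classes = model(x_in[:1]).shape[1]
    uniform = torch.full((1, num_classes), 1.0 / num_classes, device=x_in.device)

    def dist_from_uniform(logits):
        # KL(uniform || softmax(logits)); large when predictions are far from uniform.
        return F.kl_div(F.log_softmax(logits, dim=1),
                        uniform.expand(logits.shape[0], -1),
                        reduction="batchmean")

    # Adversarial inliers: perturb to maximize the classification loss, then train on them.
    x_in_adv = fgsm_perturb(model, x_in, lambda logits: F.cross_entropy(logits, y_in))
    ce = F.cross_entropy(model(x_in_adv), y_in)

    # Adversarial outliers: perturb them away from a uniform prediction, then
    # train the model to predict uniformly on the perturbed outliers.
    x_out_adv = fgsm_perturb(model, x_out, dist_from_uniform)
    oe = dist_from_uniform(model(x_out_adv))

    return ce + lambda_oe * oe
```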
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
- Uncertainty-Based Out-of-Distribution Classification in Deep
Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety-critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)
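The UBOOD entry above relies on ensemble-based estimators; below is a minimal sketch of that kind of signal, where disagreement between ensemble members' value estimates flags unfamiliar states. The ensemble size, the variance-based disagreement measure, and the fixed threshold are illustrative assumptions, not the framework's exact formulation.

```python
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    """An ensemble of small Q-networks; member disagreement serves as epistemic uncertainty."""
    def __init__(self, state_dim, num_actions, num_members=5, hidden=128):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, num_actions))
            for _ in range(num_members)
        ])

    def forward(self, state):
        # Returns Q-value estimates of shape (num_members, batch, num_actions).
        return torch.stack([member(state) for member in self.members], dim=0)

@torch.no_grad()
def epistemic_uncertainty(ensemble, state):
    q = ensemble(state)                 # (M, B, A)
    return q.var(dim=0).mean(dim=1)     # (B,): mean disagreement across actions

def is_ood(ensemble, state, threshold=0.5):
    # States whose ensemble disagreement exceeds a threshold calibrated on
    # in-distribution experience are classified as out-of-distribution.
    return epistemic_uncertainty(ensemble, state) > threshold
```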
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.