Posterior Consistency for Missing Data in Variational Autoencoders
- URL: http://arxiv.org/abs/2310.16648v1
- Date: Wed, 25 Oct 2023 13:56:02 GMT
- Title: Posterior Consistency for Missing Data in Variational Autoencoders
- Authors: Timur Sudak, Sebastian Tschiatschek
- Abstract summary: We consider the problem of learning Variational Autoencoders (VAEs) from data with missing values.
We propose an approach for regularizing an encoder's posterior distribution which promotes this consistency.
This improved performance can be observed for many classes of VAEs including VAEs equipped with normalizing flows.
- Score: 11.18081298867526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning Variational Autoencoders (VAEs), i.e., a
type of deep generative model, from data with missing values. Such data is
omnipresent in real-world applications of machine learning because complete
data is often impossible or too costly to obtain. We particularly focus on
improving a VAE's amortized posterior inference, i.e., the encoder, which in
the case of missing data can be susceptible to learning inconsistent posterior
distributions regarding the missingness. To this end, we provide a formal
definition of posterior consistency and propose an approach for regularizing an
encoder's posterior distribution which promotes this consistency. We observe
that the proposed regularization suggests a different training objective than
that typically considered in the literature when facing missing values.
Furthermore, we empirically demonstrate that our regularization leads to
improved performance in missing value settings in terms of reconstruction
quality and downstream tasks utilizing uncertainty in the latent space. This
improved performance can be observed for many classes of VAEs including VAEs
equipped with normalizing flows.
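The abstract describes regularizing the encoder so that the posteriors it infers are consistent across different missingness patterns. The paper's exact objective is not reproduced in this listing; as an illustrative sketch only (not the authors' implementation), one way to form such a consistency penalty is a symmetrized KL divergence between the diagonal-Gaussian posteriors the encoder produces for two maskings of the same sample. The `encoder` interface, the zero-imputation of missing entries, and all names below are assumptions.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ),
    summed over the latent dimensions (last axis)."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0,
        axis=-1,
    )

def consistency_penalty(encoder, x, mask_a, mask_b):
    """Penalize disagreement between the posteriors inferred from two
    missingness masks of the same sample x.

    `encoder(x_imputed, mask)` is assumed to return the mean and log-variance
    of a diagonal-Gaussian posterior; missing entries are zero-imputed here.
    """
    mu_a, lv_a = encoder(np.where(mask_a, x, 0.0), mask_a)
    mu_b, lv_b = encoder(np.where(mask_b, x, 0.0), mask_b)
    # Symmetrize so neither masking is treated as the reference.
    return 0.5 * (
        kl_diag_gaussians(mu_a, lv_a, mu_b, lv_b)
        + kl_diag_gaussians(mu_b, lv_b, mu_a, lv_a)
    )
```

Added to the training objective with a weight, a term of this kind pushes the encoder toward posteriors that agree regardless of which entries happen to be observed, which is the consistency notion the abstract refers to.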
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing raw data for replay is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Leveraging variational autoencoders for multiple data imputation [0.5156484100374059]
We investigate the ability of deep models, namely variational autoencoders (VAEs), to account for uncertainty in missing data through multiple imputation strategies.
We find that VAEs provide poor empirical coverage of missing data, with underestimated uncertainty and overconfident imputations.
To overcome this, we employ $\beta$-VAEs, which, viewed from a generalized Bayes framework, provide robustness to model misspecification.
arXiv Detail & Related papers (2022-09-30T08:58:43Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544]
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
arXiv Detail & Related papers (2021-05-31T10:26:32Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of the inconsistency between a VAE's encoder and its own generative model for the learned representations, and the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z)
- Tomographic Auto-Encoder: Unsupervised Bayesian Recovery of Corrupted Data [4.725669222165439]
We propose a new probabilistic method for unsupervised recovery of corrupted data.
Given a large ensemble of degraded samples, our method recovers accurate posteriors of clean values.
We test our model in a data recovery task under the common setting of missing values and noise.
arXiv Detail & Related papers (2020-06-30T16:18:16Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We study a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.