Improving Variational Autoencoders with Density Gap-based Regularization
- URL: http://arxiv.org/abs/2211.00321v1
- Date: Tue, 1 Nov 2022 08:17:10 GMT
- Title: Improving Variational Autoencoders with Density Gap-based Regularization
- Authors: Jianfei Zhang, Jun Bai, Chenghua Lin, Yanmeng Wang, Wenge Rong
- Abstract summary: Variational autoencoders (VAEs) are a powerful unsupervised learning framework in NLP for latent representation learning and latent-directed generation.
In practice, optimizing the ELBo often leads the posterior distributions of all samples to converge to the same degenerate local optimum, namely posterior collapse or KL vanishing.
We introduce new training objectives to tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution.
- Score: 16.770753948524167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) are a powerful unsupervised learning
framework in NLP for latent representation learning and latent-directed
generation. The classic optimization goal of VAEs is to maximize the Evidence
Lower Bound (ELBo), which consists of a conditional likelihood for generation
and a negative Kullback-Leibler (KL) divergence for regularization. In
practice, optimizing the ELBo often leads the posterior distributions of all
samples to converge to the same degenerate local optimum, namely posterior
collapse or KL vanishing. Effective methods have been proposed to prevent
posterior collapse in VAEs, but we observe that they in essence trade off
posterior collapse against the hole problem, i.e., a mismatch between the
aggregated posterior distribution and the prior distribution. To this end, we
introduce new training objectives that tackle both problems through a novel
regularization based on
the probabilistic density gap between the aggregated posterior distribution and
the prior distribution. Through experiments on language modeling, latent space
visualization and interpolation, we show that our proposed method can solve
both problems effectively and thus outperforms the existing methods in
latent-directed generation. To the best of our knowledge, we are the first to
jointly solve the hole problem and posterior collapse.
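For context, the objective referenced above is the standard per-sample ELBo, and the hole problem is usually stated in terms of the aggregated posterior. The following is a sketch of those standard definitions in our own notation, not equations copied from the paper:

```latex
% Per-sample ELBo: a reconstruction term plus a negative KL regularizer.
\log p_\theta(x) \;\ge\; \mathrm{ELBo}(x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)

% Aggregated posterior over a dataset of N samples; the hole problem is a
% mismatch q_\phi(z) \neq p(z), i.e., prior regions that q_\phi(z) never covers.
q_\phi(z) = \frac{1}{N} \sum_{i=1}^{N} q_\phi(z \mid x_i)
```

The abstract does not spell out the exact form of the density-gap regularization, so the sketch below is only illustrative: it penalizes the squared gap between a mini-batch Monte-Carlo estimate of the aggregated posterior's log-density and the prior's log-density at sampled latents. The helper names (`aggregated_posterior_log_prob`, `density_gap_penalty`) are ours, not the paper's.

```python
import torch
import torch.distributions as D

def aggregated_posterior_log_prob(z, mu, logvar):
    """Estimate log q(z) by treating the mini-batch as an equal-weight
    mixture of the per-sample Gaussian posteriors q(z | x_i)."""
    # z, mu, logvar: [B, d]
    comp = D.Normal(mu.unsqueeze(0), (0.5 * logvar).exp().unsqueeze(0))  # [1, B, d]
    log_q = comp.log_prob(z.unsqueeze(1)).sum(-1)                        # [B, B]
    return torch.logsumexp(log_q, dim=1) - torch.log(torch.tensor(float(mu.size(0))))

def density_gap_penalty(z, mu, logvar):
    """Squared gap between log q(z) and log p(z) at the sampled latents."""
    log_pz = D.Normal(torch.zeros_like(z), torch.ones_like(z)).log_prob(z).sum(-1)
    log_qz = aggregated_posterior_log_prob(z, mu, logvar)
    return ((log_qz - log_pz) ** 2).mean()
```

In a training loop this penalty would simply be added to the usual ELBo terms, e.g. `loss = recon + kl + lam * density_gap_penalty(z, mu, logvar)`; again, this illustrates the density-gap idea rather than reproducing the paper's objective.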
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - How to train your VAE [0.0]
Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning.
This paper explores the interpretation of the Kullback-Leibler (KL) divergence, a critical component of the Evidence Lower Bound (ELBO).
The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term, and employs a PatchGAN discriminator to enhance texture realism.
arXiv Detail & Related papers (2023-09-22T19:52:28Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior of latent variables via amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z) - Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z) - A Batch Normalized Inference Network Keeps the KL Vanishing Away [35.40781000297285]
The Variational Autoencoder (VAE) is widely used to approximate a model's posterior on latent variables.
VAEs often converge to a degenerate local optimum known as "posterior collapse" (a minimal sketch of the batch-normalization idea appears after this list).
arXiv Detail & Related papers (2020-04-27T05:20:01Z) - Discrete Variational Attention Models for Language Generation [51.88612022940496]
We propose a discrete variational attention model with a categorical distribution over the attention mechanism, owing to the discrete nature of language.
Thanks to the property of discreteness, the training of our proposed approach does not suffer from posterior collapse.
arXiv Detail & Related papers (2020-04-21T05:49:04Z)
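The batch-normalized inference network above is one concrete recipe against KL vanishing; below is a minimal sketch of our reading of that idea, where the fixed scale `gamma` and the module name `BNPosteriorMean` are our choices rather than the paper's code:

```python
import torch
import torch.nn as nn

class BNPosteriorMean(nn.Module):
    """Batch-normalize the encoder's posterior means with a fixed scale.

    Pinning the batch statistics of mu near mean 0 and variance gamma^2
    keeps the expected Gaussian KL term bounded away from zero."""

    def __init__(self, latent_dim: int, gamma: float = 0.5):
        super().__init__()
        self.bn = nn.BatchNorm1d(latent_dim, affine=False)  # no learned scale/shift
        self.gamma = gamma

    def forward(self, mu: torch.Tensor) -> torch.Tensor:
        # mu: [B, d] raw posterior means from the encoder
        return self.gamma * self.bn(mu)
```

The design choice worth noting is that the scale is fixed rather than learned: if the network could learn it, it could drive the scale toward zero and collapse the KL term again.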