Coupled Variational Autoencoder
- URL: http://arxiv.org/abs/2306.02565v1
- Date: Mon, 5 Jun 2023 03:36:31 GMT
- Title: Coupled Variational Autoencoder
- Authors: Xiaoran Hao, Patrick Shafto
- Abstract summary: We propose the Coupled Variational Auto-Encoder (C-VAE), which formulates the VAE problem as one of Optimal Transport (OT) between the prior and data distributions.
The C-VAE allows greater flexibility in priors and natural resolution of the prior hole problem.
We show that the C-VAE outperforms alternatives including VAE, WAE, and InfoVAE in fidelity to the data, quality of the latent representation, and in quality of generated samples.
- Score: 6.599344783327053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational auto-encoders are powerful probabilistic models for generative
tasks, but they suffer from generating low-quality samples, a failure caused by
holes in the prior. We propose the Coupled Variational Auto-Encoder (C-VAE),
which formulates the VAE problem as one of Optimal Transport (OT) between the
prior and data distributions. The C-VAE allows greater flexibility in priors
and natural resolution of the prior hole problem by enforcing coupling between
the prior and the data distribution and enables flexible optimization through
the primal, dual, and semi-dual formulations of entropic OT. Simulations on
synthetic and real data show that the C-VAE outperforms alternatives including
VAE, WAE, and InfoVAE in fidelity to the data, quality of the latent
representation, and in quality of generated samples.
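To make the entropic OT machinery concrete, the following is a minimal log-domain Sinkhorn sketch that couples prior samples with encoded data points; the squared-Euclidean cost, uniform marginals, and regularization strength `eps` are illustrative assumptions, not the authors' implementation. The dual potentials `f` and `g` correspond to the dual formulation mentioned in the abstract, and the returned plan to the primal.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_coupling(z_prior, z_data, eps=0.1, n_iters=200):
    """Entropic-OT coupling between prior samples and encoded data points.

    Generic Sinkhorn sketch (not the C-VAE authors' code): returns an
    (n, m) transport plan whose marginals are uniform over both sample sets.
    """
    # Assumed ground cost: squared Euclidean distance between the point clouds.
    cost = ((z_prior[:, None, :] - z_data[None, :, :]) ** 2).sum(-1)
    n, m = cost.shape
    log_a, log_b = np.full(n, -np.log(n)), np.full(m, -np.log(m))
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):
        # Log-domain dual updates for numerical stability.
        f = eps * (log_a - logsumexp((g[None, :] - cost) / eps, axis=1))
        g = eps * (log_b - logsumexp((f[:, None] - cost) / eps, axis=0))
    # Primal plan recovered from the converged dual potentials.
    return np.exp((f[:, None] + g[None, :] - cost) / eps)
```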
Related papers
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have a known weakness: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
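As background for the step-wise diffuser, a textbook DDPM forward-noising step is sketched below; it is generic diffusion, and deliberately omits the paper's sequence encoder and cross-attentive decoder.

```python
import torch

def q_sample(x0, t, alpha_bar):
    """Forward noising step q(x_t | x_0) of a generic DDPM (a sketch,
    not this paper's model). `alpha_bar` holds the cumulative products
    of the noise schedule; `t` is a batch of integer timesteps.
    """
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast per sample
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    return xt, noise  # a denoiser is trained to predict `noise` from (xt, t)
```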
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Targeted Analysis of High-Risk States Using an Oriented Variational Autoencoder [3.494548275937873]
Variational autoencoder (VAE) neural networks can be trained to generate power system states.
The coordinates of the latent space codes of VAEs have been shown to correlate with conceptual features of the data.
In this paper, an oriented variational autoencoder (OVAE) is proposed to constrain the link between the latent space code and the generated data.
arXiv Detail & Related papers (2023-03-20T19:34:21Z)
- Dizygotic Conditional Variational AutoEncoder for Multi-Modal and Partial Modality Absent Few-Shot Learning [19.854565192491123]
We present a novel multi-modal data augmentation approach named Dizygotic Conditional Variational AutoEncoder (DCVAE)
DCVAE conducts feature synthesis via pairing two Conditional Variational AutoEncoders (CVAEs) with the same seed but different modality conditions in a dizygotic symbiosis manner.
The generated features of two CVAEs are adaptively combined to yield the final feature, which can be converted back into its paired conditions.
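One way to picture the pairing is a single latent draw (the "same seed") fed to two condition-specific decoders whose outputs are blended by a learned gate; the layer shapes and gating form below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DizygoticCombiner(nn.Module):
    """Hedged sketch of the DCVAE pairing idea: one shared latent sample,
    two modality-conditioned decoders, and an adaptive combination."""

    def __init__(self, z_dim, c_dim, f_dim):
        super().__init__()
        self.dec_a = nn.Linear(z_dim + c_dim, f_dim)  # CVAE-A decoder head
        self.dec_b = nn.Linear(z_dim + c_dim, f_dim)  # CVAE-B decoder head
        self.gate = nn.Sequential(nn.Linear(2 * f_dim, f_dim), nn.Sigmoid())

    def forward(self, z, cond_a, cond_b):
        fa = self.dec_a(torch.cat([z, cond_a], dim=-1))  # modality-A feature
        fb = self.dec_b(torch.cat([z, cond_b], dim=-1))  # modality-B feature
        g = self.gate(torch.cat([fa, fb], dim=-1))       # adaptive blend weights
        return g * fa + (1 - g) * fb
```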
arXiv Detail & Related papers (2021-06-28T08:29:55Z)
- Discrete Auto-regressive Variational Attention Models for Text Modeling [53.38382932162732]
Variational autoencoders (VAEs) have been widely applied for text modeling.
However, they are troubled by two challenges: information underrepresentation and posterior collapse.
We propose Discrete Auto-regressive Variational Attention Model (DAVAM) to address the challenges.
arXiv Detail & Related papers (2021-06-16T06:36:26Z)
- PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Driven Adaptive Prior [103.00403682863427]
We propose PriorGrad to improve the efficiency of the conditional diffusion model.
We show that PriorGrad achieves faster convergence, leading to data and parameter efficiency and improved quality.
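A short sketch of the data-driven adaptive prior idea: replace the standard N(0, I) with a diagonal Gaussian whose per-dimension variance is derived from the conditioning signal. The energy heuristic and variance floor here are assumptions for illustration, not the paper's exact recipe.

```python
import torch

def adaptive_prior(conditioner, floor=0.1):
    """Data-driven diagonal Gaussian prior in the spirit of PriorGrad
    (a sketch). The prior variance per dimension follows the normalized
    energy of the conditioning signal instead of being fixed at 1.
    """
    energy = conditioner.pow(2).mean(dim=-1)   # per-dimension energy
    sigma2 = energy / energy.max()             # normalize into (0, 1]
    sigma2 = sigma2.clamp(min=floor)           # keep all dimensions non-degenerate
    return torch.distributions.Normal(torch.zeros_like(sigma2), sigma2.sqrt())
```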
arXiv Detail & Related papers (2021-06-11T14:04:03Z)
- Model Selection for Bayesian Autoencoders [25.619565817793422]
We propose to optimize the distributional sliced-Wasserstein distance between the output of the autoencoder and the empirical data distribution.
We turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space.
We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results.
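For reference, a plain Monte-Carlo sliced 2-Wasserstein estimator is sketched below; the distributional variant used in the paper additionally optimizes over the distribution of projection directions, which this sketch does not do.

```python
import torch

def sliced_wasserstein(x, y, n_projections=128):
    """Sliced 2-Wasserstein distance between two samples of equal size
    (generic sketch): project both point clouds onto random unit
    directions, then compare the sorted 1-D projections.
    """
    d = x.shape[1]
    theta = torch.randn(d, n_projections)
    theta = theta / theta.norm(dim=0, keepdim=True)  # random unit directions
    px, _ = (x @ theta).sort(dim=0)                  # sorted 1-D projections
    py, _ = (y @ theta).sort(dim=0)
    return ((px - py) ** 2).mean().sqrt()
```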
arXiv Detail & Related papers (2021-06-11T08:55:00Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
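The divergence in question is D_CS(p, q) = -log(∫ p q / sqrt(∫ p² ∫ q²)), which is analytic for Gaussians because products of Gaussian densities integrate in closed form; the 1-D single-Gaussian sketch below shows the mechanics (full GMMs extend term by term).

```python
import numpy as np

def gauss_overlap(m1, v1, m2, v2):
    """Closed form for the overlap integral of two 1-D Gaussian densities:
    \\int N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2)."""
    v = v1 + v2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2 * np.pi * v)

def cs_divergence(m1, v1, m2, v2):
    """Cauchy-Schwarz divergence between two 1-D Gaussians (a sketch;
    the paper works with full GMMs). Zero iff the two densities match."""
    pq = gauss_overlap(m1, v1, m2, v2)
    pp = gauss_overlap(m1, v1, m1, v1)
    qq = gauss_overlap(m2, v2, m2, v2)
    return -np.log(pq / np.sqrt(pp * qq))
```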
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Bigeminal Priors Variational auto-encoder [5.430048915427229]
Variational auto-encoders (VAEs) are an influential and widely used class of likelihood-based generative models in unsupervised learning.
We introduce a new model, the Bigeminal Priors Variational auto-encoder (BPVAE), to address the phenomenon of VAEs assigning higher likelihood to simpler out-of-distribution data than to their training data.
BPVAE learns the features of two datasets, assigning a higher likelihood to the training dataset than to the simpler dataset.
arXiv Detail & Related papers (2020-10-05T07:10:52Z)
- Decomposed Adversarial Learned Inference [118.27187231452852]
We propose a novel approach, Decomposed Adversarial Learned Inference (DALI).
DALI explicitly matches prior and conditional distributions in both data and code spaces.
We validate the effectiveness of DALI on the MNIST, CIFAR-10, and CelebA datasets.
arXiv Detail & Related papers (2020-04-21T20:00:35Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
- Variational auto-encoders with Student's t-prior [0.0]
We propose a new structure for the prior of variational auto-encoders (VAEs).
All distribution parameters are trained, thereby allowing for a more robust approximation of the underlying data distribution.
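A minimal sketch of how a trainable Student's t prior could enter the VAE objective, assuming a Gaussian posterior and a Monte-Carlo KL estimate (this Gaussian/Student-t pair has no closed-form KL); the parameterization is an assumption, not the paper's exact construction.

```python
import torch
from torch.distributions import Normal, StudentT

def mc_kl_to_student_prior(mu, log_sigma, df, n_samples=8):
    """Monte-Carlo estimate of KL(q(z|x) || p(z)) with a StudentT prior.

    `df` (degrees of freedom) can itself be a trainable tensor, echoing
    the idea that all distribution parameters are trained.
    """
    q = Normal(mu, log_sigma.exp())
    p = StudentT(df)                          # loc=0, scale=1 by default
    z = q.rsample((n_samples,))               # reparameterized latent draws
    return (q.log_prob(z) - p.log_prob(z)).mean(0).sum(-1)
```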
arXiv Detail & Related papers (2020-04-06T11:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.