Neighbor Embedding Variational Autoencoder
- URL: http://arxiv.org/abs/2103.11349v1
- Date: Sun, 21 Mar 2021 09:49:12 GMT
- Title: Neighbor Embedding Variational Autoencoder
- Authors: Renfei Tu, Yang Liu, Yongzeng Xue, Cheng Wang and Maozu Guo
- Abstract summary: We propose a novel model, neighbor embedding VAE (NE-VAE), which explicitly constrains the encoder to map inputs that are close in the input space to nearby points in the latent space.
In our experiments, NE-VAE produces qualitatively different latent representations, with the majority of the latent dimensions remaining active.
- Score: 14.08587678497785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As one of the most popular generative frameworks, variational
autoencoders (VAEs) are known to suffer from a phenomenon termed posterior
collapse, i.e. the latent variational distributions collapse to the prior,
especially when a strong decoder network is used. In this work, we analyze the
latent representation of collapsed VAEs and propose a novel model, neighbor
embedding VAE (NE-VAE), which explicitly constrains the encoder to map inputs
that are close in the input space to nearby points in the latent space. We
observed that VAE variants reporting similar ELBO, KL divergence, or even
mutual information scores may still organize their latent space quite
differently. In our experiments, NE-VAE produces qualitatively different
latent representations, with the majority of the latent dimensions remaining
active, which may benefit downstream latent space optimization tasks. NE-VAE
prevents posterior collapse to a much greater extent than its predecessors and
can be easily plugged into any autoencoder framework, without introducing
additional model components or complex training routines.
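The abstract does not include code, but as a rough illustration of the idea, the sketch below adds a neighbor-embedding penalty to a standard VAE loss and computes the common "active units" diagnostic for posterior collapse. This is an assumption-laden sketch, not the authors' released method: the penalty form, the names vae_loss, neighbor_embedding_penalty, lambda_ne, and k_neighbors, and the active-unit threshold are all illustrative choices.
```python
# Minimal sketch (assumptions noted inline), not the authors' implementation.
import torch
import torch.nn.functional as F


def vae_loss(x, x_recon, mu, logvar):
    """Standard ELBO terms: reconstruction + KL(q(z|x) || N(0, I)), averaged per sample."""
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + kl


def neighbor_embedding_penalty(x, mu, k_neighbors=5):
    """Penalize latent distance between each input and its k nearest neighbors
    in input space (one plausible reading of the NE-VAE constraint)."""
    x_flat = x.flatten(1)
    input_dist = torch.cdist(x_flat, x_flat)   # pairwise input-space distances
    latent_dist = torch.cdist(mu, mu)          # pairwise latent-space distances
    # indices of the k nearest input-space neighbors (column 0 is the point itself)
    knn_idx = input_dist.topk(k_neighbors + 1, largest=False).indices[:, 1:]
    return latent_dist.gather(1, knn_idx).mean()


def active_units(mu_all, threshold=1e-2):
    """Count latent dimensions whose posterior mean varies across the data
    (Var_x[mu_d(x)] > threshold), a standard proxy for non-collapsed dimensions."""
    return int((mu_all.var(dim=0) > threshold).sum())


# Toy usage with random tensors standing in for encoder/decoder outputs.
if __name__ == "__main__":
    batch, x_dim, z_dim = 32, 784, 16
    x = torch.randn(batch, x_dim)
    x_recon = torch.randn(batch, x_dim)
    mu, logvar = torch.randn(batch, z_dim), torch.zeros(batch, z_dim)

    lambda_ne = 1.0  # assumed weight on the neighbor-embedding term
    loss = vae_loss(x, x_recon, mu, logvar) + lambda_ne * neighbor_embedding_penalty(x, mu)
    print(loss.item(), active_units(mu))
```
In practice the penalty would be added to the training objective of a real encoder/decoder pair; the weight lambda_ne and the neighborhood size would need tuning per dataset.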
Related papers
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Beyond Vanilla Variational Autoencoders: Detecting Posterior Collapse in Conditional and Hierarchical Variational Autoencoders [25.61363481391964]
The posterior collapse phenomenon in variational autoencoder (VAE) can hinder the quality of the learned latent variables.
In this work, we advance the theoretical understanding of posterior collapse to two important and prevalent yet less studied classes of VAE: conditional VAE and hierarchical VAE.
arXiv Detail & Related papers (2023-06-08T08:22:27Z)
- Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions for an input with only slight visual modifications.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z)
- Variational Autoencoders Without the Variation [0.0]
Variational autoencoders (VAEs) are a popular approach to generative modelling.
Recent work on regularised and entropic autoencoders has begun to explore the potential, for generative modelling, of removing the variational approach and returning to the classic deterministic autoencoder (DAE).
In this paper we empirically explore the capability of DAEs for image generation without additional novel methods and the effect of the implicit regularisation and smoothness of large networks.
arXiv Detail & Related papers (2022-03-01T17:39:02Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Discrete Auto-regressive Variational Attention Models for Text Modeling [53.38382932162732]
Variational autoencoders (VAEs) have been widely applied for text modeling.
They are troubled by two challenges: information underrepresentation and posterior collapse.
We propose Discrete Auto-regressive Variational Attention Model (DAVAM) to address the challenges.
arXiv Detail & Related papers (2021-06-16T06:36:26Z)
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach, which can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z)
- A Batch Normalized Inference Network Keeps the KL Vanishing Away [35.40781000297285]
Variational Autoencoder (VAE) is widely used to approximate a model's posterior on latent variables.
VAE often converges to a degenerate local optimum known as "posterior collapse".
arXiv Detail & Related papers (2020-04-27T05:20:01Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.