Variational Autoencoders Without the Variation
- URL: http://arxiv.org/abs/2203.00645v1
- Date: Tue, 1 Mar 2022 17:39:02 GMT
- Title: Variational Autoencoders Without the Variation
- Authors: Gregory A. Daly, Jonathan E. Fieldsend and Gavin Tabor
- Abstract summary: Variational autoencoders (VAEs) are a popular approach to generative modelling.
Recent work on regularised and entropic autoencoders has begun to explore the potential, for generative modelling, of removing the variational approach and returning to the classic deterministic autoencoder (DAE).
In this paper we empirically explore the capability of DAEs for image generation without additional novel methods, and the effect of the implicit regularisation and smoothness of large networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) are a popular approach to generative
modelling. However, exploiting the capabilities of VAEs in practice can be
difficult. Recent work on regularised and entropic autoencoders has begun to
explore the potential, for generative modelling, of removing the variational
approach and returning to the classic deterministic autoencoder (DAE) with
additional novel regularisation methods. In this paper we empirically explore
the capability of DAEs for image generation without additional novel methods
and the effect of the implicit regularisation and smoothness of large networks.
We find that DAEs can be used successfully for image generation without
additional loss terms, and that many of the useful properties of VAEs can arise
implicitly from sufficiently large convolutional encoders and decoders when
trained on CIFAR-10 and CelebA.
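Concretely, the setup described above, a deterministic autoencoder trained with nothing beyond a reconstruction loss, fits in a very short training loop. The sketch below is a minimal illustration with assumed layer sizes and the standard torchvision CIFAR-10 loader; it is not the authors' architecture.

```python
# Minimal sketch: a deterministic convolutional autoencoder trained on
# CIFAR-10 with only an MSE reconstruction loss (no KL term, no sampling).
# Layer sizes are illustrative, not the architecture from the paper.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

class DAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                 # 3x32x32 -> latent_dim
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 4x4
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, latent_dim),
        )
        self.decoder = nn.Sequential(                 # latent_dim -> 3x32x32
            nn.Linear(latent_dim, 256 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

data = torchvision.datasets.CIFAR10("data", download=True, transform=T.ToTensor())
loader = DataLoader(data, batch_size=128, shuffle=True)
model = DAE()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
for x, _ in loader:                                   # one epoch shown
    loss = nn.functional.mse_loss(model(x), x)        # reconstruction only
    opt.zero_grad(); loss.backward(); opt.step()
```

To sample images from such a model one still needs latent codes to decode; a common choice in the regularised-autoencoder literature is to fit a simple density (for example a Gaussian or a small mixture) to the encoded training set and decode draws from it.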
Related papers
- A Bayesian Non-parametric Approach to Generative Models: Integrating Variational Autoencoder and Generative Adversarial Networks using Wasserstein and Maximum Mean Discrepancy [2.966338139852619]
Generative adversarial networks (GANs) and variational autoencoders (VAEs) are two of the most prominent and widely studied generative models.
We employ a Bayesian non-parametric (BNP) approach to merge GANs and VAEs.
By fusing the discriminative power of GANs with the reconstruction capabilities of VAEs, our novel model achieves superior performance in various generative tasks.
arXiv Detail & Related papers (2023-08-27T08:58:31Z)
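Of the two discrepancies named in the title above, the Maximum Mean Discrepancy is compact enough to show directly: it is a kernel two-sample statistic between a batch of real samples and a batch of generated ones. The sketch below uses an RBF kernel and the biased estimator; the bandwidth and dimensions are illustrative assumptions, not values from the paper.

```python
# Sketch: biased-estimator squared Maximum Mean Discrepancy (MMD) with an
# RBF kernel, the two-sample statistic named in the title above.
import torch

def rbf_kernel(a, b, bandwidth=1.0):
    d2 = torch.cdist(a, b) ** 2                  # pairwise squared distances
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]
    return (rbf_kernel(x, x, bandwidth).mean()
            - 2 * rbf_kernel(x, y, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean())

real = torch.randn(256, 64)                      # e.g. a batch of data codes
fake = torch.randn(256, 64) + 0.5                # e.g. a batch of model samples
print(mmd2(real, fake).item())                   # larger = more discrepant
```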
- Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, a VAE can easily be fooled into producing unexpected latent representations and reconstructions for a visually slightly modified input.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z)
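A common template for constructing such attacks, shown here as a generic sketch rather than the exact objectives examined in the paper, searches for a bounded input perturbation that maximally displaces the encoder's latent code:

```python
# Sketch: a latent-space adversarial attack on a VAE encoder. `encoder` is
# any module returning a latent code; the PGD-style settings (eps, steps,
# step size) are illustrative assumptions.
import torch

def latent_attack(encoder, x, eps=0.05, steps=40, step_size=0.01):
    z0 = encoder(x).detach()                       # latent of the clean input
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = -(encoder(x + delta) - z0).norm()   # maximize the latent shift
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign() # signed gradient step
            delta.clamp_(-eps, eps)                # keep the perturbation small
        delta.grad.zero_()
    return (x + delta).detach()
```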
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior over latent variables via amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
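For reference, the amortized variational inference mentioned above reduces, in a standard VAE, to a reconstruction term plus a closed-form KL penalty trained through the reparameterization trick. This is the generic loss only; DU-VAE's additional diversity and uncertainty regularisers are not reproduced here.

```python
# Generic VAE loss: amortized posterior q(z|x), reparameterization trick,
# and the closed-form KL to a standard normal prior.
import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder):
    mu, logvar = encoder(x)                   # amortized posterior parameters
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)      # reparameterization trick
    recon = F.mse_loss(decoder(z), x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl                         # negative ELBO (up to constants)
```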
- Discrete Auto-regressive Variational Attention Models for Text Modeling [53.38382932162732]
Variational autoencoders (VAEs) have been widely applied for text modeling.
However, they are troubled by two challenges: information underrepresentation and posterior collapse.
We propose the Discrete Auto-regressive Variational Attention Model (DAVAM) to address these challenges.
arXiv Detail & Related papers (2021-06-16T06:36:26Z)
- Neighbor Embedding Variational Autoencoder [14.08587678497785]
We propose a novel model, the neighbor embedding VAE (NE-VAE), which explicitly constrains the encoder so that inputs that are close in input space are encoded close together in latent space.
In our experiments, NE-VAE produces qualitatively different latent representations, with the majority of the latent dimensions remaining active.
arXiv Detail & Related papers (2021-03-21T09:49:12Z)
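The neighbour constraint described in the NE-VAE entry above can be illustrated with a simple auxiliary penalty. Pairing each input with a Gaussian-perturbed copy is an assumption made for this sketch, not NE-VAE's exact construction:

```python
# Sketch: penalise latent distance between each input and a nearby point
# in input space, so that input-space neighbours stay latent-space
# neighbours. The perturbation scale `sigma` is illustrative.
import torch

def neighbor_penalty(encoder, x, sigma=0.05):
    x_nb = x + sigma * torch.randn_like(x)       # a nearby input
    z, z_nb = encoder(x), encoder(x_nb)
    return (z - z_nb).pow(2).sum(dim=1).mean()   # keep their codes close
```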
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a hierarchical VAE designed for this approach that can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
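One plausible reading of that self-consistency notion, sketched here rather than taken verbatim from the paper, is an auxiliary term that re-encodes the model's own reconstruction and ties the round trip back to the original code:

```python
# Sketch: a self-consistency term. Decode a latent code, re-encode the
# result, and require the round trip to land on the same code.
import torch

def self_consistency(encoder, decoder, x):
    z = encoder(x)
    z_rt = encoder(decoder(z))                   # re-encode the reconstruction
    return (z - z_rt).pow(2).sum(dim=1).mean()
```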
- NVAE: A Deep Hierarchical Variational Autoencoder [102.29977384039805]
We propose a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization.
We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models.
To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels.
arXiv Detail & Related papers (2020-07-08T04:56:56Z)
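The depth-wise separable convolutions mentioned in the NVAE summary factor a standard convolution into a per-channel spatial filter (PyTorch's `groups` argument) followed by a 1x1 pointwise mix. The block below is that generic building block with batch normalization, not NVAE's full residual cell:

```python
# A depth-wise separable convolution block with batch normalization:
# per-channel spatial filtering (groups=in_ch) then 1x1 channel mixing.
import torch.nn as nn

def sep_conv(in_ch, out_ch, kernel=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, 1),             # pointwise mixing
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )
```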
- Robust Training of Vector Quantized Bottleneck Models [21.540133031071438]
We demonstrate methods for reliable and efficient training of discrete representations using Vector-Quantized Variational Auto-Encoder (VQ-VAE) models.
For unsupervised representation learning, they have become viable alternatives to continuous latent-variable models such as the Variational Auto-Encoder (VAE).
arXiv Detail & Related papers (2020-05-18T08:23:41Z)
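The vector-quantized bottleneck at the core of a VQ-VAE snaps each latent vector to its nearest codebook entry and passes gradients straight through the non-differentiable lookup. Codebook size and dimensions below are illustrative:

```python
# Minimal vector-quantization bottleneck with a straight-through gradient.
import torch

def quantize(z, codebook):
    # z: (batch, dim); codebook: (K, dim)
    idx = torch.cdist(z, codebook).argmin(dim=1)   # nearest code per vector
    z_q = codebook[idx]
    return z + (z_q - z).detach(), idx             # straight-through estimator

codebook = torch.randn(512, 64)                    # K=512 codes of dim 64
z = torch.randn(8, 64)
z_q, idx = quantize(z, codebook)
```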
- Regularized Autoencoders via Relaxed Injective Probability Flow [35.39933775720789]
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity.
arXiv Detail & Related papers (2020-02-20T18:22:46Z)