Variance Constrained Autoencoding
- URL: http://arxiv.org/abs/2005.03807v1
- Date: Fri, 8 May 2020 00:50:50 GMT
- Title: Variance Constrained Autoencoding
- Authors: D. T. Braithwaite, M. O'Connor, W. B. Kleijn
- Abstract summary: We show that for stochastic encoders, simultaneously attempting to enforce a distribution constraint and to minimise an output distortion leads to a reduction in generative and reconstruction quality.
We propose the variance-constrained autoencoder (VCAE), which only enforces a variance constraint on the latent distribution.
Our experiments show that VCAE improves upon Wasserstein Autoencoder and the Variational Autoencoder in both reconstruction and generative quality on MNIST and CelebA.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent state-of-the-art autoencoder based generative models have an
encoder-decoder structure and learn a latent representation with a pre-defined
distribution that can be sampled from. Implementing the encoder networks of
these models in a stochastic manner provides a natural and common approach to
avoid overfitting and enforce a smooth decoder function. However, we show that
for stochastic encoders, simultaneously attempting to enforce a distribution
constraint and to minimise an output distortion leads to a reduction in
generative and reconstruction quality. In addition, attempting to enforce a
latent distribution constraint is not reasonable when performing
disentanglement. Hence, we propose the variance-constrained autoencoder (VCAE),
which only enforces a variance constraint on the latent distribution. Our
experiments show that VCAE improves upon Wasserstein Autoencoder and the
Variational Autoencoder in both reconstruction and generative quality on MNIST
and CelebA. Moreover, we show that VCAE equipped with a total correlation
penalty term performs equivalently to FactorVAE at learning disentangled
representations on 3D-Shapes while being a more principled approach.
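As a rough illustration of the idea in the abstract, the sketch below trains a deterministic autoencoder whose only latent regulariser pushes the average latent variance toward a target value, rather than matching a full prior distribution as in the VAE or WAE. The architecture, penalty weight, and target variance are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class VarianceConstrainedAE(nn.Module):
    """Minimal sketch of a variance-constrained autoencoder (assumed form).

    Only the latent variance is regularised toward a target value; no full
    distribution match (as in VAE/WAE) is enforced on the latent codes.
    """

    def __init__(self, x_dim=784, z_dim=32, target_var=1.0, penalty_weight=10.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))
        self.target_var = target_var
        self.penalty_weight = penalty_weight

    def loss(self, x):
        z = self.encoder(x)                      # deterministic encoding
        x_hat = self.decoder(z)
        distortion = ((x_hat - x) ** 2).sum(dim=1).mean()
        # Variance constraint: penalise deviation of the mean per-dimension
        # latent variance from the target, instead of matching a full prior.
        latent_var = z.var(dim=0, unbiased=False).mean()
        constraint = (latent_var - self.target_var) ** 2
        return distortion + self.penalty_weight * constraint
```

Training would simply minimise `loss` over mini-batches with any standard optimiser. Since no prior is imposed on the latents, drawing new samples would additionally require fitting a simple density to the training codes; that step, like the total-correlation penalty mentioned for disentanglement, is omitted from this sketch.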
Related papers
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers [7.76925617801895]
We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for regularization terms.
This is achieved while leaving complete freedom in the choice of encoder, decoder and prior architectures.
We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality and denoising performance.
arXiv Detail & Related papers (2022-05-19T13:16:09Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Sparse aNETT for Solving Inverse Problems with Deep Learning [2.5234156040689237]
We propose a sparse reconstruction framework (aNETT) for solving inverse problems.
We train an autoencoder network $D \circ E$ with $E$ acting as a nonlinear sparsifying transform (a rough sketch follows this list).
Numerical results are presented for sparse view CT.
arXiv Detail & Related papers (2020-04-20T18:43:13Z)
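For the Sparse aNETT entry above, the following is a minimal sketch of a sparsity-regularised autoencoder objective in which an l1 penalty on the latent codes encourages $E$ to act as a sparsifying transform; the network shapes, dimensions, and penalty weight are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

# Illustrative encoder E and decoder D; layer sizes are placeholders,
# not the architecture used in the paper.
E = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, 256))
D = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 4096))

def sparse_autoencoder_loss(x, l1_weight=1e-3):
    """Reconstruction loss plus an l1 penalty that promotes sparse codes E(x)."""
    z = E(x)
    x_hat = D(z)
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()   # data-fit term
    sparsity = z.abs().sum(dim=1).mean()           # encourages E to sparsify
    return recon + l1_weight * sparsity

# Example usage on dummy data, e.g. a batch of flattened image patches:
x = torch.randn(8, 4096)
print(sparse_autoencoder_loss(x).item())
```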