Closing the gap: Exact maximum likelihood training of generative
autoencoders using invertible layers
- URL: http://arxiv.org/abs/2205.09546v1
- Date: Thu, 19 May 2022 13:16:09 GMT
- Title: Closing the gap: Exact maximum likelihood training of generative
autoencoders using invertible layers
- Authors: Gianluigi Silvestri, Daan Roos, Luca Ambrogioni
- Abstract summary: We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for regularization terms.
This is achieved while leaving complete freedom in the choice of encoder, decoder and prior architectures.
We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality and denoising performance.
- Score: 7.76925617801895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we provide an exact likelihood alternative to the variational
training of generative autoencoders. We show that VAE-style autoencoders can be
constructed using invertible layers, which offer a tractable exact likelihood
without the need for any regularization terms. This is achieved while leaving
complete freedom in the choice of encoder, decoder and prior architectures,
making our approach a drop-in replacement for the training of existing VAEs and
VAE-style models. We refer to the resulting models as Autoencoders within Flows
(AEF), since the encoder, decoder and prior are defined as individual layers of
an overall invertible architecture. We show that the approach results in
strikingly higher performance than architecturally equivalent VAEs in terms of
log-likelihood, sample quality and denoising performance. In a broad sense, the
main ambition of this work is to close the gap between the normalizing flow and
autoencoder literature under the common framework of invertibility and exact
maximum likelihood.
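The shared mechanism, exact maximum likelihood through the change-of-variables formula, can be illustrated with a generic coupling flow. The sketch below is not the authors' AEF construction (which defines the encoder, decoder and prior as layers of one invertible model); it only shows how free-form inner networks can sit inside invertible layers while the log-likelihood stays exact. All names and sizes are illustrative.

```python
# Generic sketch of exact maximum-likelihood training with invertible layers.
# NOT the authors' AEF architecture; it only illustrates the shared mechanism:
# unconstrained networks inside invertible couplings, with the exact
# log-likelihood given by the change-of-variables formula.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible layer: rescales/shifts one half of the input conditioned on
    the other half. The inner net is unconstrained, mirroring the freedom AEF
    claims for encoder/decoder/prior architectures."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(
            nn.Linear(dim - dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim // 2)))
    def forward(self, x):
        x1, x2 = x[..., : self.dim // 2], x[..., self.dim // 2 :]
        log_s, t = self.net(x2).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                  # bounded scales for stability
        z1 = x1 * log_s.exp() + t                  # invertible given x2
        log_det = log_s.sum(-1)                    # exact log|det Jacobian|
        return torch.cat([z1, x2], -1).flip(-1), log_det  # flip = cheap permutation

class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])
        self.prior = torch.distributions.Normal(0.0, 1.0)
    def log_prob(self, x):
        total = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, log_det = layer(x)
            total += log_det
        # change of variables: log p(x) = log p(z) + sum_i log|det J_i|
        return self.prior.log_prob(x).sum(-1) + total

flow = Flow(dim=8)
x = torch.randn(16, 8)                             # toy data
loss = -flow.log_prob(x).mean()                    # exact NLL, no ELBO gap
loss.backward()
```

Because the inner networks are unconstrained, this pattern keeps full architectural freedom while the likelihood stays exact, which is the property the abstract emphasizes.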
Related papers
- Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding [90.77521413857448]
Deep generative models are anchored in three core capabilities -- generating new instances, reconstructing inputs, and learning compact representations.
We introduce Generalized Encoding-Decoding Diffusion Probabilistic Models (EDDPMs).
EDDPMs generalize the Gaussian noising-denoising in standard diffusion by introducing parameterized encoding-decoding, as sketched below.
Experiments on text, proteins, and images demonstrate the flexibility to handle diverse data and tasks.
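One speculative reading of "parameterized encoding-decoding", sketched under stated assumptions: a learned encoder/decoder pair replaces the fixed pixel-space corruption, and a denoiser is trained jointly in the latent space. Every name and the linear noising schedule below are illustrative assumptions, not the EDDPM objective.

```python
# Speculative sketch: diffusion in a learned latent space with the encoder,
# decoder and denoiser trained jointly. All names/sizes are assumptions.
import torch
import torch.nn as nn

enc = nn.Linear(784, 32)        # parameterized encoding (illustrative sizes)
dec = nn.Linear(32, 784)        # parameterized decoding
denoiser = nn.Sequential(nn.Linear(33, 64), nn.ReLU(), nn.Linear(64, 32))

x = torch.rand(8, 784)          # toy batch
z = enc(x)
t = torch.rand(8, 1)            # diffusion time in [0, 1]
noise = torch.randn_like(z)
z_t = (1 - t) * z + t * noise   # simple linear noising schedule in latent space
pred = denoiser(torch.cat([z_t, t], dim=-1))
loss = ((pred - noise) ** 2).mean() + ((dec(z) - x) ** 2).mean()  # denoising + reconstruction
loss.backward()
```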
arXiv Detail & Related papers (2024-02-29T10:08:57Z) - Interpretable Spectral Variational AutoEncoder (ISVAE) for time series
clustering [48.0650332513417]
We introduce a novel model that incorporates an interpretable bottleneck, termed the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE).
This arrangement compels the VAE to attend to the most informative segments of the input signal.
By deliberately constraining the VAE with this FB, we promote the development of an encoding that is discernible, separable, and of reduced dimensionality.
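A minimal sketch of the filter-bank idea, assuming the FB is a bank of 1-D convolutional filters placed in front of the VAE encoder; the ISVAE paper's actual FB design, sizes and names may differ.

```python
# Illustrative sketch only: a bank of 1-D convolutional filters before a VAE
# encoder, so the encoder only sees filtered views of the signal.
import torch
import torch.nn as nn

class FilterBankVAEEncoder(nn.Module):
    def __init__(self, n_filters=4, kernel=16, latent=8, length=128):
        super().__init__()
        self.fb = nn.Conv1d(1, n_filters, kernel, stride=kernel // 2)  # interpretable front-end
        out_len = (length - kernel) // (kernel // 2) + 1
        self.to_mu = nn.Linear(n_filters * out_len, latent)
        self.to_logvar = nn.Linear(n_filters * out_len, latent)
    def forward(self, x):                       # x: (batch, 1, length)
        h = self.fb(x).flatten(1)               # filtered, downsampled view
        return self.to_mu(h), self.to_logvar(h)

enc = FilterBankVAEEncoder()
mu, logvar = enc(torch.randn(4, 1, 128))
z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)   # reparameterization trick
```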
arXiv Detail & Related papers (2023-10-18T13:06:05Z) - Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z) - String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple yet effective idea to improve the performance of VAEs for this task.
In our experiments, the proposed VAE model performs particularly well at generating samples from an out-of-domain distribution.
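A hedged sketch of the general multi-decoder pattern: one encoder, several independent decoders reconstructing from the same latent, with their losses averaged. The paper's actual aggregation scheme may differ; names and sizes are illustrative.

```python
# Hedged sketch: one encoder, several decoders sharing the latent code,
# reconstruction losses averaged across decoders.
import torch
import torch.nn as nn

enc = nn.Linear(100, 2 * 16)                              # outputs mu and logvar
decoders = nn.ModuleList([nn.Linear(16, 100) for _ in range(3)])

x = torch.rand(8, 100)
mu, logvar = enc(x).chunk(2, dim=-1)
z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)      # reparameterization
recon = torch.stack([((d(z) - x) ** 2).mean() for d in decoders]).mean()
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()  # KL to standard normal (mean-reduced)
loss = recon + kl
loss.backward()
```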
arXiv Detail & Related papers (2022-08-23T03:56:30Z) - CANF-VC: Conditional Augmented Normalizing Flows for Video Compression [81.41594331948843]
CANF-VC is an end-to-end learning-based video compression system.
It is based on conditional augmented normalizing flows (ANF).
arXiv Detail & Related papers (2022-07-12T04:53:24Z) - Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a hierarchical VAE designed for this approach, which can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of the observation that a VAE need not consistently encode the typical samples generated by its own decoder, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
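A minimal sketch of a self-consistency term in this spirit: decode a latent, re-encode the model's own output, and penalize disagreement between the two codes. Deterministic encodings are used for brevity; this is an illustration, not the paper's exact objective.

```python
# Hedged sketch: penalize the gap between a latent code and the re-encoding
# of its own decoded output (a cycle-consistency-style term).
import torch
import torch.nn as nn

enc, dec = nn.Linear(20, 4), nn.Linear(4, 20)

x = torch.rand(8, 20)
z = enc(x)                      # deterministic encodings for brevity
x_hat = dec(z)
z_cycle = enc(x_hat)            # encode the model's own output
consistency = ((z_cycle - z) ** 2).mean()
loss = ((x_hat - x) ** 2).mean() + consistency
loss.backward()
```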
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Variance Constrained Autoencoding [0.0]
We show that simultaneously enforcing a distribution constraint on the encoder and minimising output distortion reduces generative and reconstruction quality.
We propose the variance-constrained autoencoder (VCAE), which only enforces a variance constraint on the latent distribution.
Our experiments show that VCAE improves upon the Wasserstein Autoencoder and the Variational Autoencoder in both reconstruction and generative quality on MNIST and CelebA.
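A minimal sketch, assuming the variance constraint is a penalty pulling the per-dimension batch variance of the latent codes toward a target of 1; the target value and penalty form here are illustrative assumptions.

```python
# Hedged sketch: no full distribution matching, only a penalty on the
# batch variance of the latent codes.
import torch
import torch.nn as nn

enc, dec = nn.Linear(50, 8), nn.Linear(8, 50)

x = torch.rand(32, 50)
z = enc(x)
recon = ((dec(z) - x) ** 2).mean()
var_penalty = ((z.var(dim=0) - 1.0) ** 2).sum()   # constrain per-dimension variance to 1
loss = recon + 0.1 * var_penalty
loss.backward()
```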
arXiv Detail & Related papers (2020-05-08T00:50:50Z) - On the Encoder-Decoder Incompatibility in Variational Text Modeling and
Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe an encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
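A hedged sketch of the coupling pattern: the same encoder/decoder run both as a deterministic autoencoder (decoding the posterior mean directly) and as a VAE (sampling via reparameterization), trained on both paths. The actual Coupled-VAE couples two models with shared structure, and its coupling terms may differ from this illustration.

```python
# Hedged sketch: deterministic and stochastic reconstruction paths trained
# together through shared encoder/decoder weights.
import torch
import torch.nn as nn

enc = nn.Linear(30, 2 * 6)                           # outputs mu and logvar
dec = nn.Linear(6, 30)

x = torch.rand(16, 30)
mu, logvar = enc(x).chunk(2, dim=-1)
det_recon = ((dec(mu) - x) ** 2).mean()              # deterministic path
z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
vae_recon = ((dec(z) - x) ** 2).mean()               # stochastic (VAE) path
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
loss = vae_recon + kl + det_recon                    # coupled training signal
loss.backward()
```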
arXiv Detail & Related papers (2020-04-20T10:34:10Z)