Batch norm with entropic regularization turns deterministic autoencoders into generative models
- URL: http://arxiv.org/abs/2002.10631v2
- Date: Wed, 22 Sep 2021 00:51:29 GMT
- Title: Batch norm with entropic regularization turns deterministic autoencoders into generative models
- Authors: Amur Ghose, Abdullah Rashwan, Pascal Poupart
- Abstract summary: The variational autoencoder is a well-defined deep generative model.
We show in this work that using batch normalization as a source of non-determinism suffices to turn deterministic autoencoders into generative models.
- Score: 14.65554816300632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The variational autoencoder is a well-defined deep generative model built on an encoder-decoder framework in which the encoding network outputs a non-deterministic code for reconstructing each input: rather than producing a single deterministic code per input, the encoder samples a code from a distribution. The advantage of this non-determinism is that the network can be used as a generative model, sampling from the data distribution beyond the provided training examples. We show in this work that using batch normalization as the source of non-determinism suffices to turn deterministic autoencoders into generative models on par with variational ones, provided a suitable entropic regularization is added to the training objective.
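As a minimal illustration of this recipe, the PyTorch sketch below rests on our own assumptions (an MLP encoder/decoder, non-affine batch norm on the code, and a nearest-neighbor entropy proxy in place of the paper's exact estimator): it normalizes the latent code with batch statistics and subtracts a weighted entropy term from the reconstruction loss, so that after training, decoding codes drawn from N(0, I) yields new samples.

```python
import torch
import torch.nn as nn

class BNAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Batch norm normalizes each latent unit with batch statistics, so the
        # code for a given input depends on the rest of the batch: this is the
        # source of non-determinism.
        self.bn = nn.BatchNorm1d(latent_dim, affine=False)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.bn(self.encoder(x))
        return self.decoder(z), z

def entropy_proxy(z):
    # Nearest-neighbor (Kozachenko-Leonenko style) proxy: up to constants,
    # differential entropy grows with the log nearest-neighbor distance.
    d = torch.cdist(z, z) + 1e9 * torch.eye(z.size(0), device=z.device)
    return d.min(dim=1).values.clamp_min(1e-12).log().mean()

def loss_fn(x, x_hat, z, beta=0.1):
    # Reconstruction minus a weighted entropy term: maximizing code entropy
    # under batch-norm's fixed moments pushes the codes toward N(0, I).
    return nn.functional.mse_loss(x_hat, x) - beta * entropy_proxy(z)

# After training, generate by decoding codes sampled from N(0, I):
#   samples = model.decoder(torch.randn(64, 32))
```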
Related papers
- Generative Autoencoding of Dropout Patterns [11.965844936801801]
We propose a generative model termed Deciphering Autoencoders.
We assign a unique random dropout pattern to each data point in the training dataset.
We then train an autoencoder to reconstruct the corresponding data point using this pattern as information to be encoded.
arXiv Detail & Related papers (2023-10-03T00:54:13Z)
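A hedged sketch of the mechanism summarized above: each training example gets a fixed random binary pattern, and the network learns to reconstruct that example from its pattern alone, so patterns act as codes. For brevity this feeds the pattern directly to a decoder rather than applying it as dropout inside an autoencoder, which is a simplification of ours, not the authors' construction.

```python
import torch
import torch.nn as nn

n_data, pattern_dim, x_dim = 60000, 64, 784
# One fixed random binary dropout pattern per training example.
patterns = (torch.rand(n_data, pattern_dim) > 0.5).float()

decoder = nn.Sequential(
    nn.Linear(pattern_dim, 256), nn.ReLU(),
    nn.Linear(256, x_dim), nn.Sigmoid(),
)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def training_step(x_batch, idx):
    # Reconstruct example i from its own fixed pattern, so the pattern
    # plays the role of the latent code.
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(patterns[idx]), x_batch)
    loss.backward()
    opt.step()
    return loss.item()

# Sampling: decode a freshly drawn random pattern.
#   new_x = decoder((torch.rand(1, pattern_dim) > 0.5).float())
```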
- Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisited the standard training of the undercomplete autoencoder, modifying the shape of the latent space.
We also explored the behaviour of the latent space when reconstructing a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple yet effective idea to improve the performance of VAEs on this task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations of the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
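One plausible way to render the self-consistency idea in code, under our own assumptions (a deterministic encoder for brevity, whereas a VAE encoder outputs distribution parameters): the code of a reconstruction is penalized for drifting away from the code of the original input.

```python
import torch

def self_consistency_loss(encoder, decoder, x):
    z = encoder(x)
    x_hat = decoder(z)
    z_cycle = encoder(x_hat)             # re-encode the reconstruction
    # Penalize drift between the code of x and the code of its reconstruction.
    return ((z - z_cycle) ** 2).mean()
```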
- End-to-End Synthetic Data Generation for Domain Adaptation of Question Answering Systems [34.927828428293864]
Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions.
In a nutshell, we feed a passage to the encoder and ask the decoder to generate a question and an answer token-by-token.
arXiv Detail & Related papers (2020-10-12T21:10:18Z)
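An illustrative sketch only: a generic pretrained encoder-decoder generating a question-answer pair from a passage, in the spirit described above. The t5-small checkpoint and the task prefix are stand-ins of ours; the paper trains its own model end-to-end, and an untuned base model will not produce useful pairs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in base checkpoint; a model fine-tuned for QA-pair generation
# would be needed in practice.
tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

passage = "The Amazon is the largest rainforest on Earth."
inputs = tok("generate question and answer: " + passage, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)  # decoded token-by-token
print(tok.decode(output[0], skip_special_tokens=True))
```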
- Variance Constrained Autoencoding [0.0]
We show that when encoders simultaneously attempt to enforce a distribution constraint and minimize an output distortion, both generative and reconstruction quality suffer.
We propose the variance-constrained autoencoder (VCAE), which enforces only a variance constraint on the latent distribution.
Our experiments show that VCAE improves on the Wasserstein autoencoder and the variational autoencoder in both reconstruction and generative quality on MNIST and CelebA.
arXiv Detail & Related papers (2020-05-08T00:50:50Z)
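A minimal sketch of a variance-constrained objective, assuming the constraint is a squared penalty pulling each latent dimension's batch variance toward 1; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def vcae_loss(x, x_hat, z, lam=1.0):
    recon = F.mse_loss(x_hat, x)
    # Constrain only the variance of the latent codes, leaving the rest
    # of the latent distribution free.
    var = z.var(dim=0, unbiased=False)
    variance_penalty = ((var - 1.0) ** 2).sum()
    return recon + lam * variance_penalty
```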
- On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take the Transformer as the testbed and introduce a layer of gates between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing L0 penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
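A sketch of one standard way to realize gates with an expected-L0 penalty, the hard-concrete relaxation of Louizos et al. (2018); the paper's exact gate parameterization may differ.

```python
import math
import torch
import torch.nn as nn

class L0Gate(nn.Module):
    def __init__(self, dim, beta=2 / 3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(dim))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, h):
        # Sample a stretched hard-concrete gate per dimension.
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        s = s * (self.zeta - self.gamma) + self.gamma
        z = s.clamp(0.0, 1.0)          # exact zeros/ones are reachable
        return h * z

    def expected_l0(self):
        # Expected number of open gates; add this to the training loss
        # (scaled by a sparsity weight) as the regularizer.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()
```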
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of the Adversarial Autoencoder that uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.