Autoencoding Variational Autoencoder
- URL: http://arxiv.org/abs/2012.03715v1
- Date: Mon, 7 Dec 2020 14:16:14 GMT
- Title: Autoencoding Variational Autoencoder
- Authors: A. Taylan Cemgil, Sumedh Ghaisas, Krishnamurthy Dvijotham, Sven Gowal,
Pushmeet Kohli
- Abstract summary: We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
- Score: 56.05008520271406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Does a Variational AutoEncoder (VAE) consistently encode typical samples
generated from its decoder? This paper shows that the perhaps surprising answer
to this question is 'No'; a (nominally trained) VAE does not necessarily
amortize inference for typical samples that it is capable of generating. We
study the implications of this behaviour on the learned representations and
also the consequences of fixing it by introducing a notion of self-consistency.
Our approach hinges on an alternative construction of the variational
approximation distribution to the true posterior of an extended VAE model with
a Markov chain alternating between the encoder and the decoder. The method can
be used to train a VAE model from scratch or, given an already trained VAE, it
can be run as a post-processing step in an entirely self-supervised way without
access to the original training data. Our experimental analysis reveals that
encoders trained with our self-consistency approach lead to representations
that are robust (insensitive) to perturbations in the input introduced by
adversarial attacks. We provide experimental results on the ColorMnist and
CelebA benchmark datasets that quantify the properties of the learned
representations and compare the approach with a baseline that is specifically
trained for the desired property.
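The decode-then-re-encode chain described in the abstract suggests a compact training loop. The sketch below is a minimal, single-step caricature of that idea under stated assumptions, not the authors' construction: it assumes a pretrained Gaussian-posterior VAE where encoder(x) returns (mu, logvar) and decoder(z) returns a sample, and it nudges the encoder to place posterior mass on the codes that generated the decoder's own samples. All names here are hypothetical.

```python
# Hedged sketch: one self-consistency fine-tuning step for a trained VAE.
# Assumes encoder(x) -> (mu, logvar) and decoder(z) -> x_hat, both pretrained.
# This collapses the paper's encoder/decoder Markov chain to a single
# decode/re-encode step; it is an illustration, not the authors' exact method.
import torch

def self_consistency_step(encoder, decoder, optimizer, batch_size=64, z_dim=32):
    # Draw latent codes from the prior and decode them: these are exactly
    # the "typical samples" the model is capable of generating.
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        x_gen = decoder(z)

    # Re-encode the generated samples; a self-consistent encoder should
    # assign high probability to the codes that produced them.
    mu, logvar = encoder(x_gen)

    # Gaussian negative log-likelihood of z under q(z | x_gen), up to a constant.
    loss = 0.5 * (logvar + (z - mu) ** 2 / logvar.exp()).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that the loop never touches the original training data: optimizing only the encoder, e.g. with torch.optim.Adam(encoder.parameters(), lr=1e-4), matches the abstract's claim that the method can run as an entirely self-supervised post-processing step.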
Related papers
- Disentanglement via Latent Quantization [60.37109712033694] (2023-05-28)
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
- Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes [23.682509357305406] (2023-02-09)
Autoencoders and their variants are among the most widely used models in representation learning and generative modeling.
We propose a novel Sparse Gaussian Process Bayesian Autoencoder model in which we impose fully sparse Gaussian Process priors on the latent space of a Bayesian Autoencoder.
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776] (2022-08-23)
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAEs for the task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
- Laplacian Autoencoders for Learning Stochastic Representations [0.6999740786886537] (2022-06-30)
We present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence.
We show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space.
- Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459] (2022-03-18)
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions from a visually slightly modified input.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
- Consistency Regularization for Variational Auto-Encoders [14.423556966548544] (2021-05-31)
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
- Variational Autoencoder-Based Vehicle Trajectory Prediction with an Interpretable Latent Space [0.0] (2021-03-25)
This paper introduces the Descriptive Variational Autoencoder (DVAE), an unsupervised and end-to-end trainable neural network for predicting vehicle trajectories.
The proposed model achieves comparable prediction accuracy, with the added advantage of an interpretable latent space.
- Automatic Feature Extraction for Heartbeat Anomaly Detection [7.054093620465401] (2021-02-24)
We focus on automatic feature extraction for raw audio heartbeat sounds, aimed at anomaly detection applications in healthcare.
We learn features with the help of an autoencoder composed of a 1D non-causal convolutional encoder and a WaveNet decoder.
- Unsupervised Controllable Generation with Self-Training [90.04287577605723] (2020-07-17)
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
- Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258] (2020-06-23)
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
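The last entry above notes a Gaussian decoder whose prediction variance is computed analytically rather than hand-tuned. A common realization of that idea, sketched here under the assumption of a single shared variance (not necessarily the paper's exact estimator), is to plug the maximum-likelihood variance, the mean squared reconstruction error, back into the Gaussian negative log-likelihood:

```python
# Sketch: Gaussian decoder NLL with an analytically calibrated variance.
# The batch MLE of a shared variance is the mean squared error; reusing it
# in the NLL balances reconstruction and KL terms without a tuned weight.
import math
import torch

def calibrated_gaussian_nll(x, x_hat, min_var=1e-6):
    mse = ((x - x_hat) ** 2).mean()
    var = mse.clamp(min=min_var).detach()  # detaching here is a design choice
    # Per-dimension NLL: 0.5 * (log(2*pi*var) + (x - x_hat)^2 / var)
    nll = 0.5 * (math.log(2 * math.pi) + var.log() + (x - x_hat) ** 2 / var)
    return nll.sum(dim=tuple(range(1, x.dim()))).mean()
```

In a VAE training step this term would simply replace the usual fixed-variance reconstruction loss, e.g. loss = calibrated_gaussian_nll(x, decoder(z)) + kl.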
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.