Consistency Regularization for Variational Auto-Encoders
- URL: http://arxiv.org/abs/2105.14859v1
- Date: Mon, 31 May 2021 10:26:32 GMT
- Title: Consistency Regularization for Variational Auto-Encoders
- Authors: Samarth Sinha, Adji B. Dieng
- Abstract summary: Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning.
We propose a regularization method to enforce consistency in VAEs.
- Score: 14.423556966548544
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Variational auto-encoders (VAEs) are a powerful approach to unsupervised
learning. They enable scalable approximate posterior inference in
latent-variable models using variational inference (VI). A VAE posits a
variational family parameterized by a deep neural network called an encoder
that takes data as input. This encoder is shared across all the observations,
which amortizes the cost of inference. However, the encoder of a VAE has the
undesirable property that it maps a given observation and a
semantics-preserving transformation of it to different latent representations.
This "inconsistency" of the encoder lowers the quality of the learned
representations, especially for downstream tasks, and also negatively affects
generalization. In this paper, we propose a regularization method to enforce
consistency in VAEs. The idea is to minimize the Kullback-Leibler (KL)
divergence between the variational distribution when conditioning on the
observation and the variational distribution when conditioning on a random
semantics-preserving transformation of this observation. This regularization is
applicable to any VAE. In our experiments we applied it to four different VAE
variants on several benchmark datasets and found it not only improves the quality
of the learned representations but also leads to better generalization. In
particular, when applied to the Nouveau Variational Auto-Encoder (NVAE), our
regularization method yields state-of-the-art performance on MNIST and
CIFAR-10. We also applied our method to 3D data and found it learns
representations of superior quality as measured by accuracy on a downstream
classification task.
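To make the method concrete: schematically, the training objective adds a penalty of the form λ · KL( q(z | x) ‖ q(z | t(x)) ) to the usual ELBO, where t is a random semantics-preserving transformation. Below is a minimal PyTorch sketch of this consistency term, assuming a Gaussian encoder that returns a mean and log-variance; the names gaussian_kl, consistency_loss, encoder, and transform are illustrative and not taken from the authors' code.

```python
import torch

def gaussian_kl(mu_p, logvar_p, mu_q, logvar_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians,
    # summed over the latent dimensions.
    var_p, var_q = logvar_p.exp(), logvar_q.exp()
    return 0.5 * (logvar_q - logvar_p
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0).sum(dim=-1)

def consistency_loss(encoder, x, transform, lam=1.0):
    # Penalize divergence between the variational distribution conditioned
    # on x and the one conditioned on a semantics-preserving transform t(x).
    # The KL direction follows the ordering in the abstract; lam is an
    # assumed regularization weight.
    mu_x, logvar_x = encoder(x)             # parameters of q(z | x)
    mu_t, logvar_t = encoder(transform(x))  # parameters of q(z | t(x))
    return lam * gaussian_kl(mu_x, logvar_x, mu_t, logvar_t).mean()
```

In training, this penalty is added to the loss of whichever VAE variant is being trained, which is why the regularizer applies to any VAE with a parametric encoder.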
Related papers
- Gaussian Mixture Vector Quantization with Aggregated Categorical Posterior [5.862123282894087]
The Vector Quantized Variational Autoencoder (VQ-VAE) is a type of variational autoencoder that uses discrete embeddings as latents.
We show that GM-VQ improves codebook utilization and reduces information loss without relying on handcrafted heuristics.
arXiv Detail & Related papers (2024-10-14T05:58:11Z) - PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty
Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - AAVAE: Augmentation-Augmented Variational Autoencoders [43.73699420145321]
We introduce augmentation-augmented variational autoencoders (AAVAE), a third approach to self-supervised learning based on autoencoding.
We empirically evaluate the proposed AAVAE on image classification, similar to how recent contrastive and non-contrastive learning algorithms have been evaluated.
arXiv Detail & Related papers (2021-07-26T17:04:30Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
arXiv Detail & Related papers (2020-06-23T17:57:47Z) - tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder [0.0]
tvGP-VAE is able to explicitly model correlation via the use of kernel functions.
We show that the choice of which correlation structures to explicitly represent in the latent space has a significant impact on model performance.
arXiv Detail & Related papers (2020-06-08T17:59:13Z) - On the Encoder-Decoder Incompatibility in Variational Text Modeling and
Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.