CR-VAE: Contrastive Regularization on Variational Autoencoders for
Preventing Posterior Collapse
- URL: http://arxiv.org/abs/2309.02968v2
- Date: Sat, 9 Sep 2023 13:09:08 GMT
- Title: CR-VAE: Contrastive Regularization on Variational Autoencoders for
Preventing Posterior Collapse
- Authors: Fotios Lygerakis, Elmar Rueckert
- Abstract summary: The Variational Autoencoder (VAE) is known to suffer from the phenomenon of \textit{posterior collapse}.
We propose a novel solution, the Contrastive Regularization for Variational Autoencoders (CR-VAE).
- Score: 1.0044057719679085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Variational Autoencoder (VAE) is known to suffer from the phenomenon of
\textit{posterior collapse}, where the latent representations generated by the
model become independent of the inputs. This leads to degenerate
representations of the input, which is attributed to the limitations of the
VAE's objective function. In this work, we propose a novel solution to this
issue, the Contrastive Regularization for Variational Autoencoders (CR-VAE).
The core of our approach is to augment the original VAE with a contrastive
objective that maximizes the mutual information between the representations of
similar visual inputs. This strategy ensures that the information flow between
the input and its latent representation is maximized, effectively avoiding
posterior collapse. We evaluate our method on a series of visual datasets and
demonstrate that CR-VAE outperforms state-of-the-art approaches in preventing
posterior collapse.
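The abstract describes adding a contrastive term to the VAE objective that maximizes mutual information between representations of similar inputs. Below is a minimal PyTorch sketch of that idea, pairing the standard ELBO with an InfoNCE loss between the latent codes of two augmented views of each image; the encoder/decoder interfaces and the weight gamma are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # Standard ELBO: reconstruction term plus KL(q(z|x) || N(0, I)).
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + kl

def info_nce(z1, z2, temperature=0.1):
    # InfoNCE between latent codes of two views; positives share a row index.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def cr_vae_loss(encoder, decoder, x_view1, x_view2, gamma=1.0):
    # ELBO on one view plus the contrastive regularizer across both views.
    mu1, logvar1 = encoder(x_view1)
    mu2, _ = encoder(x_view2)
    z1 = mu1 + torch.randn_like(mu1) * (0.5 * logvar1).exp()  # reparameterization trick
    return vae_loss(x_view1, decoder(z1), mu1, logvar1) + gamma * info_nce(mu1, mu2)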
Related papers
- Matching aggregate posteriors in the variational autoencoder [0.5759862457142761]
The variational autoencoder (VAE) is a well-studied, deep, latent-variable model (DLVM).
This paper addresses shortcomings in VAEs by reformulating the objective function associated with VAEs in order to match the aggregate/marginal posterior distribution to the prior.
The proposed method is named the \emph{aggregate} variational autoencoder (AVAE) and is built on the theoretical framework of the VAE.
arXiv Detail & Related papers (2023-11-13T19:22:37Z)
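The AVAE entry above matches the aggregate posterior q(z) = E_x[q(z|x)] to the prior p(z). One standard way to penalize such a mismatch, shown here purely as an illustration (a Wasserstein-autoencoder-style penalty, not necessarily the AVAE objective), is a kernel MMD between a batch of encoder samples and prior samples.

import torch

def rbf_mmd(z_q, z_p, bandwidth=1.0):
    # Biased MMD^2 estimate with an RBF kernel between two sample sets.
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return kernel(z_q, z_q).mean() + kernel(z_p, z_p).mean() - 2 * kernel(z_q, z_p).mean()

# Usage in a training step: z_q are reparameterized encoder samples over the
# batch (so collectively they follow the aggregate posterior), z_p ~ N(0, I):
#   loss = reconstruction_loss + lam * rbf_mmd(z_q, torch.randn_like(z_q))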
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders [15.254297587065595]
The recently proposed identifiable variational autoencoder (iVAE) provides a promising approach for learning latent independent components of the data.
We develop a new approach, the covariate-informed identifiable VAE (CI-iVAE).
In doing so, the objective function enforces the inverse relation, and the learned representation contains more information about the observations.
arXiv Detail & Related papers (2022-02-09T00:18:33Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- InteL-VAEs: Adding Inductive Biases to Variational Auto-Encoders via Intermediary Latents [60.785317191131284]
We introduce a simple and effective method for learning VAEs with controllable biases by using an intermediary set of latent variables.
In particular, it allows us to impose desired properties like sparsity or clustering on learned representations.
We show that this, in turn, allows InteL-VAEs to learn both better generative models and representations.
arXiv Detail & Related papers (2021-06-25T16:34:05Z)
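The InteL-VAEs entry above inserts a deterministic intermediary mapping g between the Gaussian latent z and the decoder, so that the decoder sees a structured code g(z). Below is one hypothetical choice of g that induces sparsity via soft thresholding; the paper's actual mappings may differ.

import torch
import torch.nn as nn

class SparsifyingIntermediary(nn.Module):
    # Soft-threshold shrinkage: pushes small coordinates of z exactly to zero.
    def __init__(self, threshold: float = 0.5):
        super().__init__()
        self.threshold = threshold

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sign(z) * torch.clamp(z.abs() - self.threshold, min=0.0)

# In a VAE forward pass, sample z ~ q(z|x) as usual, then decode g(z):
#   g = SparsifyingIntermediary()
#   x_recon = decoder(g(z))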
- Discrete Auto-regressive Variational Attention Models for Text Modeling [53.38382932162732]
Variational autoencoders (VAEs) have been widely applied for text modeling.
They are troubled by two challenges: information underrepresentation and posterior collapse.
We propose the Discrete Auto-regressive Variational Attention Model (DAVAM) to address these challenges.
arXiv Detail & Related papers (2021-06-16T06:36:26Z)
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach, which can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
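The Autoencoding VAE entry above introduces a notion of self consistency. One natural formalization, sketched here as an interpretation rather than the paper's exact loss, is to re-encode the decoder's output and penalize the divergence between the original posterior and the re-encoded one.

import torch

def gaussian_kl(mu1, logvar1, mu2, logvar2):
    # KL( N(mu1, var1) || N(mu2, var2) ), summed over dims, averaged over batch.
    var1, var2 = logvar1.exp(), logvar2.exp()
    kl = 0.5 * (logvar2 - logvar1 + (var1 + (mu1 - mu2).pow(2)) / var2 - 1)
    return kl.sum(dim=1).mean()

def self_consistency_loss(encoder, decoder, x):
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    mu2, logvar2 = encoder(decoder(z))  # re-encode the reconstruction
    return gaussian_kl(mu, logvar, mu2, logvar2)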
- Super-resolution Variational Auto-Encoders [8.873449722727026]
We propose to enhance VAEs by adding a random variable that is a downscaled version of the original image.
We show empirically that the proposed approach performs comparably to VAEs in terms of the negative log-likelihood.
arXiv Detail & Related papers (2020-06-09T12:32:16Z)
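The Super-resolution VAE entry above adds a downscaled copy of the image as an extra random variable. Here is a minimal sketch of the data side, assuming average pooling and a factor-of-2 downscaling (both illustrative choices): the model reconstructs the image at both scales.

import torch
import torch.nn.functional as F

def two_scale_targets(x: torch.Tensor, factor: int = 2):
    # Return (x_low, x), where x_low is an average-pooled downscaled view.
    return F.avg_pool2d(x, kernel_size=factor), x

# Training sketch: decode z into a low-resolution image first, then upscale
# and refine it into the full-resolution reconstruction, with a
# reconstruction loss at each scale:
#   x_low, x_full = two_scale_targets(x)
#   y_low = decoder_low(z)
#   y_full = decoder_full(F.interpolate(y_low, scale_factor=2))
#   loss = F.mse_loss(y_low, x_low) + F.mse_loss(y_full, x_full) + kl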
- Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that the Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z)