GCVAE: Generalized-Controllable Variational AutoEncoder
- URL: http://arxiv.org/abs/2206.04225v1
- Date: Thu, 9 Jun 2022 02:29:30 GMT
- Title: GCVAE: Generalized-Controllable Variational AutoEncoder
- Authors: Kenneth Ezukwoke, Anis Hoayek, Mireille Batton-Hubert, and Xavier
Boucher
- Abstract summary: We present a framework to handle the trade-off between attaining extremely low reconstruction error and a high disentanglement score.
We prove that maximizing information in the reconstruction network is equivalent to information maximization during amortized inference.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) have recently been used for unsupervised
disentanglement learning of complex density distributions. Numerous variants
exist to encourage disentanglement in latent space while improving
reconstruction. However, none have simultaneously managed the trade-off between
attaining extremely low reconstruction error and a high disentanglement score.
We present a generalized framework to handle this challenge under constrained
optimization and demonstrate that it outperforms existing state-of-the-art
models on disentanglement while balancing reconstruction. We introduce three
controllable Lagrangian hyperparameters to weight the reconstruction loss, the
KL divergence, and a correlation measure. We prove that maximizing
information in the reconstruction network is equivalent to information
maximization during amortized inference under reasonable assumptions and
constraint relaxation.
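For intuition, here is a minimal PyTorch sketch of an objective with this three-weight shape. The fixed multipliers `alpha`, `beta`, `gamma` and the off-diagonal correlation penalty are illustrative stand-ins, not the paper's exact formulation, in which the multipliers are Lagrangian variables adapted during training.

```python
import torch

def gcvae_style_loss(x, x_hat, mu, logvar, alpha=1.0, beta=1.0, gamma=1.0):
    """Three-term VAE objective: reconstruction, KL divergence, and a
    correlation penalty, each scaled by its own multiplier. Sketch only:
    the paper's multipliers are Lagrangian, not the constants used here."""
    # Reconstruction term (mean squared error over the batch).
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="mean")
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Correlation measure: mean squared off-diagonal entry of the latent
    # correlation matrix (a simple stand-in for the paper's measure).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    corr = torch.corrcoef(z.T)
    off_diag = corr - torch.diag(torch.diag(corr))
    return alpha * recon + beta * kl + gamma * off_diag.pow(2).mean()
```

Setting `gamma = 0` recovers a beta-VAE-style objective, which is one way to see what the third multiplier adds: explicit control over latent decorrelation on top of the usual reconstruction/KL trade-off.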
Related papers
- Uniform Transformation: Refining Latent Representation in Variational Autoencoders [7.4316292428754105]
We introduce a novel adaptable three-stage Uniform Transformation (UT) module to address irregular latent distributions.
By reconfiguring irregular distributions into a uniform distribution in the latent space, our approach significantly enhances the disentanglement and interpretability of latent representations.
Empirical evaluations demonstrated the efficacy of our proposed UT module in improving disentanglement metrics across benchmark datasets.
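As a rough illustration of reshaping an irregular latent distribution into a uniform one, the probability integral transform below maps per-dimension Gaussian latents onto Uniform(0, 1). This one-step mapping is an assumption made for the sketch; the paper's UT module is a three-stage, adaptable procedure.

```python
import torch

def gaussian_to_uniform(z, mu, sigma):
    """Map latents assumed ~ N(mu, sigma^2) per dimension to Uniform(0, 1)
    via the probability integral transform Phi((z - mu) / sigma). Only an
    illustration of uniformizing a latent distribution, not the UT module."""
    return 0.5 * (1.0 + torch.erf((z - mu) / (sigma * 2.0 ** 0.5)))

# Standard-normal latents land approximately uniformly in [0, 1].
z = torch.randn(10_000, 8)
u = gaussian_to_uniform(z, mu=0.0, sigma=1.0)
print(u.min().item(), u.max().item())  # both close to the interval ends
```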
arXiv Detail & Related papers (2024-07-02T21:46:23Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse
Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
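A hedged sketch of the general recipe behind such regularizers: reconstruct by minimizing data fidelity plus a learned penalty. The `reg_net` callable, step count, and weights below are placeholders; CLEAR's convexity constraints, latent optimization, and adversarial training are omitted.

```python
import torch

def reconstruct(y, A, reg_net, steps=200, lr=1e-2, lam=0.1):
    """Solve min_x ||A x - y||^2 + lam * R(x) by gradient descent, where
    R is a learned (ideally convex) regularizer; here, any callable
    mapping x to a scalar."""
    x = torch.zeros(A.shape[1], requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((A @ x - y) ** 2) + lam * reg_net(x)
        loss.backward()
        opt.step()
    return x.detach()

# Toy usage: quadratic "regularizer", random underdetermined forward model.
torch.manual_seed(0)
A = torch.randn(20, 50)
y = A @ torch.randn(50)
x_hat = reconstruct(y, A, reg_net=lambda x: torch.sum(x ** 2))
```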
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have known failure modes: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
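The sketch below shows, under assumed layer sizes and wiring, how a cross-attentive denoiser can let a noised target-item embedding attend over the encoded interaction sequence; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossAttentiveDenoiser(nn.Module):
    """A noised target-item embedding, conditioned on the diffusion step,
    cross-attends over sequence-encoder states to predict the denoised
    embedding. Sizes and wiring are assumptions for this sketch."""
    def __init__(self, dim=64, heads=4, max_steps=1000):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.step_embed = nn.Embedding(max_steps, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, noised_target, seq_states, t):
        q = (noised_target + self.step_embed(t)).unsqueeze(1)  # (B, 1, D)
        ctx, _ = self.attn(q, seq_states, seq_states)          # (B, 1, D)
        return self.out(ctx.squeeze(1))                        # (B, D)

# Toy usage: batch of 8 users, sequence length 20, 64-dim embeddings.
model = CrossAttentiveDenoiser()
x_t = torch.randn(8, 64)            # noised target embeddings
h = torch.randn(8, 20, 64)          # sequence-encoder outputs
t = torch.randint(0, 1000, (8,))    # diffusion steps
x0_pred = model(x_t, h, t)          # predicted denoised embeddings
```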
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Variantional autoencoder with decremental information bottleneck for
disentanglement [16.93743613675349]
We present a novel framework for disentangled representation learning, DeVAE, which utilizes hierarchical latent spaces with decreasing information bottlenecks.
The key innovation of our approach lies in connecting the hierarchical latent spaces through disentanglement-invariant transformations.
We demonstrate the effectiveness of DeVAE in achieving a balance between disentanglement and reconstruction through a series of experiments and ablation studies on dSprites and Shapes3D datasets.
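One way to read "decreasing information bottlenecks" is a per-level KL budget that tightens down the hierarchy. The sketch below encodes that reading with level-dependent KL weights; it is an interpretation for illustration, not DeVAE's actual loss.

```python
import torch

def decremental_kl(mus, logvars, base_beta=1.0, decay=0.5):
    """Sum per-level diagonal-Gaussian KL terms with weights that grow
    down the hierarchy (level k gets base_beta / decay**k), so deeper
    latent spaces are squeezed through a tighter bottleneck."""
    total = torch.zeros(())
    for k, (mu, logvar) in enumerate(zip(mus, logvars)):
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        total = total + (base_beta / decay ** k) * kl
    return total
```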
arXiv Detail & Related papers (2023-03-22T23:38:10Z) - DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly
Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
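At its simplest, the input/restoration pair already yields a per-pixel score; the plain residual below is a minimal stand-in for the learned segmentation sub-network.

```python
import torch

def pixel_anomaly_scores(image, restoration):
    """Per-pixel anomaly score: channel-averaged squared difference
    between the input and its (assumed anomaly-free) restoration.
    DiffusionAD instead feeds the pair to a segmentation sub-network."""
    return ((image - restoration) ** 2).mean(dim=1)  # (B, H, W)

# Toy usage on a batch of 3-channel images.
img = torch.rand(2, 3, 64, 64)
rec = torch.rand(2, 3, 64, 64)
scores = pixel_anomaly_scores(img, rec)  # higher = more anomalous
```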
arXiv Detail & Related papers (2023-03-15T16:14:06Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
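A compact sketch of the Laplace idea behind such models: refine a latent estimate to a mode of the log joint, then take the local curvature as a full covariance. The optimizer, step counts, and `log_joint` interface are assumptions, not VLAE's procedure.

```python
import torch
from torch.autograd.functional import hessian

def laplace_posterior(log_joint, z_init, steps=100, lr=1e-2):
    """Fit q(z|x) = N(z*, -H^{-1}), where z* locally maximizes
    log p(x, z) and H is the Hessian there (the Laplace approximation).
    `log_joint` maps a latent vector to a scalar."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-log_joint(z)).backward()
        opt.step()
    z_star = z.detach()
    H = hessian(log_joint, z_star)
    cov = torch.linalg.inv(-H)  # valid when H is negative definite at z*
    return z_star, cov

# Toy usage: for a Gaussian log joint the Laplace covariance is exact.
prec = torch.tensor([[2.0, 0.3], [0.3, 1.0]])
z_star, cov = laplace_posterior(lambda z: -0.5 * z @ prec @ z,
                                torch.randn(2), steps=500)
```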
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty
Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - End-to-end reconstruction meets data-driven regularization for inverse
problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
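A sketch of unrolling in this spirit: a fixed number of variational gradient steps on the data term, each followed by a small learned refinement, trainable end to end. The refinement layers and forward operator below are toy placeholders.

```python
import torch
import torch.nn as nn

class UnrolledNet(nn.Module):
    """Unrolled reconstruction: alternate an explicit gradient step on
    ||Ax - y||^2 with a learned per-iteration refinement."""
    def __init__(self, A, n_iter=5):
        super().__init__()
        self.A = A
        self.step = nn.Parameter(torch.tensor(0.01))
        dim = A.shape[1]
        self.refine = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_iter))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for layer in self.refine:
            grad = (x @ self.A.T - y) @ self.A   # gradient of the data term
            x = layer(x - self.step * grad)      # learned proximal-style step
        return x

A = torch.randn(20, 50)   # toy measurement operator
net = UnrolledNet(A)
x_hat = net(torch.randn(4, 20))
```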
arXiv Detail & Related papers (2021-06-07T12:05:06Z) - Learning disentangled representations with the Wasserstein Autoencoder [22.54887526392739]
We propose TCWAE (Total Correlation Wasserstein Autoencoder) to penalize the total correlation in latent variables.
We show that working in the WAE paradigm naturally enables the separation of the total-correlation term, thus providing disentanglement control over the learned representation.
We further study the trade-off between disentanglement and reconstruction on more difficult datasets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstructions.
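Under a Gaussian approximation the total-correlation term has the closed form shown below, which is zero exactly when the latent covariance is diagonal; TCWAE itself estimates and penalizes the term inside the WAE objective, so this is intuition only.

```python
import torch

def gaussian_tc(z):
    """Total correlation of a latent batch under a Gaussian fit:
    TC = 0.5 * (sum_j log Var(z_j) - log det Cov(z))."""
    cov = torch.cov(z.T)  # empirical covariance, rows = latent dimensions
    return 0.5 * (torch.log(torch.diag(cov)).sum() - torch.logdet(cov))

# Correlated latents score positive; independent ones are near zero.
z = torch.randn(4096, 3)
z_corr = z.clone()
z_corr[:, 1] = 0.8 * z[:, 0] + 0.2 * z[:, 1]
print(gaussian_tc(z).item(), gaussian_tc(z_corr).item())
```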
arXiv Detail & Related papers (2020-10-07T14:52:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.