Variational Auto-Encoder: not all failures are equal
- URL: http://arxiv.org/abs/2003.01972v1
- Date: Wed, 4 Mar 2020 09:48:02 GMT
- Title: Variational Auto-Encoder: not all failures are equal
- Authors: Michèle Sebag (LRI), Victor Berger (TAU)
- Abstract summary: We show how sharpness learning addresses the notorious VAE blurriness issue, with supporting experiments on artificial data, MNIST and CelebA.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We claim that a source of severe failures for Variational Auto-Encoders is
the choice of the distribution class used for the observation model. A first
theoretical and experimental contribution of the paper is to establish that,
even in the large sample limit with arbitrarily powerful neural architectures
and latent space, the VAE fails if the sharpness of the distribution class does
not match the scale of the data. Our second claim is that the distribution
sharpness should preferably be learned by the VAE (as opposed to being fixed and
optimized offline): autonomously adjusting this sharpness allows the VAE to
dynamically control the trade-off between the optimization of the
reconstruction loss and the latent compression. A second empirical contribution
is to show how the control of this trade-off is instrumental in escaping poor
local optima, akin to a simulated annealing schedule. Both claims are backed by
experiments on artificial data, MNIST and CelebA, showing how sharpness
learning addresses the notorious VAE blurriness issue.
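The mechanism described above lends itself to a short illustration. Below is a minimal, hedged PyTorch sketch of a VAE whose Gaussian observation model has a learned scale (sharpness): the class name SharpnessVAE, the architecture, and the single shared log_scale parameter are illustrative assumptions rather than the authors' implementation; only the idea of learning the observation scale jointly under the ELBO comes from the abstract.

```python
import torch
import torch.nn as nn

class SharpnessVAE(nn.Module):
    """Illustrative VAE with a learned observation-model sharpness."""

    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu_z = nn.Linear(h_dim, z_dim)
        self.logvar_z = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # Learned global sharpness of the Gaussian observation model:
        # a single log-scale shared across all pixels (an assumption here;
        # per-pixel or per-channel scales are equally possible).
        self.log_scale = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu_z(h), self.logvar_z(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_hat = self.dec(z)
        sigma = torch.exp(self.log_scale)
        # Gaussian NLL with learned scale: 0.5*((x - x_hat)/sigma)^2 + log(sigma)
        # per pixel (additive constant 0.5*log(2*pi) omitted).
        rec = (0.5 * ((x - x_hat) / sigma) ** 2 + self.log_scale).sum(dim=1)
        # Closed-form KL(q(z|x) || N(0, I)).
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
        return (rec + kl).mean()  # negative ELBO to minimize
```

When sigma is large, the reconstruction term is down-weighted relative to the KL term; as the optimizer shrinks sigma, reconstruction progressively dominates, which mirrors the annealing-like trade-off schedule the abstract describes.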
Related papers
- Energy Score-based Pseudo-Label Filtering and Adaptive Loss for Imbalanced Semi-supervised SAR target recognition [1.2035771704626825]
Existing semi-supervised SAR ATR algorithms show low recognition accuracy in the case of class imbalance.
This work offers a non-balanced semi-supervised SAR target recognition approach using dynamic energy scores and adaptive loss.
arXiv Detail & Related papers (2024-11-06T14:45:16Z)
- Fair CoVariance Neural Networks [34.68621550644667]
We propose Fair coVariance Neural Networks (FVNNs), which perform graph convolutions on the covariance matrix for both fair and accurate predictions.
We prove that FVNNs are intrinsically fairer than analogous PCA approaches thanks to their stability in low sample regimes.
arXiv Detail & Related papers (2024-09-13T06:24:18Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models for sequential recommendation are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
The proposed SCORE (Self-COnsistent Robust Error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Re-parameterizing VAEs for stability [1.90365714903665]
We propose a theoretical approach towards the training numerical stability of Variational AutoEncoders (VAE)
Our work is motivated by recent studies empowering VAEs to reach state of the art generative results on complex image datasets.
We show that with small changes to the way the Normal distributions they rely on are parameterized, VAEs can be trained stably (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-06-25T16:19:09Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs [10.611727286504994]
We study the effect of the latent prior on the generation quality of deterministic AE models.
We show that our model, called FlexAE, is the new state-of-the-art for the AE based generative models.
arXiv Detail & Related papers (2020-06-10T14:00:14Z)
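As flagged in the "Re-parameterizing VAEs for stability" entry above, a recurring source of VAE training instability is how the network produces the scale of its Normal distributions. The following is a minimal sketch of one common stabilization, assuming a softplus parameterization with a floor and ceiling; the paper's exact parameterization may differ, and the function name stable_scale is a hypothetical label for illustration.

```python
import torch
import torch.nn.functional as F

def stable_scale(raw: torch.Tensor,
                 min_sigma: float = 1e-3,
                 max_sigma: float = 10.0) -> torch.Tensor:
    """Map an unconstrained network output to a numerically safe std-dev.

    softplus grows linearly (not exponentially) for large inputs, and the
    floor/ceiling keep sigma in a range where log-probability terms and
    their gradients stay finite, even early in training.
    """
    return (F.softplus(raw) + min_sigma).clamp(max=max_sigma)

# Illustrative usage:
#   sigma = stable_scale(scale_head(h))
#   dist = torch.distributions.Normal(mu, sigma)
```

Compared with exponentiating an unbounded log-variance, this bounded parameterization avoids the overflow and vanishing-scale failure modes that destabilize VAE training.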
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.