To Regularize or Not To Regularize? The Bias Variance Trade-off in
Regularized AEs
- URL: http://arxiv.org/abs/2006.05838v2
- Date: Sat, 19 Sep 2020 10:56:48 GMT
- Title: To Regularize or Not To Regularize? The Bias Variance Trade-off in
Regularized AEs
- Authors: Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
- Abstract summary: We study the effect of the latent prior on the generation quality of deterministic AE models.
We show that our model, called FlexAE, is the new state-of-the-art for the AE based generative models.
- Score: 10.611727286504994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularized Auto-Encoders (RAEs) form a rich class of neural generative
models. They effectively model the joint-distribution between the data and the
latent space using an Encoder-Decoder combination, with regularization imposed
in terms of a prior over the latent space. Despite their advantages, such as
stability in training, the performance of AE-based models has not matched that
of other generative models such as Generative Adversarial Networks (GANs).
Motivated by this, we examine the effect of the
latent prior on the generation quality of deterministic AE models in this
paper. Specifically, we consider the class of RAEs with deterministic
Encoder-Decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a
fixed prior distribution, \textit{a priori}, oblivious to the dimensionality of
the `true' latent space, will lead to the infeasibility of the optimization
problem considered. Further, we show that, in the finite data regime, despite
knowing the correct latent dimensionality, there exists a bias-variance
trade-off with any arbitrary prior imposition. As a remedy to both the issues
mentioned above, we introduce an additional state space in the form of flexibly
learnable latent priors, in the optimization objective of the WAEs. We
implicitly learn the distribution of the latent prior jointly with the AE
training, which not only makes the learning objective feasible but also
facilitates operation on different points of the bias-variance curve. We show
the efficacy of our model, called FlexAE, through several experiments on
multiple datasets, and demonstrate that it is the new state-of-the-art for the
AE based generative models.
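The abstract describes augmenting the WAE objective with a learnable latent prior: instead of penalizing divergence from a fixed prior such as N(0, I), prior samples are drawn from a small generator trained jointly with the auto-encoder. The sketch below is a minimal, hedged illustration of that idea, not the paper's implementation: the encoder, decoder, and prior generator are stand-in linear maps, the divergence is a biased RBF-kernel MMD estimate (one common choice in WAEs), and all names (`W_enc`, `W_dec`, `W_prior`, `lam`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased squared-MMD estimate; exactly zero when x and y coincide.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Deterministic encoder/decoder stand-ins (linear maps, illustration only).
d_data, d_latent, n = 8, 2, 64
W_enc = rng.normal(size=(d_latent, d_data)) * 0.1
W_dec = rng.normal(size=(d_data, d_latent)) * 0.1

# Learnable prior: rather than fixing the prior a priori, draw prior samples
# from a generator g(eps) = eps @ W_prior.T whose parameters would be
# optimized jointly with the auto-encoder during training.
W_prior = rng.normal(size=(d_latent, d_latent))

x = rng.normal(size=(n, d_data))
z = x @ W_enc.T                      # encoded latents
x_hat = z @ W_dec.T                  # reconstructions
eps = rng.normal(size=(n, d_latent))
z_prior = eps @ W_prior.T            # samples from the learnable prior

lam = 10.0
recon = ((x - x_hat) ** 2).mean()
loss = recon + lam * mmd2_biased(z, z_prior)   # WAE-style objective
```

Because `W_prior` enters the MMD term, a gradient step on the full loss can move the prior toward the encoded latents as well as the latents toward the prior, which is what makes the matching constraint feasible even when the "true" latent dimensionality is unknown.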
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation [0.7614628596146602]
We propose a new approach that expands the model capacity without sacrificing the computational advantages of the VAE framework.
Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distribution.
We apply the proposed model to synthetic data generation, and particularly, our model demonstrates superiority in easily adjusting the level of data privacy.
arXiv Detail & Related papers (2023-02-22T11:26:50Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE [51.09507030387935]
Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.
We propose to use the contrastive learning framework that has been shown to be effective for self-supervised representation learning, as a means to resolve this problem.
We show that using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE.
arXiv Detail & Related papers (2021-10-19T22:55:47Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Automatic Differentiation Variational Inference with Mixtures [4.995383193706478]
We show how stratified sampling may be used to enable mixture distributions as the approximate posterior.
We derive a new lower bound on the evidence analogous to the importance weighted autoencoder (IWAE).
arXiv Detail & Related papers (2020-03-03T18:12:42Z)
- Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our results were found to mitigate unfairness from the predictions with small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
- Regularized Autoencoders via Relaxed Injective Probability Flow [35.39933775720789]
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity.
arXiv Detail & Related papers (2020-02-20T18:22:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.