Fisher Auto-Encoders
- URL: http://arxiv.org/abs/2007.06120v2
- Date: Fri, 23 Oct 2020 12:45:31 GMT
- Title: Fisher Auto-Encoders
- Authors: Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh
- Abstract summary: We propose a new class of robust generative auto-encoders (AE) referred to as Fisher auto-encoders.
Our approach is to design Fisher AEs by minimizing the Fisher divergence between the intractable joint distribution of observed data and latent variables and the postulated/modeled joint distribution.
In contrast to KL-based variational AEs (VAEs), the Fisher AE can exactly quantify the distance between the true and the model-based posterior distributions.
- Score: 37.14820766773909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been conjectured that the Fisher divergence is more robust to model
uncertainty than the conventional Kullback-Leibler (KL) divergence. This
motivates the design of a new class of robust generative auto-encoders (AE)
referred to as Fisher auto-encoders. Our approach is to design Fisher AEs by
minimizing the Fisher divergence between the intractable joint distribution of
observed data and latent variables and that of the postulated/modeled joint
distribution. In contrast to KL-based variational AEs (VAEs), the Fisher AE can
exactly quantify the distance between the true and the model-based posterior
distributions. Qualitative and quantitative results are provided on both the
MNIST and CelebA datasets, demonstrating that Fisher AEs are competitive in
robustness with other AEs such as VAEs and Wasserstein AEs.
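For reference, the Fisher divergence between two densities p and q is commonly defined (this is the standard textbook form, stated here for context) as

\[
  \mathrm{F}(p \,\|\, q) = \mathbb{E}_{x \sim p}\!\left[ \tfrac{1}{2} \big\| \nabla_x \log p(x) - \nabla_x \log q(x) \big\|_2^2 \right],
\]

i.e. the expected squared distance between the score functions of p and q. In the Fisher AE, p and q are taken to be the true and the postulated joint distributions over observed data and latent variables.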
Related papers
- Model Merging by Uncertainty-Based Gradient Matching [70.54580972266096]
We propose a new uncertainty-based scheme that improves merging performance by reducing the gradient mismatch between the models.
Our new method gives consistent improvements for large language models and vision transformers.
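As a rough illustration of the uncertainty-weighted idea, below is a minimal sketch of Fisher-weighted parameter averaging, a common instance of uncertainty-based merging; the function name and the per-parameter certainty estimates are hypothetical, and the paper's exact scheme may differ.

import numpy as np

def fisher_weighted_merge(params, fishers, eps=1e-8):
    # Merge per-model parameter vectors, weighting each coordinate by an
    # approximate Fisher/uncertainty estimate (hypothetical sketch, not the
    # paper's exact algorithm).
    params = np.stack(params)     # shape: (n_models, n_params)
    fishers = np.stack(fishers)   # same shape; larger = more certain
    return (fishers * params).sum(axis=0) / (fishers.sum(axis=0) + eps)

# Toy usage: two "models", each certain about a different coordinate.
theta_a, theta_b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
f_a, f_b = np.array([10.0, 0.1]), np.array([0.1, 10.0])
print(fisher_weighted_merge([theta_a, theta_b], [f_a, f_b]))
# close to [1.0, 2.0]: each model dominates where its uncertainty is low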
arXiv Detail & Related papers (2023-10-19T15:02:45Z)
- Unscented Autoencoder [3.0108936184913295]
The Variational Autoencoder (VAE) is a seminal approach in deep generative modeling with latent variables.
We apply the Unscented Transform (UT), a well-known distribution-approximation technique used in the Unscented Kalman Filter (UKF) from the field of filtering.
We derive a novel, deterministic-sampling flavor of the VAE, the Unscented Autoencoder (UAE), trained purely with regularization-like terms on the per-sample posterior.
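Below is a minimal NumPy sketch of the Unscented Transform itself (the standard sigma-point construction named above); it is not the UAE training loop, and the toy nonlinearity is illustrative.

import numpy as np

def unscented_transform(mu, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    # Propagate N(mu, cov) through a nonlinearity f using 2n+1 sigma points.
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)     # scaled matrix square root
    pts = np.vstack([mu, mu + S.T, mu - S.T])   # mean +/- Cholesky columns
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(p) for p in pts])           # push sigma points through f
    mean = wm @ Y
    diff = Y - mean
    return mean, (wc[:, None] * diff).T @ diff  # approximate output mean/cov

# Toy usage: a 2-D Gaussian pushed through tanh.
m, C = unscented_transform(np.array([0.5, -0.2]), 0.1 * np.eye(2), np.tanh)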
arXiv Detail & Related papers (2023-06-08T14:53:02Z)
- Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to Bayesian inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian (SWAG).
arXiv Detail & Related papers (2022-12-29T11:48:01Z)
- On the failure of variational score matching for VAE models [3.8073142980733]
We present a critical study of existing variational SM objectives, showing catastrophic failure on a wide range of datasets and network architectures.
Our theoretical insights on the objectives emerge directly from their equivalent autoencoding losses when optimizing variational autoencoder (VAE) models.
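For context, the basic score matching objective that these variational SM objectives build on is, after Hyvärinen's integration-by-parts identity,

\[
  J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[ \tfrac{1}{2} \big\| \nabla_x \log q_\theta(x) \big\|_2^2 + \operatorname{tr}\!\big( \nabla_x^2 \log q_\theta(x) \big) \right] + \mathrm{const}.
\]

The variational variants studied in the paper replace the intractable marginal score of the VAE with surrogates involving the approximate posterior.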
arXiv Detail & Related papers (2022-10-24T16:43:04Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance on some synthetic distribution shifts from CIFAR-10-C and on the tissue classification dataset Camelyon17-WILDS.
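A toy sketch of the kind of measurement behind this claim: correlate ID and OOD accuracies across models after a probit transform of both axes (the scaling the paper uses to linearize the trend). The accuracy numbers below are purely illustrative.

import numpy as np
from scipy.stats import norm, pearsonr

# Hypothetical (ID accuracy, OOD accuracy) pairs for several models.
id_acc  = np.array([0.85, 0.90, 0.93, 0.95, 0.97])
ood_acc = np.array([0.55, 0.63, 0.70, 0.74, 0.80])

# Probit-transform both axes, then compute the linear correlation.
r, _ = pearsonr(norm.ppf(id_acc), norm.ppf(ood_acc))
print(f"probit-domain correlation: {r:.3f}")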
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
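A minimal 1-D sketch of the analytic property the objective exploits: every integral in the Cauchy-Schwarz divergence D_CS(p, q) = -log(∫pq / sqrt(∫p² ∫q²)) is closed-form for GMMs, via ∫N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2). The toy mixtures are illustrative, not the paper's setup.

import numpy as np
from scipy.stats import norm

def gmm_cross(w1, mu1, var1, w2, mu2, var2):
    # Closed-form integral of the product of two 1-D GMMs.
    return sum(a * b * norm.pdf(ma, loc=mb, scale=np.sqrt(va + vb))
               for a, ma, va in zip(w1, mu1, var1)
               for b, mb, vb in zip(w2, mu2, var2))

def cs_divergence(p, q):
    # D_CS(p, q) = -log( ∫pq / sqrt(∫p² ∫q²) ); equals 0 iff p == q.
    pq, pp, qq = gmm_cross(*p, *q), gmm_cross(*p, *p), gmm_cross(*q, *q)
    return -np.log(pq / np.sqrt(pp * qq))

# Toy usage: two 1-D mixtures given as (weights, means, variances).
p = ([0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])
q = ([0.3, 0.7], [-0.5, 1.5], [0.4, 0.6])
print(cs_divergence(p, q))   # > 0; cs_divergence(p, p) is 0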
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Unbiased Gradient Estimation for Variational Auto-Encoders using Coupled Markov Chains [34.77971292478243]
The variational auto-encoder (VAE) is a deep latent variable model that has two neural networks in an autoencoder-like architecture.
We develop a training scheme for VAEs by introducing unbiased estimators of the log-likelihood gradient.
We show experimentally that VAEs fitted with unbiased estimators exhibit better predictive performance.
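A toy illustration of the coupled-chains debiasing idea (a Glynn-Rhee / Jacob-style estimator on a 3-state chain); the paper's VAE-specific estimator is substantially more involved, so treat this as the underlying principle only.

import numpy as np

rng = np.random.default_rng(0)

# A 3-state Markov chain; we estimate a stationary expectation without bias.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def step(x, u):
    # Inverse-CDF step driven by a uniform u; feeding both chains the same
    # u's makes them coalesce and then stay together (a faithful coupling).
    return int(np.searchsorted(np.cumsum(P[x]), u))

def unbiased_estimate(h, k=3):
    # Both chains start from the same (uniform) initial law; X runs one step
    # ahead of Y with shared randomness. h(X_k) plus a telescoping correction
    # up to the meeting time is unbiased for the stationary expectation of h.
    x0, y0 = rng.integers(3), rng.integers(3)
    X, Y, t = [x0, step(x0, rng.random())], [y0], 1
    while X[t] != Y[t - 1]:           # run the coupled pair until it meets
        u = rng.random()
        X.append(step(X[t], u))
        Y.append(step(Y[t - 1], u))
        t += 1
    tau = t
    while len(X) <= k:                # ensure X_k exists if they met early
        X.append(step(X[-1], rng.random()))
    return h(X[k]) + sum(h(X[j]) - h(Y[j - 1]) for j in range(k + 1, tau))

h = lambda s: float(s == 2)           # indicator of state 2
est = np.mean([unbiased_estimate(h) for _ in range(20000)])
evals, evecs = np.linalg.eig(P.T)     # exact stationary law for comparison
pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()
print(est, pi[2])                     # the two should agree closely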
arXiv Detail & Related papers (2020-10-05T08:11:55Z)
- To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs [10.611727286504994]
We study the effect of the latent prior on the generation quality of deterministic AE models.
We show that our model, FlexAE, sets a new state of the art among AE-based generative models.
arXiv Detail & Related papers (2020-06-10T14:00:14Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
- Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
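A minimal PyTorch sketch of the core idea, a dropout mask whose rate is a trainable parameter, made differentiable with a Concrete (relaxed Bernoulli) sample; this is a sketch in the spirit of the abstract, not the authors' exact implementation.

import torch
import torch.nn as nn

class LearnableBernoulliDropout(nn.Module):
    def __init__(self, init_keep=0.9, temperature=0.1):
        super().__init__()
        # Parameterize the keep-probability by a logit so it stays in (0, 1)
        # and can be optimized jointly with the other model parameters.
        self.logit = nn.Parameter(torch.logit(torch.tensor(init_keep)))
        self.temperature = temperature

    def forward(self, x):
        if not self.training:
            return x * torch.sigmoid(self.logit)  # expected scaling at test time
        u = torch.rand_like(x).clamp(1e-6, 1 - 1e-6)
        # Relaxed Bernoulli sample: sigmoid((logit + logistic noise) / temp),
        # which lets gradients flow into the dropout rate itself.
        noise = torch.log(u) - torch.log1p(-u)
        return x * torch.sigmoid((self.logit + noise) / self.temperature)

# Usage: the keep-rate now receives gradients like any other parameter.
layer = LearnableBernoulliDropout()
out = layer(torch.randn(4, 8))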
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.