Recursive Inference for Variational Autoencoders
- URL: http://arxiv.org/abs/2011.08544v1
- Date: Tue, 17 Nov 2020 10:22:12 GMT
- Title: Recursive Inference for Variational Autoencoders
- Authors: Minyoung Kim, Vladimir Pavlovic
- Abstract summary: Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized.
Recent semi-amortized approaches were proposed to address this drawback.
We introduce an accurate amortized inference algorithm.
- Score: 34.552283758419506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inference networks of traditional Variational Autoencoders (VAEs) are
typically amortized, resulting in relatively inaccurate posterior approximation
compared to instance-wise variational optimization. Recent semi-amortized
approaches were proposed to address this drawback; however, their iterative
gradient update procedures can be computationally demanding. To address these
issues, in this paper we introduce an accurate amortized inference algorithm.
We propose a novel recursive mixture estimation algorithm for VAEs that
iteratively augments the current mixture with new components so as to maximally
reduce the divergence between the variational and the true posteriors. Using
the functional gradient approach, we devise an intuitive learning criterion for
selecting a new mixture component: the new component has to improve the data
likelihood (lower bound) and, at the same time, be as divergent from the
current mixture distribution as possible, thus increasing representational
diversity. Compared to recently proposed boosted variational inference (BVI),
our method relies on amortized inference in contrast to BVI's non-amortized
single optimization instance. A crucial benefit of our approach is that the
inference at test time requires a single feed-forward pass through the mixture
inference network, making it significantly faster than the semi-amortized
approaches. We show that our approach yields higher test data likelihood than
the state-of-the-art on several benchmark datasets.
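The greedy criterion described in the abstract — add the mixture component that improves the data likelihood (lower bound) while remaining as divergent as possible from the current mixture — can be sketched on a toy 1D problem. Everything below is an illustrative simplification, not the paper's method: unit-variance Gaussian components, uniform mixture weights, and a grid search over candidate means stand in for the amortized inference network and the functional-gradient updates.

```python
import numpy as np

# --- Toy setup: unnormalized target log-density with two well-separated modes ---
def log_p(z):
    # log of N(z; -3, 1) + N(z; +3, 1), up to an additive constant
    return np.logaddexp(-0.5 * (z + 3.0) ** 2, -0.5 * (z - 3.0) ** 2)

def log_gauss(z, mu):
    return -0.5 * (z - mu) ** 2 - 0.5 * np.log(2 * np.pi)

def log_mixture(z, mus):
    # uniform mixture weights (a simplification; the paper learns them)
    comps = np.stack([log_gauss(z, mu) for mu in mus])
    return np.logaddexp.reduce(comps, axis=0) - np.log(len(mus))

# --- Greedy component selection (sketch of the recursive criterion) ------------
rng = np.random.default_rng(0)
eps = rng.standard_normal(20_000)  # common random numbers shared by all candidates

def score(mu, mus, lam=0.05):
    """Candidate's Monte-Carlo ELBO plus a term rewarding divergence
    from the current mixture (hypothetical stand-in for the paper's
    functional-gradient criterion)."""
    z = mu + eps                                      # samples from q_new = N(mu, 1)
    elbo = np.mean(log_p(z) - log_gauss(z, mu))       # candidate's own ELBO
    if not mus:
        return elbo
    repulsion = np.mean(log_gauss(z, mu) - log_mixture(z, mus))
    return elbo + lam * repulsion

candidates = np.linspace(-6.0, 6.0, 121)
mus = []
for step in range(2):
    best = candidates[np.argmax([score(mu, mus) for mu in candidates])]
    mus.append(best)
    print(f"step {step}: added component at mu = {best:+.2f}")
```

On this toy target the first component lands near one mode and the repulsion term steers the second component toward the other mode, so the two-component mixture covers both — the kind of representational diversity the criterion is designed to encourage.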
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Nonparametric Automatic Differentiation Variational Inference with Spline Approximation [7.5620760132717795]
We develop a nonparametric approximation approach that enables flexible posterior approximation for distributions with complicated structures.
Compared with widely used nonparametric inference methods, the proposed method is easy to implement and adapts to various data structures.
Experiments demonstrate the efficiency of the proposed method in approximating complex posterior distributions and improving the performance of generative models with incomplete data.
arXiv Detail & Related papers (2024-03-10T20:22:06Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
arXiv Detail & Related papers (2023-11-29T15:49:31Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- AdaTerm: Adaptive T-Distribution Estimated Robust Moments for Noise-Robust Stochastic Gradient Optimization [14.531550983885772]
We propose AdaTerm, a novel approach that incorporates the Student's t-distribution to derive not only the first-order moment but also all associated statistics.
This provides a unified treatment of the optimization process, offering a comprehensive framework under the statistical model of the t-distribution for the first time.
arXiv Detail & Related papers (2022-01-18T03:13:19Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach [38.45568741734893]
Inference in our GP model is done by a single feed-forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test data likelihood than the state of the art on several benchmark datasets.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-16T07:17:31Z)
- Differentially Private ADMM for Convex Distributed Learning: Improved Accuracy via Multi-Step Approximation [10.742065340992525]
The Alternating Direction Method of Multipliers (ADMM) is a popular algorithm for distributed learning.
When the training data is sensitive, the exchanged iterates raise serious privacy concerns.
We propose a new differentially private distributed ADMM with improved accuracy for a wide range of convex learning problems.
arXiv Detail & Related papers (2020-05-16T07:17:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.