Reparameterized Variational Rejection Sampling
- URL: http://arxiv.org/abs/2309.14612v1
- Date: Tue, 26 Sep 2023 01:46:53 GMT
- Title: Reparameterized Variational Rejection Sampling
- Authors: Martin Jankowiak and Du Phan
- Abstract summary: Variational Rejection Sampling (VRS) combines a parametric proposal distribution with rejection sampling to define a rich non-parametric family of distributions.
We show that our method performs well in practice and that it is well-suited for black-box inference, especially for models with local latent variables.
- Score: 12.189621777178354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional approaches to variational inference rely on parametric families
of variational distributions, with the choice of family playing a critical role
in determining the accuracy of the resulting posterior approximation. Simple
mean-field families often lead to poor approximations, while rich families of
distributions like normalizing flows can be difficult to optimize and usually
do not incorporate the known structure of the target distribution due to their
black-box nature. To expand the space of flexible variational families, we
revisit Variational Rejection Sampling (VRS) [Grover et al., 2018], which
combines a parametric proposal distribution with rejection sampling to define a
rich non-parametric family of distributions that explicitly utilizes the known
target distribution. By introducing a low-variance reparameterized gradient
estimator for the parameters of the proposal distribution, we make VRS an
attractive inference strategy for models with continuous latent variables. We
argue theoretically and demonstrate empirically that the resulting
method--Reparameterized Variational Rejection Sampling (RVRS)--offers an
attractive trade-off between computational cost and inference fidelity. In
experiments we show that our method performs well in practice and that it is
well-suited for black-box inference, especially for models with local latent
variables.
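The sampler at the heart of VRS is straightforward to sketch. Below is a minimal, hypothetical Python/PyTorch illustration of rejection sampling from a reparameterized proposal with a sigmoid acceptance rule in the spirit of Grover et al. [2018]; the names (log_target, vrs_sample, the threshold T) are invented for this sketch, and the paper's actual acceptance rule and gradient estimator differ in the details.
```python
# Minimal sketch of variational-rejection-style sampling (illustrative only;
# names and the exact acceptance rule are assumptions, not the paper's code).
import torch
import torch.distributions as dist

def log_target(z):
    # Unnormalized log density of a toy target: a two-component Gaussian mixture.
    return torch.logaddexp(dist.Normal(-2.0, 0.5).log_prob(z),
                           dist.Normal(2.0, 0.5).log_prob(z))

def vrs_sample(proposal, T=0.0, max_tries=1000):
    # Accept a proposal draw z with probability sigmoid(log p~(z) - log q(z) + T).
    # The resampled distribution is proportional to q(z) * accept_prob(z), so
    # larger T accepts more often (cheaper, closer to plain q) and smaller T
    # rejects more (costlier, closer to the target).
    for _ in range(max_tries):
        z = proposal.rsample()  # reparameterized draw: z = loc + scale * eps
        accept_prob = torch.sigmoid(log_target(z) - proposal.log_prob(z) + T)
        if torch.rand(()) < accept_prob:
            return z
    return z  # fall back to the last draw if nothing is accepted

proposal = dist.Normal(torch.tensor(0.0), torch.tensor(3.0))
samples = torch.stack([vrs_sample(proposal) for _ in range(500)])
print(float(samples.mean()), float(samples.std()))
```
Because z is drawn via rsample, gradients with respect to the proposal parameters can flow through accepted samples; the paper's contribution is a low-variance reparameterized gradient estimator that also accounts for how the accept/reject step itself depends on the proposal parameters, which the naive sketch above does not handle.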
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically on clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling [0.0]
We extend the existing framework to the case of weighted samples by introducing a new objective function.
In order to add flexibility to the model and to be able to learn multimodal distributions, we consider a learnable prior distribution.
We exploit the proposed procedure in existing adaptive importance sampling algorithms to draw points from a target distribution and to estimate a rare event probability in high dimension.
arXiv Detail & Related papers (2023-10-13T15:40:55Z)
- Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters as the number of model parameters increases.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
- Amortized backward variational inference in nonlinear state-space models [0.0]
We consider the problem of state estimation in general state-space models using variational inference.
We establish for the first time that, under mixing assumptions, the variational approximation of expectations of additive state functionals induces an error which grows at most linearly in the number of observations.
arXiv Detail & Related papers (2022-06-01T08:35:54Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Moment-Based Variational Inference for Stochastic Differential Equations [31.494103873662343]
We construct the variational process as a controlled version of the prior process.
We approximate the posterior by a set of moment functions.
In combination with moment closure, the smoothing problem is reduced to a deterministic optimal control problem.
arXiv Detail & Related papers (2021-03-01T13:20:38Z)
- Statistical Guarantees for Transformation Based Models with Applications to Implicit Variational Inference [8.333191406788423]
We provide theoretical justification for the use of non-linear latent variable models (NL-LVMs) in non-parametric inference.
We use the NL-LVMs to construct an implicit family of variational distributions, deemed GP-IVI.
To the best of our knowledge, this is the first work on providing theoretical guarantees for implicit variational inference.
arXiv Detail & Related papers (2020-10-23T21:06:29Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)