Efficient Debiased Evidence Estimation by Multilevel Monte Carlo
Sampling
- URL: http://arxiv.org/abs/2001.04676v2
- Date: Wed, 24 Feb 2021 23:48:39 GMT
- Title: Efficient Debiased Evidence Estimation by Multilevel Monte Carlo
Sampling
- Authors: Kei Ishikawa, Takashi Goda
- Abstract summary: We propose a new stochastic optimization algorithm for Bayesian inference based on multilevel Monte Carlo (MLMC) methods.
Our numerical results confirm considerable computational savings compared to the conventional estimators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a new stochastic optimization algorithm for
Bayesian inference based on multilevel Monte Carlo (MLMC) methods. In Bayesian
statistics, biased estimators of the model evidence have often been used as
stochastic objectives because the existing debiasing techniques are
computationally costly to apply. To overcome this issue, we apply an MLMC
sampling technique to construct low-variance unbiased estimators both for the
model evidence and its gradient. In the theoretical analysis, we show that the
computational cost required for our proposed MLMC estimator to estimate the
model evidence or its gradient with a given accuracy is an order of magnitude
smaller than that of the previously known estimators. Our numerical
experiments confirm considerable computational savings compared to the
conventional estimators. Combining our MLMC estimator with gradient-based
stochastic optimization results in a new scalable, efficient, debiased
inference algorithm for Bayesian statistical models.
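To make the construction concrete, the following is a minimal sketch of a randomized MLMC (Rhee-Glynn-style) estimator of the log evidence. It is an illustration under simplifying assumptions rather than the authors' exact estimator: the callables `log_joint`, `sample_q`, and `log_q`, the geometric level distribution, and the truncation at a finite `max_level` are choices made only for this sketch. Level $l$ uses $2^l$ importance samples, and consecutive levels are coupled by reusing the two halves of the level-$l$ sample, which keeps the variance of the level differences small.
```python
import numpy as np

def logmeanexp(logw):
    """Numerically stable log of the mean of exp(logw)."""
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))

def mlmc_log_evidence(log_joint, sample_q, log_q,
                      max_level=10, n_outer=2000, rng=None):
    """Hypothetical randomized-MLMC estimator of the log evidence log p(x).

    log_joint(theta): vectorized log p(x, theta) at proposal draws theta
    sample_q(n):      draws n samples from the proposal q
    log_q(theta):     vectorized log q(theta)

    P_l denotes the biased log-evidence estimate built from 2**l importance
    samples.  Consecutive levels are coupled by averaging the two half-sample
    estimates, so E[P_l - coarse_l] = E[P_l] - E[P_{l-1}] and the level
    differences telescope.  Drawing the level at random and reweighting by
    1/Pr(level) removes the bias up to the truncation at max_level; an exact
    debiasing would use an infinite level distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = 0.5 ** np.arange(max_level + 1)      # geometric level distribution
    probs /= probs.sum()

    estimates = np.empty(n_outer)
    for i in range(n_outer):
        level = rng.choice(max_level + 1, p=probs)
        n = 2 ** level
        theta = sample_q(n)
        logw = log_joint(theta) - log_q(theta)   # importance log-weights
        p_fine = logmeanexp(logw)                # P_level from all 2**level samples
        if level == 0:
            diff = p_fine
        else:
            # Coarse term: mean of the two half-sample estimates (same draws).
            p_coarse = 0.5 * (logmeanexp(logw[:n // 2]) + logmeanexp(logw[n // 2:]))
            diff = p_fine - p_coarse
        estimates[i] = diff / probs[level]       # single-term debiased sample
    return estimates.mean()

# Toy check on a conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1),
# proposal q = prior.  For x = 1.0 the exact log evidence is log N(1; 0, 2) ≈ -1.516.
x_obs = 1.0
log_joint = lambda th: -0.5 * (th ** 2 + (x_obs - th) ** 2) - np.log(2 * np.pi)
log_q = lambda th: -0.5 * th ** 2 - 0.5 * np.log(2 * np.pi)
rng = np.random.default_rng(0)
print(mlmc_log_evidence(log_joint, lambda n: rng.standard_normal(n), log_q, rng=rng))
```
The abstract's unbiased gradient estimator follows the same pattern: applying the identical level randomization to the gradient of the objective yields an unbiased gradient sample that can be passed directly to a gradient-based stochastic optimizer.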
Related papers
- Leveraging Nested MLMC for Sequential Neural Posterior Estimation with
Intractable Likelihoods [0.8287206589886881]
Sequential neural posterior estimation (SNPE) techniques have been proposed for simulation-based models with intractable likelihoods.
In this paper, we propose a nested APT method to estimate the involved nested expectation.
Since the nested estimators for the loss function and its gradient are biased, we make use of unbiased multi-level Monte Carlo (MLMC) estimators.
arXiv Detail & Related papers (2024-01-30T06:29:41Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo
sampling [58.14878401145309]
We develop a novel approach to producing more sample-efficient estimators of expectations in the PL model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
arXiv Detail & Related papers (2022-05-12T11:15:47Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Hierarchical Gaussian Process Models for Regression Discontinuity/Kink
under Sharp and Fuzzy Designs [0.0]
We propose nonparametric Bayesian estimators for causal inference exploiting Regression Discontinuity/Kink (RD/RK) designs.
These estimators are extended to hierarchical GP models with an intermediate Bayesian neural network layer.
Monte Carlo simulations show that our estimators perform similarly to, and often better than, competing estimators in terms of precision, coverage and interval length.
arXiv Detail & Related papers (2021-10-03T04:23:56Z) - Unbiased Gradient Estimation for Distributionally Robust Learning [2.1777837784979277]
We consider a new approach based on distributionally robust learning (DRL) that applies gradient descent to the inner problem.
Our algorithm efficiently estimates the gradient through multi-level Monte Carlo randomization.
arXiv Detail & Related papers (2020-12-22T21:35:03Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - Unbiased MLMC stochastic gradient-based optimization of Bayesian
experimental designs [4.112293524466434]
The gradient of the expected information gain with respect to experimental design parameters is given by a nested expectation.
We introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain with finite expected squared $\ell_2$-norm and finite expected computational cost per sample.
arXiv Detail & Related papers (2020-05-18T01:02:31Z) - SUMO: Unbiased Estimation of Log Marginal Probability for Latent
Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.