A Quadrature Rule combining Control Variates and Adaptive Importance
Sampling
- URL: http://arxiv.org/abs/2205.11890v1
- Date: Tue, 24 May 2022 08:21:45 GMT
- Title: A Quadrature Rule combining Control Variates and Adaptive Importance
Sampling
- Authors: Rémi Leluc, François Portier, Johan Segers, Aigerim Zhuman
- Abstract summary: We show that a simple weighted least squares approach can be used to improve the accuracy of Monte Carlo integration estimates.
Our main result is a non-asymptotic bound on the probabilistic error of the procedure.
The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driven by several successful applications such as in stochastic gradient
descent or in Bayesian computation, control variates have become a major tool
for Monte Carlo integration. However, standard methods do not allow the
distribution of the particles to evolve during the algorithm, as is the case in
sequential simulation methods. Within the standard adaptive importance sampling
framework, a simple weighted least squares approach is proposed to improve the
procedure with control variates. The procedure takes the form of a quadrature
rule with adapted quadrature weights to reflect the information brought in by
the control variates. The quadrature points and weights do not depend on the
integrand, a computational advantage in case of multiple integrands. Moreover,
the target density needs to be known only up to a multiplicative constant. Our
main result is a non-asymptotic bound on the probabilistic error of the
procedure. The bound proves that for improving the estimate's accuracy, the
benefits from adaptive importance sampling and control variates can be
combined. The good behavior of the method is illustrated empirically on
synthetic examples and real-world data for Bayesian linear regression.
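To make this concrete, here is a minimal numpy sketch of the weighted least squares construction, under simplifying assumptions that are not the paper's: a single fixed Gaussian proposal stands in for the adaptive sampling policy, the target is a standard normal known only up to a constant, the integrand is exp(x), and two Hermite polynomials serve as control variates with known zero mean under the target. The intercept of the weighted regression is the estimate, and because weighted least squares is invariant to rescaling the weights, the normalizing constant of the target is never needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized log-target: standard normal known only up to a constant.
log_pi = lambda x: -0.5 * x**2

# Fixed Gaussian proposal; the paper instead adapts the proposal over stages.
mu_q, sig_q = 0.5, 1.5
n = 5000
x = rng.normal(mu_q, sig_q, size=n)
log_q = -0.5 * ((x - mu_q) / sig_q) ** 2 - np.log(sig_q)
w = np.exp(log_pi(x) - log_q)            # importance weights (unnormalized)

f = np.exp(x)                            # integrand; the true integral is exp(1/2)

# Regression design: intercept plus control variates with known zero mean
# under the target (here, Hermite polynomials).
X = np.column_stack([np.ones(n), x, x**2 - 1.0])

# Weighted least squares of f on (1, h_1, h_2): the fitted intercept is the
# control-variate-corrected, self-normalized importance sampling estimate.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * f, rcond=None)
est_cv = beta[0]

# The same estimate as a quadrature rule: the weights v depend on the sample
# points and the control variates, but not on the integrand f.
v = (w[:, None] * X) @ np.linalg.solve(X.T @ (w[:, None] * X), np.eye(3)[:, 0])
assert np.isclose(v @ f, est_cv)

est_plain = np.sum(w * f) / np.sum(w)    # plain self-normalized IS, for comparison
print(est_plain, est_cv, np.exp(0.5))
```

Since v involves only the sample points, the importance weights and the control variates, the same quadrature weights can be reused for any number of integrands, which is the computational advantage noted in the abstract.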
Related papers
- Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
arXiv Detail & Related papers (2024-04-23T09:22:35Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from a quadratic growth in the number of variational parameters as the number of model parameters increases.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, the proposed algorithm, termed MGVBP, stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
- Detecting Label Noise via Leave-One-Out Cross Validation [0.0]
We present a simple algorithm for identifying and correcting real-valued noisy labels from a mixture of clean and corrupted samples.
A heteroscedastic noise model is employed, in which an additive Gaussian noise term with its own independent variance is associated with each observed label.
We show that the presented method can pinpoint corrupted samples and lead to better regression models when trained on synthetic and real-world scientific data sets.
arXiv Detail & Related papers (2021-03-21T10:02:50Z)
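As a loose illustration of the leave-one-out idea behind the entry above (not the paper's heteroscedastic-noise algorithm), the sketch below flags labels whose leave-one-out residual is unusually large; the data, model and threshold are invented for illustration.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)

# Synthetic 1-D regression data with a few deliberately corrupted labels.
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=60)
bad = rng.choice(60, size=5, replace=False)
y[bad] += rng.normal(0.0, 2.0, size=5)

# Leave-one-out predictions: each label is predicted by a model that never saw it.
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())

# Flag labels with unusually large standardized LOO residuals.
z = (y - y_loo) / np.std(y - y_loo)
print(sorted(np.flatnonzero(np.abs(z) > 2.5)), sorted(bad))
```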
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
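The pathwise view rests on Matheron's rule: a draw f from the prior becomes a draw from the posterior after adding a kernel-weighted correction built from the residual y - f(X) - eps at the training inputs. A minimal numpy sketch with an RBF kernel and made-up data (a generic illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)  # RBF kernel, 1-D inputs

# Made-up training data and test locations.
Xtr = np.array([-2.0, -0.5, 1.0, 2.5]); ytr = np.sin(Xtr)
Xte = np.linspace(-3, 3, 200)
noise = 1e-2

# One joint prior draw over training and test locations.
Xall = np.concatenate([Xtr, Xte])
K = k(Xall, Xall) + 1e-9 * np.eye(len(Xall))
f_all = np.linalg.cholesky(K) @ rng.normal(size=len(Xall))
f_tr, f_te = f_all[:len(Xtr)], f_all[len(Xtr):]

# Matheron's rule: pathwise update of the prior draw into a posterior draw.
eps = np.sqrt(noise) * rng.normal(size=len(Xtr))
alpha = np.linalg.solve(k(Xtr, Xtr) + noise * np.eye(len(Xtr)),
                        ytr - (f_tr + eps))
post_sample = f_te + k(Xte, Xtr) @ alpha
```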
- An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian approximated stochastic gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z)
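The paper's adaptive Hessian approximation does not reduce to a few lines, but the general pattern of preconditioned stochastic gradient Langevin dynamics can be sketched with an RMSprop-style diagonal curvature estimate in the spirit of pSGLD; this is an assumption-laden stand-in for the entry's method, and the small drift-correction term is omitted for brevity.

```python
import numpy as np

def psgld_step(theta, grad_log_post, V, rng, eps=5e-3, beta=0.99, lam=1e-5):
    """One preconditioned SGLD step with an adaptive diagonal curvature proxy."""
    g = grad_log_post(theta)                 # (stochastic) gradient of log-posterior
    V = beta * V + (1 - beta) * g**2         # running second-moment estimate
    G = 1.0 / (lam + np.sqrt(V))             # diagonal preconditioner
    noise = np.sqrt(eps * G) * rng.normal(size=theta.shape)
    return theta + 0.5 * eps * G * g + noise, V

# Toy posterior: N(0, diag(1, 0.01)); the preconditioner adapts to the two scales.
rng = np.random.default_rng(3)
grad = lambda th: -th / np.array([1.0, 0.01])
theta, V = np.zeros(2), np.ones(2)           # V starts at 1 to avoid a huge first step
samples = []
for _ in range(20000):
    theta, V = psgld_step(theta, grad, V, rng)
    samples.append(theta.copy())
print(np.var(np.array(samples)[5000:], axis=0))   # roughly (1, 0.01)
```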
- Scalable Control Variates for Monte Carlo Methods via Stochastic Optimization [62.47170258504037]
This paper presents a framework that encompasses and generalizes existing approaches that use controls, kernels and neural networks.
Novel theoretical results are presented to provide insight into the variance reduction that can be achieved, and an empirical assessment, including applications to Bayesian inference, is provided in support.
arXiv Detail & Related papers (2020-06-12T22:03:25Z)
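As a toy, linear-in-parameters instance of this framework (with an invented integrand and control functions), the coefficients of zero-mean control variates can be fitted by stochastic gradient descent on the empirical variance of the corrected integrand instead of by solving a least-squares system:

```python
import numpy as np

rng = np.random.default_rng(4)

# Samples from an easy-to-sample reference distribution (standard normal).
x = rng.normal(size=20000)
f = np.cos(x) + 0.3 * x**3                        # integrand; E[f] = exp(-1/2)
H = np.column_stack([x, x**2 - 1, x**3 - 3 * x])  # zero-mean Hermite control variates

# Fit control variate coefficients by SGD on the empirical variance of f - H @ theta.
theta = np.zeros(H.shape[1])
lr = 1e-3
for step in range(2000):
    idx = rng.integers(0, len(x), size=128)       # mini-batch
    r = f[idx] - H[idx] @ theta
    grad = -2 * H[idx].T @ (r - r.mean()) / len(idx)
    theta -= lr * grad

est = np.mean(f - H @ theta)                      # variance-reduced estimate
print(est, np.exp(-0.5))
```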
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
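In rough outline (a hedged sketch, not the paper's implementation), the decoupling draws an approximate prior function from random Fourier features and converts it into a posterior draw with the same Matheron-style pathwise update sketched earlier; the result is a function that can be evaluated at arbitrary inputs at O(n) cost per point.

```python
import numpy as np

rng = np.random.default_rng(5)
k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)  # RBF kernel

# Approximate prior function draw via random Fourier features:
# f(x) ~ sum_j w_j * sqrt(2/m) * cos(omega_j * x + b_j), with w_j ~ N(0, 1).
m = 1000
omega = rng.normal(size=m)              # spectral density of the unit-lengthscale RBF
b = rng.uniform(0, 2 * np.pi, size=m)
w = rng.normal(size=m)
prior = lambda x: np.sqrt(2.0 / m) * np.cos(np.outer(x, omega) + b) @ w

# Pathwise (Matheron-style) update turns the prior draw into a posterior draw.
Xtr = np.array([-2.0, 0.0, 1.5]); ytr = np.sin(Xtr); noise = 1e-2
eps = np.sqrt(noise) * rng.normal(size=len(Xtr))
alpha = np.linalg.solve(k(Xtr, Xtr) + noise * np.eye(len(Xtr)),
                        ytr - (prior(Xtr) + eps))
posterior = lambda x: prior(x) + k(x, Xtr) @ alpha   # evaluable at any input

print(posterior(np.linspace(-3, 3, 5)))
```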
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.