Scalable Control Variates for Monte Carlo Methods via Stochastic
Optimization
- URL: http://arxiv.org/abs/2006.07487v2
- Date: Wed, 21 Jul 2021 11:46:11 GMT
- Title: Scalable Control Variates for Monte Carlo Methods via Stochastic
Optimization
- Authors: Shijing Si, Chris J. Oates, Andrew B. Duncan, Lawrence Carin,
François-Xavier Briol
- Abstract summary: This paper presents a framework that encompasses and generalizes existing approaches that use polynomials, kernels and neural networks.
Novel theoretical results are presented to provide insight into the variance reduction that can be achieved, and an empirical assessment, including applications to Bayesian inference, is provided in support.
- Score: 62.47170258504037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Control variates are a well-established tool to reduce the variance of Monte
Carlo estimators. However, for large-scale problems including high-dimensional
and large-sample settings, their advantages can be outweighed by a substantial
computational cost. This paper considers control variates based on Stein
operators, presenting a framework that encompasses and generalizes existing
approaches that use polynomials, kernels and neural networks. A learning
strategy based on minimising a variational objective through stochastic
optimization is proposed, leading to scalable and effective control variates.
Novel theoretical results are presented to provide insight into the variance
reduction that can be achieved, and an empirical assessment, including
applications to Bayesian inference, is provided in support.
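The core idea of learning a control variate by minimizing a variance objective with stochastic optimization can be illustrated with a deliberately minimal sketch. This is not the paper's Stein-operator construction; the integrand `f`, the zero-mean control variate `g`, and the scalar coefficient `theta` below are illustrative choices, with the coefficient fit by mini-batch SGD on the empirical variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x**2 + x   # integrand; E[f(X)] = 1 for X ~ N(0, 1)

def g(x):
    return x          # control variate with known mean E[g(X)] = 0

# Learn the coefficient theta by stochastic gradient descent on the
# variance of f(X) - theta * g(X), one mini-batch at a time.
theta, lr = 0.0, 0.05
for _ in range(2000):
    x = rng.standard_normal(64)     # mini-batch of samples
    resid = f(x) - theta * g(x)
    # d/dtheta Var[resid] = -2 * Cov[resid, g], estimated on the batch
    grad = -2.0 * np.mean((resid - resid.mean()) * (g(x) - g(x).mean()))
    theta -= lr * grad

# Compare plain Monte Carlo to the control-variate estimator on fresh draws.
x = rng.standard_normal(100_000)
plain = f(x)
cv = f(x) - theta * g(x)            # still unbiased, since E[g(X)] = 0
print(plain.mean(), cv.mean())      # both estimate E[f(X)]
print(plain.var(), cv.var())        # the second variance is smaller
```

Because each gradient step touches only a mini-batch, the same learning loop scales to large-sample settings; richer families (polynomials, kernels, neural networks) simply replace the single coefficient `theta` with a parameterized control variate.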
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- When can Regression-Adjusted Control Variates Help? Rare Events, Sobolev Embedding and Minimax Optimality [10.21792151799121]
We show that a machine learning-based estimator can be used to mitigate the variance of Monte Carlo sampling.
In the presence of rare and extreme events, a truncated version of the Monte Carlo algorithm can achieve the minimax optimal rate.
arXiv Detail & Related papers (2023-05-25T23:09:55Z)
- Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z)
- Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive families.
We show RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
arXiv Detail & Related papers (2022-03-05T23:52:40Z)
- Variational Inference MPC using Tsallis Divergence [10.013572514839082]
We provide a framework for Variational Inference-Stochastic Optimal Control by using the non-extensive Tsallis divergence.
A novel Tsallis Variational Inference-Model Predictive Control algorithm is derived.
arXiv Detail & Related papers (2021-04-01T04:00:49Z)
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
- A Framework for Sample Efficient Interval Estimation with Control Variates [94.32811054797148]
We consider the problem of estimating confidence intervals for the mean of a random variable.
Under certain conditions, we show improved efficiency compared to existing estimation algorithms.
arXiv Detail & Related papers (2020-06-18T05:42:30Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- Amortized variance reduction for doubly stochastic objectives [17.064916635597417]
Approximate inference in complex probabilistic models requires optimisation of doubly stochastic objective functions.
Current approaches do not take into account how mini-batch stochasticity interacts with sampling stochasticity, resulting in sub-optimal variance reduction.
We propose a new approach in which we use a recognition network to cheaply approximate the optimal control variate for each mini-batch, with no additional gradient computations.
arXiv Detail & Related papers (2020-03-09T13:23:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.