Optimality in importance sampling: a gentle survey
- URL: http://arxiv.org/abs/2502.07396v1
- Date: Tue, 11 Feb 2025 09:23:26 GMT
- Title: Optimality in importance sampling: a gentle survey
- Authors: Fernando Llorente, Luca Martino
- Abstract summary: The performance of Monte Carlo sampling methods relies on the crucial choice of a proposal density.
This work is an exhaustive review of the concept of optimality in importance sampling.
- Score: 50.79602839359522
- License:
- Abstract: The performance of Monte Carlo sampling methods relies on the crucial choice of a proposal density. The notion of optimality is fundamental to the design of suitable adaptive procedures for the proposal density within Monte Carlo schemes. This work is an exhaustive review of the concept of optimality in importance sampling. Several frameworks are described and analyzed, such as the marginal likelihood approximation for model selection, the use of multiple proposal densities, a sequence of tempered posteriors, and noisy scenarios, including applications to approximate Bayesian computation (ABC) and reinforcement learning, to name a few. Some theoretical and empirical comparisons are also provided.
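As a concrete illustration of the optimality notion the survey is about (a standard textbook result, not an excerpt from the paper): when estimating I = E_pi[f(X)] by importance sampling with a proposal q, the variance-minimizing proposal is q*(x) proportional to |f(x)| pi(x). The short, self-contained Python sketch below uses a made-up target and integrand to show how a proposal placed near this optimum sharply reduces the estimator's variance compared with naive sampling from pi.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: target pi = N(0, 1) and integrand f(x) = exp(-(x - 3)^2),
# so most of the integrand's mass lies far in the tail of pi.
def pi_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def f(x):
    return np.exp(-(x - 3.0)**2)

def is_estimate(sample_q, q_pdf, n=100_000):
    """Importance sampling estimate of E_pi[f(X)] using proposal q."""
    x = sample_q(n)
    vals = (pi_pdf(x) / q_pdf(x)) * f(x)   # importance-weighted integrand values
    return vals.mean(), vals.std() / np.sqrt(n)

# Naive proposal: q = pi itself (plain Monte Carlo).
est_mc, err_mc = is_estimate(lambda n: rng.normal(0.0, 1.0, n), pi_pdf)

# Near-optimal proposal: q(x) roughly proportional to |f(x)| pi(x),
# which for this toy case is a Gaussian centred at 2 with std ~ 0.58.
mu_q, sig_q = 2.0, 0.58
def q_pdf(x):
    return np.exp(-0.5 * ((x - mu_q) / sig_q)**2) / (sig_q * np.sqrt(2.0 * np.pi))

est_is, err_is = is_estimate(lambda n: rng.normal(mu_q, sig_q, n), q_pdf)

print(f"proposal q = pi:         {est_mc:.6f} +/- {err_mc:.6f}")
print(f"near-optimal proposal q: {est_is:.6f} +/- {err_is:.6f}")
```

In the survey's terminology, adaptive importance sampling schemes aim to learn such a proposal automatically rather than deriving it by hand, which is why the notion of optimality guides the design of adaptive procedures.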
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Efficient Learning for Selecting Top-m Context-Dependent Designs [0.7646713951724012]
We consider a simulation optimization problem for context-dependent decision-making.
We develop a sequential sampling policy to efficiently learn the performance of each design under each context.
Numerical experiments demonstrate that the proposed method improves the efficiency of selecting the top-m context-dependent designs.
arXiv Detail & Related papers (2023-05-06T16:11:49Z)
- Bayesian Experimental Design for Symbolic Discovery [12.855710007840479]
We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior.
A step computing the predictive distribution, which involves a convolution, is carried out either by numerical integration or by fast transform methods.
arXiv Detail & Related papers (2022-11-29T01:25:29Z)
- Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive families.
We illustrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
arXiv Detail & Related papers (2022-03-05T23:52:40Z)
- Optimality in Noisy Importance Sampling [66.94202101538939]
We derive optimal proposal densities for noisy IS estimators.
We compare the use of the optimal proposals with previous optimality approaches considered in a noisy IS framework.
arXiv Detail & Related papers (2022-01-07T12:32:25Z)
- Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff [7.538482310185135]
A major drawback of adaptive importance sampling is the large variance of the weights.
This paper investigates a regularization strategy whose basic principle is to raise the importance weights to a given power; a minimal sketch of this weight-tempering idea is given after this list.
arXiv Detail & Related papers (2021-10-29T07:45:24Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
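To make the weight-tempering regularization mentioned in the "Adaptive Importance Sampling meets Mirror Descent" entry above more concrete, here is a minimal, self-contained sketch of raising importance weights to a power before self-normalization. It only illustrates the generic idea under assumed inputs (arrays of integrand values and log-weights); it is not the cited paper's algorithm or its bias-variance analysis.

```python
import numpy as np

def tempered_snis(f_vals, log_w, alpha=0.5):
    """Self-normalized importance sampling with tempered weights w_i ** alpha.

    alpha = 1 recovers standard SNIS; alpha in (0, 1) shrinks the spread of
    the weights, trading extra bias for lower variance (the generic idea
    behind the regularization discussed above).
    """
    lw = alpha * np.asarray(log_w, dtype=float)
    lw -= lw.max()                      # stabilize before exponentiating
    w = np.exp(lw)
    w /= w.sum()                        # self-normalize the tempered weights
    return float(np.sum(w * np.asarray(f_vals, dtype=float)))

# Hypothetical usage: estimate E_pi[X^2] for pi = N(0, 1) using proposal q = N(0, 3^2).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 3.0, size=5_000)
# log pi(x) - log q(x); additive constants cancel after self-normalization.
log_w = (-0.5 * x**2) - (-0.5 * (x / 3.0)**2 - np.log(3.0))
print(tempered_snis(x**2, log_w, alpha=0.8))   # slightly biased (tempered) estimate of E_pi[X^2] = 1
```

Note that the tempered estimator is deliberately biased for alpha < 1; the cited paper studies how to balance this bias against the variance reduction.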