Optimality in Noisy Importance Sampling
- URL: http://arxiv.org/abs/2201.02432v1
- Date: Fri, 7 Jan 2022 12:32:25 GMT
- Title: Optimality in Noisy Importance Sampling
- Authors: Fernando Llorente, Luca Martino, Jesse Read, David Delgado-Gómez
- Abstract summary: We derive optimal proposal densities for noisy IS estimators.
We compare the use of the optimal proposals with previous optimality approaches considered in a noisy IS framework.
- Score: 66.94202101538939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we analyze noisy importance sampling (IS), i.e., IS working
with noisy evaluations of the target density. We present the general framework
and derive optimal proposal densities for noisy IS estimators. The optimal
proposals incorporate information about the variance of the noisy realizations,
proposing points in regions where the noise power is higher. We
also compare the use of the optimal proposals with previous optimality
approaches considered in a noisy IS framework.
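As a point of reference for how such noise-aware proposals can arise, here is a minimal sketch under simplifying assumptions (unbiased noisy evaluations of an unnormalized target and estimation of its normalizing constant); the paper's exact framework and estimators may differ. If $\hat{\pi}(x)$ satisfies $\mathbb{E}[\hat{\pi}(x)] = \pi(x)$ and $\mathrm{Var}[\hat{\pi}(x)] = \sigma^2(x)$, and $Z = \int \pi(x)\,dx$ is estimated with samples $x_n \sim q$, then
$$\hat{Z} = \frac{1}{N} \sum_{n=1}^{N} \frac{\hat{\pi}(x_n)}{q(x_n)}, \qquad \mathrm{Var}\big[\hat{Z}\big] = \frac{1}{N}\left( \int \frac{\pi(x)^2 + \sigma^2(x)}{q(x)}\, dx - Z^2 \right),$$
and minimizing this variance over $q$ (via Cauchy-Schwarz) gives
$$q^{*}(x) \propto \sqrt{\pi(x)^2 + \sigma^2(x)},$$
a proposal that places extra mass where the noise power $\sigma^2(x)$ is higher, consistent with the abstract.
A small numerical illustration of the noisy IS estimator itself (a hypothetical 1-D example; the target, noise model, and proposal below are illustrative assumptions, not taken from the paper):
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative unnormalized target: pi(x) = exp(-x^2 / 2), so Z = sqrt(2*pi) ~ 2.5066.
def pi_unnorm(x):
    return np.exp(-0.5 * x**2)

# Hypothetical noise model: unbiased evaluations with location-dependent noise power.
def noisy_pi(x):
    sigma = 0.3 * np.abs(x)
    return pi_unnorm(x) + sigma * rng.standard_normal(x.shape)

# Noisy IS estimate of Z with a Gaussian proposal q = N(0, s^2).
N, s = 100_000, 2.0
x = rng.normal(0.0, s, size=N)
q_pdf = np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
w = noisy_pi(x) / q_pdf  # noisy importance weights
print("Z estimate:", w.mean(), "+/-", w.std(ddof=1) / np.sqrt(N))
```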
Related papers
- ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods.
arXiv Detail & Related papers (2024-04-05T13:58:51Z)
- Rate-Optimal Policy Optimization for Linear Markov Decision Processes [65.5958446762678]
We obtain rate-optimal $\widetilde{O}(\sqrt{K})$ regret, where $K$ denotes the number of episodes.
Our work is the first to establish the optimal (w.r.t. $K$) rate of convergence in the setting with bandit feedback.
Prior to this work, no algorithm with an optimal rate guarantee was known.
arXiv Detail & Related papers (2023-08-28T15:16:09Z)
- Optimal distributed multiparameter estimation in noisy environments [0.3093890460224435]
We study how to find and improve noise-insensitive strategies.
We show that sequentially probing GHZ states is optimal up to a factor of at most 4.
arXiv Detail & Related papers (2023-06-01T18:32:53Z)
- Efficient Learning for Selecting Top-m Context-Dependent Designs [0.7646713951724012]
We consider a simulation optimization problem for context-dependent decision-making.
We develop a sequential sampling policy to efficiently learn the performance of each design under each context.
Numerical experiments demonstrate that the proposed method improves the efficiency for selection of top-m context-dependent designs.
arXiv Detail & Related papers (2023-05-06T16:11:49Z)
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
- Neighbor Regularized Bayesian Optimization for Hyperparameter Optimization [12.544312247050236]
We propose a novel BO algorithm called Neighbor Regularized Bayesian Optimization (NRBO) to solve the problem.
We first propose a neighbor-based regularization to smooth each sample observation, which could reduce the observation noise efficiently without any extra training cost.
We conduct experiments on the Bayesmark benchmark and important computer vision benchmarks such as ImageNet and COCO.
arXiv Detail & Related papers (2022-10-07T12:08:01Z)
- Gaussian Blue Noise [49.45731879857138]
We show that a framework for producing point distributions with a blue noise spectrum attains unprecedented quality.
Our algorithm scales smoothly and feasibly to high dimensions while maintaining the same quality.
arXiv Detail & Related papers (2022-06-15T20:22:16Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should equal the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and may even belong to a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.