Theoretical Analysis of Explicit Averaging and Novel Sign Averaging in Comparison-Based Search
- URL: http://arxiv.org/abs/2401.14014v1
- Date: Thu, 25 Jan 2024 08:35:50 GMT
- Title: Theoretical Analysis of Explicit Averaging and Novel Sign Averaging in Comparison-Based Search
- Authors: Daiki Morinaga, Youhei Akimoto
- Abstract summary: In black-box optimization, noise in the objective function is inevitable.
Explicit averaging is widely used as a simple and versatile noise-handling technique.
Alternatively, sign averaging is proposed as a simple but robust noise-handling technique.
- Score: 6.883986852278248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In black-box optimization, noise in the objective function is inevitable.
Noise disrupts the ranking of candidate solutions in comparison-based
optimization, possibly deteriorating the search performance compared with a
noiseless scenario. Explicit averaging takes the sample average of noisy
objective function values and is widely used as a simple and versatile
noise-handling technique. Although it is suitable for various applications, it
is ineffective if the mean is not finite. We theoretically reveal that explicit
averaging has a negative effect on the estimation of ground-truth rankings when
assuming stably distributed noise without a finite mean. Alternatively, sign
averaging is proposed as a simple but robust noise-handling technique. We
theoretically prove that sign averaging estimates the order of the medians
of the noisy objective function values of a pair of points with arbitrarily
high probability as the number of samples increases. Its advantages over
explicit averaging and its robustness are also confirmed through numerical
experiments.
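To make the contrast concrete, here is a minimal sketch (not the authors' exact estimators; the paired sampling, function names, and parameters are assumptions made for illustration) comparing the two rules on a pair of points whose noisy values follow Cauchy distributions, a stable family with no finite mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def explicit_average_compare(fx, fy):
    """Rank two points by the sample means of their noisy values."""
    return np.mean(fx) < np.mean(fy)

def sign_average_compare(fx, fy):
    """Rank two points by the average sign of paired differences,
    estimating which point has the smaller median noisy value."""
    return np.mean(np.sign(fx - fy)) < 0

# Ground truth: f(x) = 0 < f(y) = 1, each observation corrupted by
# standard Cauchy noise (a stable distribution with undefined mean).
n_samples, trials = 100, 1000
correct_mean = correct_sign = 0
for _ in range(trials):
    fx = 0.0 + rng.standard_cauchy(n_samples)
    fy = 1.0 + rng.standard_cauchy(n_samples)
    correct_mean += explicit_average_compare(fx, fy)
    correct_sign += sign_average_compare(fx, fy)

print(f"explicit averaging correct: {correct_mean / trials:.2f}")
print(f"sign averaging correct:     {correct_sign / trials:.2f}")
```

Because the Cauchy sample mean does not concentrate as samples accumulate, the explicit-averaging rule stays unreliable, whereas the sign rule recovers the median ordering with probability approaching one, in line with the abstract's claim.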
Related papers
- An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise [3.92625489118339]
We propose a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise.
We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions.
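This summary does not spell out the paper's adaptive rule, so the following is only a generic illustration of the underlying idea (the tolerance `tol`, the stopping rule, and all names are assumptions): keep re-evaluating a point until the standard error of its sample mean is small.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_reevaluate(noisy_f, x, tol=0.1, max_evals=10_000):
    """Re-evaluate x until the standard error of the sample mean of its
    noisy values drops below tol; return the average and the eval count."""
    values = [noisy_f(x), noisy_f(x)]  # at least two values for a variance estimate
    while len(values) < max_evals:
        sem = np.std(values, ddof=1) / np.sqrt(len(values))
        if sem < tol:
            break
        values.append(noisy_f(x))
    return np.mean(values), len(values)

# Sphere function with additive Gaussian white noise (illustrative setup).
noisy_sphere = lambda x: float(np.sum(x ** 2)) + rng.normal(scale=1.0)
estimate, n_evals = adaptive_reevaluate(noisy_sphere, np.array([0.5, -0.3]))
print(f"estimate {estimate:.3f} after {n_evals} evaluations")
```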
arXiv Detail & Related papers (2024-09-25T09:10:21Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of discriminating the real data from the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
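The logistic-loss objective referred to here is standard NCE: a binary classifier separates real data from artificial noise, and its logit is the log-ratio of the (unnormalized) model density to the noise density. A minimal sketch for an unnormalized 1-D Gaussian model against standard-normal noise (the model, `log_c`, and all names are illustrative assumptions, not this paper's setup):

```python
import numpy as np
from scipy.stats import norm

def nce_loss(theta, data, noise, log_c=0.0):
    """Logistic NCE loss for the unnormalized model
    exp(-(x - theta)**2 / 2 + log_c) versus standard-normal noise."""
    def log_ratio(x):
        log_model = -(x - theta) ** 2 / 2 + log_c  # unnormalized log-density
        return log_model - norm.logpdf(x)          # minus the noise log-density
    # -E_data[log sigmoid(r)] - E_noise[log(1 - sigmoid(r))]
    return (np.mean(np.logaddexp(0.0, -log_ratio(data)))
            + np.mean(np.logaddexp(0.0, log_ratio(noise))))

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, size=2000)   # "real" data
noise = rng.normal(size=2000)           # artificial noise samples
print(nce_loss(1.5, data, noise) < nce_loss(0.0, data, noise))  # typically True
```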
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution differs from the data distribution and may even belong to a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
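The elimination strategy itself is not reproduced in this summary; as a plain baseline for the same setting, one can sort items by empirical win rates over repeated noisy comparisons (a Borda-count-style sketch; the flip-noise model and all names are assumptions here):

```python
import numpy as np

rng = np.random.default_rng(3)

def rank_by_win_rate(true_scores, comparisons_per_pair=50, flip_prob=0.3):
    """Estimate a ranking from noisy pairwise comparisons: each comparison
    reports the true order but is flipped with probability flip_prob."""
    n = len(true_scores)
    wins = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            i_truly_wins = bool(true_scores[i] > true_scores[j])
            for _ in range(comparisons_per_pair):
                observed_i_wins = i_truly_wins ^ (rng.random() < flip_prob)
                wins[i if observed_i_wins else j] += 1
    return np.argsort(-wins)  # best item first

print(rank_by_win_rate(np.array([0.9, 0.1, 0.5, 0.7])))  # likely [0 3 2 1]
```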
arXiv Detail & Related papers (2021-10-08T13:51:55Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
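Gradient clipping here is the usual norm-based rule: rescale any stochastic gradient whose norm exceeds a threshold, which bounds the damage a single heavy-tailed sample can do. A minimal sketch (the paper's novel stepsize rules are not reproduced; the clip level and the toy objective are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def clip(g, clip_level=1.0):
    """Norm clipping: g * min(1, clip_level / ||g||)."""
    g_norm = np.linalg.norm(g)
    return g if g_norm <= clip_level else g * (clip_level / g_norm)

# SGD with clipped heavy-tailed gradients on f(x) = ||x||^2 / 2,
# where the noise is Cauchy (infinite variance).
x = np.array([5.0, -3.0])
for _ in range(2000):
    g = x + rng.standard_cauchy(2)  # true gradient plus heavy-tailed noise
    x -= 0.05 * clip(g)
print(x)  # settles near the optimum at the origin despite the noise
```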
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Denoising Score Matching with Random Fourier Features [11.60130641443281]
We derive an analytical expression for denoising score matching using the Kernel Exponential Family as the model distribution.
The obtained expression explicitly depends on the noise variance, so the validation loss can be straightforwardly used to tune the noise level.
arXiv Detail & Related papers (2021-01-13T18:02:39Z)