Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It
- URL: http://arxiv.org/abs/2409.00306v2
- Date: Wed, 07 May 2025 08:44:45 GMT
- Title: Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It
- Authors: Denis Antipov, Benjamin Doerr
- Abstract summary: We conduct the first mathematical runtime analysis of an evolutionary algorithm solving a single-objective noisy problem without re-evaluations. We prove that the algorithm without re-evaluations can optimize the classic LeadingOnes benchmark with up to constant noise rates.
- Score: 10.165640083594573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized search heuristics (RSHs) are known to have a certain robustness to noise. Mathematical analyses that rigorously quantify how robust RSHs are to noisy access to the objective function typically assume that each solution is re-evaluated whenever it is compared to others. This aims to prevent a single noisy evaluation from having a lasting negative effect, but it is computationally expensive and requires the user to foresee that noise is present (in a noise-free setting, one would never re-evaluate solutions). In this work, we conduct the first mathematical runtime analysis of an evolutionary algorithm solving a single-objective noisy problem without re-evaluations. We prove that the $(1+1)$ evolutionary algorithm without re-evaluations can optimize the classic LeadingOnes benchmark with up to constant noise rates, in sharp contrast to the version with re-evaluations, where only noise rates of $O(n^{-2} \log n)$ can be tolerated. This result suggests that re-evaluations are much less necessary than previously thought, and that they can actually be highly detrimental. The insights from our mathematical proofs indicate that similar results are plausible for other classic benchmarks.
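To make the setting concrete, the following is a minimal Python sketch (not the authors' implementation) of a $(1+1)$ evolutionary algorithm without re-evaluations on LeadingOnes. The one-bit prior noise model, the noise rate `q`, and all function names are illustrative assumptions; the paper may analyze a different noise model.

```python
import random

def leading_ones(x):
    """True fitness: number of consecutive ones from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def noisy_leading_ones(x, q):
    """One-bit prior noise: with probability q, evaluate a copy of x
    with one uniformly random bit flipped; x itself is unchanged."""
    if random.random() < q:
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        return leading_ones(y)
    return leading_ones(x)

def one_plus_one_ea_no_reevals(n, q, max_iters=10**6):
    """(1+1) EA that evaluates each solution once and keeps that
    (possibly noisy) value, instead of re-evaluating the parent."""
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fitness = noisy_leading_ones(parent, q)  # evaluated once, never again
    for iteration in range(max_iters):
        if leading_ones(parent) == n:  # true optimum reached (experiment only)
            return iteration
        # Standard bit-wise mutation with rate 1/n.
        offspring = [1 - b if random.random() < 1 / n else b for b in parent]
        offspring_fitness = noisy_leading_ones(offspring, q)
        if offspring_fitness >= parent_fitness:
            parent, parent_fitness = offspring, offspring_fitness
    return max_iters

if __name__ == "__main__":
    random.seed(0)
    print(one_plus_one_ea_no_reevals(n=50, q=0.1))
```

The defining choice is that `parent_fitness` is computed once and then reused in every comparison; the re-evaluating variant analyzed in earlier work would instead call the noisy evaluation on the parent again in each iteration.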
Related papers
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
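As a hedged illustration of that proof perspective (a sketch only; `b` is a hypothetical bias parameter, not a quantity from the paper), a biased uniform crossover takes each bit from the first argument with probability `b` and from the second otherwise:

```python
import random

def biased_uniform_crossover(parent, noisy_offspring, b):
    """Return a bit string that copies each position from `parent`
    with probability b and from `noisy_offspring` otherwise."""
    return [p if random.random() < b else o
            for p, o in zip(parent, noisy_offspring)]
```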
arXiv Detail & Related papers (2024-04-02T16:35:52Z)
- Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z)
- Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise [7.421135890148154]
We conduct the first analysis of a classic benchmark in the presence of noise in the objective functions.
We prove that when bit-wise prior noise with rate $p \le \alpha/n$, $\alpha$ a suitable constant, is present, the algorithm computes the Pareto front as efficiently as in the noiseless case.
In contrast, when all solutions are re-evaluated in each iteration, any noise rate $p = \omega(\log(n)/n^2)$ leads to a super-polynomial runtime.
arXiv Detail & Related papers (2023-05-17T14:48:06Z)
- Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1 + (\lambda, \lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z)
- Higher degree sum-of-squares relaxations robust against oblivious outliers [14.58610686291355]
We consider estimation models of the form $Y = X^* + N$, where $X^*$ is some $m$-dimensional signal we wish to recover.
We introduce a family of algorithms that, under mild assumptions, recover the signal $X^*$ in all estimation problems for which there exists a sum-of-squares algorithm.
arXiv Detail & Related papers (2022-11-14T13:09:12Z)
- Regret Minimization with Noisy Observations [18.24975895643299]
In a typical optimization problem, the task is to pick one of a number of options with the lowest cost or the highest value.
In this paper, we study such scenarios using a regret minimization model.
We show that the naïve algorithm of picking the option with the highest observed value has regret arbitrarily worse than the optimum.
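As a hedged toy illustration of this effect (hypothetical values and noise scales, not the paper's construction): a worthless option whose observations are very noisy frequently looks best, so the naïve rule of picking the highest observed value pays a large average regret.

```python
import random

def naive_pick_regret(true_values, noise_scales, trials=10_000):
    """Average regret of picking the option with the highest observed
    (noisy) value; all numbers here are illustrative."""
    best = max(true_values)
    total = 0.0
    for _ in range(trials):
        observed = [v + random.gauss(0, s)
                    for v, s in zip(true_values, noise_scales)]
        chosen = observed.index(max(observed))
        total += best - true_values[chosen]
    return total / trials

# One safe option worth 1.0 vs. a worthless option with very noisy readings:
print(naive_pick_regret([1.0, 0.0], [0.0, 5.0]))  # roughly 0.4 on average
```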
arXiv Detail & Related papers (2022-07-19T17:48:54Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should equal the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution differs from the data distribution and can even belong to a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- ReLU Regression with Massart Noise [52.10842036932169]
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data.
We focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model.
We develop an efficient algorithm that achieves exact parameter recovery in this model.
arXiv Detail & Related papers (2021-09-10T02:13:22Z)
- Learning based signal detection for MIMO systems with unknown noise statistics [84.02122699723536]
In practice, there is little or even no statistical knowledge of the system noise, which in many cases is non-Gaussian, impulsive, and not analyzable.
This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics.
Our framework is driven by an unsupervised learning approach, where only the noise samples are required.
arXiv Detail & Related papers (2021-01-21T04:48:15Z)