Already Moderate Population Sizes Provably Yield Strong Robustness to Noise
- URL: http://arxiv.org/abs/2404.02090v4
- Date: Mon, 13 May 2024 05:01:01 GMT
- Title: Already Moderate Population Sizes Provably Yield Strong Robustness to Noise
- Authors: Denis Antipov, Benjamin Doerr, Alexandra Ivanova
- Abstract summary: We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
- Score: 53.27802701790209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experience shows that typical evolutionary algorithms can cope well with stochastic disturbances such as noisy function evaluations. In this first mathematical runtime analysis of the $(1+\lambda)$ and $(1,\lambda)$ evolutionary algorithms in the presence of prior bit-wise noise, we show that both algorithms can tolerate constant noise probabilities without increasing the asymptotic runtime on the OneMax benchmark. For this, a population size $\lambda$ suffices that is at least logarithmic in the problem size $n$. The only previous result in this direction regarded the less realistic one-bit noise model, required a population size super-linear in the problem size, and proved a runtime guarantee roughly cubic in the noiseless runtime for the OneMax benchmark. Our significantly stronger results are based on the novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring. We are optimistic that the technical lemmas resulting from this insight will find applications also in future mathematical runtime analyses of evolutionary algorithms.
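To make the analyzed setting concrete, below is a minimal Python sketch of the $(1+\lambda)$ EA on OneMax under prior bit-wise noise: before each evaluation, every bit of the evaluated search point is flipped independently with probability $q$, and selection acts only on these noisy values. The mutation rate $1/n$ and the re-evaluation of the parent in each generation are illustrative choices of this sketch, not a verbatim rendering of the paper's formal algorithm.

```python
import random

def onemax(x):
    """Noiseless OneMax: number of one-bits in x."""
    return sum(x)

def noisy_onemax(x, q):
    """Prior bit-wise noise: each bit is flipped independently with
    probability q before the noiseless fitness is computed."""
    return onemax([b ^ (random.random() < q) for b in x])

def one_plus_lambda_ea(n, lam, q, max_gens=10**6):
    """(1+lambda) EA with standard bit mutation (rate 1/n) on noisy
    OneMax; selection sees only the noisy fitness values."""
    parent = [random.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        if onemax(parent) == n:  # true optimum reached (bookkeeping only)
            return parent, gen
        # Create lambda offspring by independent standard bit mutation.
        offspring = [[b ^ (random.random() < 1.0 / n) for b in parent]
                     for _ in range(lam)]
        scored = [(noisy_onemax(y, q), y) for y in offspring]
        best_fit, best = max(scored, key=lambda t: t[0])
        # Plus-selection on noisy values; the parent is re-evaluated
        # noisily each generation (one possible modeling choice).
        if best_fit >= noisy_onemax(parent, q):
            parent = best
    return parent, max_gens
```

Per the abstract, a population size $\lambda$ that is at least logarithmic in $n$ already suffices for the asymptotic runtime on OneMax to match the noiseless case under constant noise probability $q$.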
Related papers
- Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It [10.165640083594573]
We show that the need for re-evaluations could be overestimated and that re-evaluations can, in fact, be detrimental.
This first analysis of an evolutionary algorithm solving a single-objective noisy problem without re-evaluations could indicate that such algorithms cope with noise much better than previously thought.
arXiv Detail & Related papers (2024-08-31T00:10:10Z)
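As a hedged illustration of the "no re-evaluations" idea from the paper above, the sketch below shows a (1+1) EA that evaluates every search point exactly once and keeps trusting that stored, possibly noisy, value. The `noisy_fitness` callable (for instance, `noisy_onemax` from the earlier sketch) is a placeholder, and the loop structure is illustrative rather than the paper's exact algorithm.

```python
import random

def one_plus_one_ea_no_reeval(n, noisy_fitness, max_gens=10**6):
    """(1+1) EA that ignores the noise: each search point is evaluated
    once, and the stored value is never refreshed by re-evaluation."""
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fit = noisy_fitness(parent)    # evaluated once, then trusted
    for _ in range(max_gens):
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        child_fit = noisy_fitness(child)  # the only evaluation of child
        if child_fit >= parent_fit:       # compare stored values only
            parent, parent_fit = child, child_fit
    return parent
```

Re-evaluating the parent in every generation, by contrast, costs extra evaluations and, per the paper's findings, can even hurt.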
- Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
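For orientation, here is a generic sketch of zero-order projected gradient descent of the kind studied in that paper: the gradient is estimated purely from (possibly noisy) function values. The two-point randomized estimator below is one standard choice, not necessarily the paper's refined estimator for highly smooth functions; `project` and all parameter values are assumptions of this sketch.

```python
import numpy as np

def zero_order_pgd(f, project, x0, steps, lr=0.05, h=1e-3, seed=0):
    """Projected gradient descent with a two-point zero-order gradient
    estimate: query f at x + h*u and x - h*u for a random unit direction
    u, form a finite-difference estimate, step, and project back."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Finite-difference gradient estimate along the random direction.
        g = d * (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u
        x = project(x - lr * g)
    return x

# Example: minimize a noisy quadratic over the unit ball.
f = lambda z: float(np.dot(z, z) + 1e-3 * np.random.randn())
project = lambda z: z / max(1.0, float(np.linalg.norm(z)))
x_hat = zero_order_pgd(f, project, x0=np.ones(5), steps=2000)
```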
- Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise [7.421135890148154]
We conduct the first analysis of a classic benchmark in the presence of noise in the objective functions.
We prove that when bit-wise prior noise with rate $p \le \alpha/n$, for a suitable constant $\alpha$, is present, the algorithm computes the Pareto front in asymptotically the same time as in the noiseless setting.
We also prove that when all solutions are re-evaluated in each iteration, any noise rate $p = \omega(\log(n)/n^2)$ leads to a super-polynomial runtime.
arXiv Detail & Related papers (2023-05-17T14:48:06Z)
- Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1 + (\lambda, \lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z)
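To show the structure of the algorithm named in the title above, here is a hedged Python sketch of the $(1 + (\lambda, \lambda))$ GA with the standard parametrization (mutation rate $p = \lambda/n$, crossover bias $c = 1/\lambda$). In the noisy setting of the cited paper the `fitness` callable would return noisy values; the details here are illustrative.

```python
import random

def one_plus_lambda_lambda_ga(n, lam, fitness, max_gens=10**5):
    """(1 + (lambda, lambda)) GA sketch: a mutation phase with strength
    ell ~ Bin(n, lambda/n), then a crossover phase biased toward the
    parent (each bit taken from the mutation winner with prob. 1/lambda)."""
    p, c = lam / n, 1.0 / lam
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_gens):
        # Mutation phase: all lambda mutants flip the same ell positions count.
        ell = sum(random.random() < p for _ in range(n))
        mutants = []
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] ^= 1
            mutants.append(y)
        xp = max(mutants, key=fitness)            # mutation-phase winner
        # Crossover phase: biased uniform crossover between x and xp.
        crosses = [[xpi if random.random() < c else xi
                    for xi, xpi in zip(x, xp)] for _ in range(lam)]
        y = max(crosses, key=fitness)
        if fitness(y) >= fitness(x):              # elitist replacement
            x = y
    return x
```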
- Higher degree sum-of-squares relaxations robust against oblivious outliers [14.58610686291355]
We consider estimation models of the form $Y = X^* + N$, where $X^*$ is some $m$-dimensional signal we wish to recover.
We introduce a family of algorithms that, under mild assumptions, recover the signal $X^*$ in all estimation problems for which there exists a sum-of-squares algorithm.
arXiv Detail & Related papers (2022-11-14T13:09:12Z)
- Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence [69.65563161962245]
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method.
We show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage.
arXiv Detail & Related papers (2022-04-20T07:14:21Z)
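As a rough illustration of Hessian averaging, the sketch below runs a Newton iteration in which the per-step noisy Hessian estimate is replaced by a running weighted average of all estimates seen so far. The uniform weights used here are the simplest instance, whereas the cited paper analyzes a universal weighting scheme with provably superlinear local convergence; `grad` and `hess_estimate` are assumed callables.

```python
import numpy as np

def averaged_newton(grad, hess_estimate, x0, steps):
    """Stochastic Newton method with Hessian averaging: each Newton step
    uses the running average of all noisy Hessian samples, whose noise
    shrinks as more samples are folded in."""
    x = np.asarray(x0, dtype=float)
    h_avg = None
    for t in range(1, steps + 1):
        h_t = np.asarray(hess_estimate(x), dtype=float)  # noisy Hessian sample
        w = 1.0 / t                                      # uniform averaging weight
        h_avg = h_t if h_avg is None else (1.0 - w) * h_avg + w * h_t
        x = x - np.linalg.solve(h_avg, grad(x))          # Newton step
    return x
```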
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z)
- Exponential Upper Bounds for the Runtime of Randomized Search Heuristics [9.853329403413701]
We show that any of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the $(1+1)$ evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function in at most exponential expected time.
We also prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark.
arXiv Detail & Related papers (2020-04-13T00:24:33Z)