Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm to Overcome the Noise
- URL: http://arxiv.org/abs/2305.04553v1
- Date: Mon, 8 May 2023 08:49:01 GMT
- Title: Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm to Overcome the Noise
- Authors: Alexandra Ivanova, Denis Antipov, Benjamin Doerr
- Abstract summary: Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1+(\lambda,\lambda))$ genetic algorithm is robust to noise.
- Score: 76.24156145566425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolutionary algorithms are known to be robust to noise in the evaluation of
the fitness. In particular, larger offspring population sizes often lead to
strong robustness. We analyze to what extent the $(1+(\lambda,\lambda))$
genetic algorithm is robust to noise. This algorithm also works with larger
offspring population sizes, but an intermediate selection step and a
non-standard use of crossover as repair mechanism could render this algorithm
less robust than, e.g., the simple $(1+\lambda)$ evolutionary algorithm. Our
experimental analysis on several classic benchmark problems shows that this
difficulty does not arise. Surprisingly, in many situations this algorithm is
even more robust to noise than the $(1+\lambda)$~EA.
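The mechanism described above can be made concrete with a small sketch. The following Python code is an illustration, not the authors' implementation: it shows the $(1+(\lambda,\lambda))$ GA with its standard parameter choices $p = \lambda/n$ (mutation rate) and $c = 1/\lambda$ (crossover bias) on the OneMax benchmark, with an optional bit-wise prior noise rate `q`. All function names and the exact noise model are assumptions made for this sketch.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits."""
    return sum(x)

def noisy(f, x, q):
    """Bit-wise prior noise (an assumed model): each bit is flipped
    independently with probability q before the exact evaluation."""
    xn = [b ^ (random.random() < q) for b in x]
    return f(xn)

def one_plus_lambda_lambda_ga(n, lam, q=0.0, max_evals=100_000):
    """Minimal sketch of the (1+(lambda,lambda)) GA on OneMax.

    The parent's stored fitness is not re-evaluated; with q > 0 this is
    one possible way to handle noise, not necessarily the paper's setup.
    """
    p, c = lam / n, 1.0 / lam
    x = [random.randint(0, 1) for _ in range(n)]
    fx = noisy(onemax, x, q)
    evals = 1
    while fx < n and evals < max_evals:
        # Mutation phase: flip the same number ell of bits in each offspring.
        ell = sum(random.random() < p for _ in range(n))  # ell ~ Bin(n, p)
        best_mut, best_mut_f = None, -1
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] ^= 1
            fy = noisy(onemax, y, q); evals += 1
            if fy > best_mut_f:               # intermediate selection
                best_mut, best_mut_f = y, fy
        # Crossover phase: biased crossover with the parent acts as a
        # repair mechanism, keeping most good bits of the parent.
        best_cross, best_cross_f = None, -1
        for _ in range(lam):
            y = [xm if random.random() < c else xp
                 for xp, xm in zip(x, best_mut)]
            fy = noisy(onemax, y, q); evals += 1
            if fy > best_cross_f:
                best_cross, best_cross_f = y, fy
        # Final elitist selection against the parent.
        if best_cross_f >= fx:
            x, fx = best_cross, best_cross_f
    return x, evals

x, evals = one_plus_lambda_lambda_ga(n=50, lam=8)
print(onemax(x), evals)
```

Raising `q` above zero lets one observe experimentally how the intermediate selection and the crossover-as-repair step behave under noisy evaluations, which is the question the paper studies.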
Related papers
- Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It [10.165640083594573]
We show that the need for re-evaluations could be overestimated and that re-evaluations can, in fact, be detrimental.
This first analysis of an evolutionary algorithm solving a single-objective noisy problem without re-evaluations could indicate that such algorithms cope with noise much better than previously thought.
arXiv Detail & Related papers (2024-08-31T00:10:10Z) - Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise [7.421135890148154]
We conduct the first analysis of a classic benchmark in the presence of noise in the objective functions.
We prove that the algorithm remains efficient when bit-wise prior noise with rate $p \le \alpha/n$ is present, where $\alpha$ is a suitable constant.
We prove that when all solutions are reevaluated in each iteration, then any noise rate $p = \omega(\log(n)/n^2)$ leads to a super-polynomial runtime.
arXiv Detail & Related papers (2023-05-17T14:48:06Z) - Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - The $(1+(\lambda,\lambda))$ Genetic Algorithm for Permutations [0.0]
We show that the $(1+(\lambda,\lambda))$ genetic algorithm finds the optimum in $O(n^2)$ fitness queries.
We also present the first analysis of this algorithm on a permutation-based problem called Ham.
arXiv Detail & Related papers (2020-04-18T17:04:57Z) - Exponential Upper Bounds for the Runtime of Randomized Search Heuristics [9.853329403413701]
We show that any of the algorithms randomized local search, the Metropolis algorithm, simulated annealing, and the (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function in at most exponential expected time.
We also prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark.
arXiv Detail & Related papers (2020-04-13T00:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.