Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms:
Why Success Rates Matter
- URL: http://arxiv.org/abs/2104.05624v3
- Date: Wed, 12 Oct 2022 14:40:46 GMT
- Title: Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms:
Why Success Rates Matter
- Authors: Mario Alejandro Hevia Fajardo and Dirk Sudholt
- Abstract summary: Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms can outperform the best static parameters in EAs on discrete problems.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evolutionary algorithms (EAs) are general-purpose optimisers that come with
several parameters like the sizes of parent and offspring populations or the
mutation rate. It is well known that the performance of EAs may depend
drastically on these parameters. Recent theoretical studies have shown that
self-adjusting parameter control mechanisms that tune parameters during the
algorithm run can provably outperform the best static parameters in EAs on
discrete problems. However, the majority of these studies concerned elitist EAs
and we do not have a clear answer on whether the same mechanisms can be applied
to non-elitist EAs.
We study one of the best-known parameter control mechanisms, the one-fifth
success rule, to control the offspring population size $\lambda$ in the
non-elitist $(1,\lambda)$ EA. It is known that the $(1,\lambda)$ EA has a sharp
threshold with respect to the choice of $\lambda$ where the expected runtime on
the benchmark function OneMax changes from polynomial to exponential time.
Hence, it is not clear whether parameter control mechanisms are able to find
and maintain suitable values of $\lambda$.
For OneMax we show that the answer crucially depends on the success rate $s$
(i.e., a one-$(s+1)$-th success rule). We prove that, if the success rate is
appropriately small, the self-adjusting $(1,\lambda)$ EA optimises OneMax in
$O(n)$ expected generations and $O(n \log n)$ expected evaluations, the best
possible runtime for any unary unbiased black-box algorithm. A small success
rate is crucial: we also show that if the success rate is too large, the
algorithm has an exponential runtime on OneMax and other functions with similar
characteristics.
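
A brief worked note on the terminology (this is the standard reading of such success-based rules, stated here as an assumption rather than a quote from the paper): if a successful generation multiplies $\lambda$ by $1/F$ and an unsuccessful one multiplies it by $F^{1/s}$ for some update factor $F > 1$, then $\log \lambda$ is stationary in expectation exactly when the per-generation success probability $p$ satisfies
$$ p \log F = (1 - p)\,\frac{\log F}{s} \quad\Longleftrightarrow\quad p = \frac{1}{s+1}, $$
so the parameter $s$ targets a success rate of one-$(s+1)$-th; $s = 4$ recovers the classical one-fifth success rule.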
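
To make the mechanism concrete, the following is a minimal Python sketch (under the same assumptions as the note above) of a non-elitist $(1,\lambda)$ EA on OneMax whose offspring population size is controlled by a one-$(s+1)$-th success rule. The update factor $F$, the caps on $\lambda$, and the default parameter values are illustrative choices, not the exact scheme analysed in the paper.

```python
import random


def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)


def self_adjusting_one_comma_lambda_ea(n, s=0.5, F=1.5, max_evals=10**7):
    """Sketch of a non-elitist (1,lambda) EA whose offspring population
    size lambda is controlled by a one-(s+1)-th success rule.

    After a successful generation (best offspring strictly fitter than
    the parent) lambda is divided by F; otherwise it is multiplied by
    F**(1/s). The factor F, the caps on lambda, and the defaults are
    illustrative assumptions, not the paper's exact scheme.
    """
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fitness = onemax(parent)
    lam = 1.0          # real-valued; rounded when creating offspring
    evals = 0

    while parent_fitness < n and evals < max_evals:
        # Create round(lam) offspring by standard bit mutation with rate 1/n.
        offspring = []
        for _ in range(max(1, round(lam))):
            child = [bit ^ (random.random() < 1.0 / n) for bit in parent]
            offspring.append((onemax(child), child))
            evals += 1

        best_fitness, best_child = max(offspring, key=lambda t: t[0])
        success = best_fitness > parent_fitness

        # Comma (non-elitist) selection: the parent is always replaced.
        parent, parent_fitness = best_child, best_fitness

        # One-(s+1)-th success rule, with lambda kept in [1, n].
        lam = max(1.0, lam / F) if success else min(float(n), lam * F ** (1.0 / s))

    return evals


if __name__ == "__main__":
    print(self_adjusting_one_comma_lambda_ea(100))
```

Calling the sketch with a small $s$, as in the default, corresponds to the regime the abstract identifies as efficient; choosing $s$ large corresponds to the regime shown to lead to exponential runtimes.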
Related papers
- Self-Adjusting Evolutionary Algorithms Are Slow on Multimodal Landscapes (2024-04-18)
  We show that the positive results do not extend to other types of local optima.
  On the distorted OneMax benchmark, the self-adjusting $(1,\lambda)$ EA is slowed
  down just as elitist algorithms are, because self-adaptation prevents the
  algorithm from escaping from local optima.
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise (2024-04-02)
  We show that two evolutionary algorithms can tolerate constant noise probabilities
  without increasing the runtime on the OneMax benchmark.
  Our results are based on a novel proof argument that the noiseless offspring can be
  seen as a biased uniform crossover between the parent and the noisy offspring.
- Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature and
  Bayesian Optimization (2024-01-11)
  We show that to estimate the normalizing constant within a small relative error,
  the level of difficulty depends on the value of $\lambda$.
  We find that this pattern holds true even when the function evaluations are noisy.
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
  Optimization Problems (2023-06-30)
  We introduce two oblivious mirror descent algorithms based on a complementary
  composite setting.
  Remarkably, both algorithms work without prior knowledge of the Lipschitz constant
  or smoothness of the objective function.
  We show how to extend our framework to scale and demonstrate the efficiency and
  robustness of our methods on large scale semidefinite programs.
- Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm
  to Overcome the Noise (2023-05-08)
  Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
  We analyze to what extent the $(1 + (\lambda, \lambda))$ genetic algorithm is robust to noise.
- Refined Regret for Adversarial MDPs with Linear Function Approximation (2023-01-30)
  We consider learning in an adversarial Markov Decision Process (MDP) where the loss
  functions can change arbitrarily over $K$ episodes.
  This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$
  in the same setting.
- Hard Problems are Easier for Success-based Parameter Control (2022-04-12)
  Self-adjusting parameters in evolutionary algorithms can match or outperform the best
  fixed parameters on discrete problems.
  We show that self-adjustment works as intended in the absence of easy slopes.
  We discuss implications for the everywhere hard function LeadingOnes and a new class
  OneMaxBlocks of everywhere hard functions with tunable difficulty.
- Bayesian Optimistic Optimisation with Exponentially Decaying Regret (2021-05-10)
  The current practical BO algorithms have regret bounds ranging from
  $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is
  the number of evaluations.
  This paper explores the possibility of improving the regret bound in the noiseless
  setting by intertwining concepts from BO and tree-based optimistic optimisation.
  We propose the BOO algorithm, a first practical approach which can achieve an
  exponential regret bound of order $\mathcal{O}(N^{-\sqrt{\dots}})$.
- When Non-Elitism Meets Time-Linkage Problems (2021-04-14)
  We analyze the influence of non-elitism by comparing the performance of the elitist
  $(1+\lambda)$ EA and its non-elitist counterpart, the $(1,\lambda)$ EA.
  We prove that, with probability $1$, the $(1,\lambda)$ EA can reach the global optimum
  and its expected runtime is $O(n^{3+c}\log n)$ with $\lambda = c \log_{\frac{e}{e-1}} n$
  for a constant $c \ge 1$.
- Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax (2020-06-20)
  We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
  We compute, for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$, which
  mutation rates minimize the expected running time.
  Our results provide, among other things, a lower bound against which common
  evolutionary approaches can be measured.
- Does Comma Selection Help To Cope With Local Optima (2020-04-02)
  We show that the $(\mu,\lambda)$ EA does not lead to a runtime advantage over the
  $(\mu+\lambda)$ EA.
  This is the first runtime result for a non-elitist algorithm on a multi-modal problem
  that is tight apart from lower order terms.
This list is automatically generated from the titles and abstracts of the papers on this
site. The site does not guarantee the quality of this information and is not responsible
for any consequences of its use.