Self-Adjusting Evolutionary Algorithms Are Slow on Multimodal Landscapes
- URL: http://arxiv.org/abs/2404.12047v1
- Date: Thu, 18 Apr 2024 10:01:08 GMT
- Title: Self-Adjusting Evolutionary Algorithms Are Slow on Multimodal Landscapes
- Authors: Johannes Lengler, Konstantin Sturm
- Abstract summary: We show that the positive results do not extend to other types of local optima.
On the distorted OneMax benchmark, the self-adjusting $(1, \lambda)$-EA is slowed down just as much as elitist algorithms are, because self-adaptation prevents the algorithm from escaping from local optima.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The one-fifth rule and its generalizations are a classical parameter control mechanism in discrete domains. They have also been transferred to control the offspring population size of the $(1, \lambda)$-EA. This has been shown to work very well for hill-climbing, and, combined with a restart mechanism, Hevia Fajardo and Sudholt recently showed that it drastically improves performance on the multimodal problem Cliff. In this work we show that the positive results do not extend to other types of local optima. On the distorted OneMax benchmark, the self-adjusting $(1, \lambda)$-EA is slowed down just as much as elitist algorithms are, because self-adaptation prevents the algorithm from escaping from local optima. This makes the self-adaptive algorithm considerably worse than good static parameter choices, which do allow the algorithm to escape from local optima efficiently. We show this theoretically and complement the result with empirical runtime results.
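The control mechanism in question is easy to state concretely. Below is a minimal Python sketch of a self-adjusting $(1, \lambda)$-EA on OneMax with a one-fifth-style success rule; the constants (F = 1.5, the s = 4 failure exponent) and function names are illustrative choices, not taken from the paper.
```python
import random

def onemax(x):
    """Fitness of a bit string: the number of one-bits (maximized)."""
    return sum(x)

def self_adjusting_one_comma_lambda_ea(n, F=1.5, max_evals=200_000):
    """Minimal sketch of a self-adjusting (1,lambda)-EA on OneMax.

    The offspring population size lam is controlled by a one-fifth-style
    success rule: shrink lam after a generation that found a strictly
    better offspring, grow it otherwise. F and max_evals are
    illustrative, not the paper's exact constants.
    """
    parent = [random.randint(0, 1) for _ in range(n)]
    fit = onemax(parent)
    lam, evals = 1.0, 0
    while fit < n and evals < max_evals:
        offspring = []
        for _ in range(max(1, round(lam))):
            # Standard bit mutation: flip each bit independently with prob 1/n.
            child = [b ^ (random.random() < 1.0 / n) for b in parent]
            offspring.append((onemax(child), child))
            evals += 1
        best_fit, best = max(offspring, key=lambda t: t[0])
        success = best_fit > fit
        # Non-elitist: the best offspring always replaces the parent.
        parent, fit = best, best_fit
        # One-fifth-style update (s = 4): lam/F on success, lam*F^(1/4) on failure.
        lam = max(1.0, lam / F) if success else lam * F ** 0.25
    return evals, fit
```
Intuitively, during stagnation this rule keeps increasing $\lambda$, and a $(1, \lambda)$-EA with large $\lambda$ behaves almost elitist, which is exactly the failure mode the abstract describes on distorted OneMax.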
Related papers
- AlphaZeroES: Direct score maximization outperforms planning loss minimization [61.17702187957206]
Planning at execution time has been shown to dramatically improve performance for agents in both single-agent and multi-agent settings.
One family of approaches to planning at execution time comprises AlphaZero and its variants, which use Monte Carlo Tree Search together with a neural network that guides the search by predicting state values and action probabilities.
We show that, across multiple environments, directly maximizing the episode score outperforms minimizing the planning loss.
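As background for the search component mentioned above, here is a minimal sketch of the PUCT selection rule that AlphaZero-style MCTS uses to combine predicted action probabilities (priors) with value estimates; the `Child` container and the constant `c_puct` are illustrative, not from the paper.
```python
import math
from dataclasses import dataclass

@dataclass
class Child:
    prior: float        # network policy probability P(s, a)
    visits: int = 0     # visit count N(s, a)
    value: float = 0.0  # mean action value Q(s, a)

def puct_select(children, c_puct=1.5):
    """Pick the action maximizing Q(s,a) + c * P(s,a) * sqrt(N) / (1 + N(s,a))."""
    total = sum(c.visits for c in children.values())
    def score(c):
        return c.value + c_puct * c.prior * math.sqrt(total + 1) / (1 + c.visits)
    # e.g. puct_select({"left": Child(prior=0.7), "right": Child(prior=0.3)}) -> "left"
    return max(children, key=lambda a: score(children[a]))
```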
arXiv Detail & Related papers (2024-06-12T23:00:59Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential-rate), ab initio (hyperparameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
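The summary does not spell out the update rule, so the following is a hypothetical stand-in rather than the paper's exact method: a multiplicative (hence exponential-rate) step-size adaptation that reacts to the agreement of successive gradients.
```python
import numpy as np

def adaptive_gd(grad, x0, alpha0=1e-3, up=2.0, down=0.5, steps=1000):
    """Hypothetical sketch: multiplicative (exponential-rate) step-size adaptation.

    Grows alpha while successive gradients agree (positive cosine) and
    shrinks it when they disagree. This is a generic stand-in, not the
    exact ELRA update; all constants are illustrative.
    """
    x, alpha = np.asarray(x0, dtype=float), alpha0
    g_prev = grad(x)
    for _ in range(steps):
        x = x - alpha * g_prev
        g = grad(x)
        cos = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev) + 1e-12)
        alpha *= up if cos > 0 else down  # multiplicative, hence exponential in time
        g_prev = g
    return x
```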
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
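One common way to make "constraints on velocities" concrete (a hedged sketch assuming smooth inequality constraints $g_i(x) \le 0$, not necessarily the paper's exact formulation):
```latex
% Sketch: the velocity v = \dot{x}, rather than the position x, is
% constrained; alpha > 0 pulls violated constraints back toward feasibility.
\[
  \dot{x} \;=\; \arg\min_{v \in V_\alpha(x)} \|v + \nabla f(x)\|^2,
  \qquad
  V_\alpha(x) \;=\; \bigl\{\, v \;:\; \nabla g_i(x)^\top v \le -\alpha\, g_i(x) \ \ \forall i \,\bigr\}.
\]
```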
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player to play against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate solution within a polynomial number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms:
Why Success Rates Matter [0.0]
Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms can outperform the best static parameters in EAs on discrete problems.
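The success-based control mechanism studied in this line of work is usually stated as follows (a sketch of the $(1 : s+1)$-success rule with update strength $F > 1$; exact constants vary by paper):
```latex
% The (1 : s+1)-success rule for the offspring population size (sketch):
\[
  \lambda \;\leftarrow\;
  \begin{cases}
    \max\{\lambda / F,\; 1\} & \text{if the generation found a strictly better offspring,}\\
    F^{1/s}\,\lambda         & \text{otherwise,}
  \end{cases}
\]
% so \lambda is stable when about one in s+1 generations succeeds;
% s = 4 recovers the classical one-fifth rule.
```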
arXiv Detail & Related papers (2021-04-12T16:44:54Z) - Optimal Static Mutation Strength Distributions for the $(1+\lambda)$
Evolutionary Algorithm on OneMax [1.0965065178451106]
We show that, for large enough population sizes, such optimal distributions may be surprisingly complicated and counter-intuitive.
arXiv Detail & Related papers (2021-02-09T16:56:25Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate to jointly optimize over their data.
Our algorithm is based on a new concentration technique, and its sample complexity matches the best known theoretical bounds.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
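To illustrate the general shape of Byzantine-resilient aggregation (not the paper's concentration-based filtering, which is more refined), here is a generic coordinate-wise trimmed mean; the trim fraction is an illustrative parameter.
```python
import numpy as np

def trimmed_mean(grads, trim_frac=0.1):
    """Coordinate-wise trimmed mean over per-machine gradients.

    grads: array of shape (m, d), one gradient per machine.
    Drops the trim_frac most extreme values per coordinate on each side
    before averaging; requires trim_frac < 0.5.
    """
    g = np.sort(np.asarray(grads), axis=0)  # sort each coordinate across machines
    k = int(trim_frac * g.shape[0])         # number of machines trimmed per side
    return g[k : g.shape[0] - k].mean(axis=0)
```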
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
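As context, private convex optimization algorithms in this line of work are typically built from noisy gradient steps. The sketch below shows the standard clip-and-add-Gaussian-noise template; the clipping threshold and noise scale are illustrative, and calibrating the noise to a target $(\varepsilon, \delta)$ budget is omitted.
```python
import numpy as np

def noisy_sgd_step(w, grad, lr, clip=1.0, sigma=1.0, rng=None):
    """One clip-and-noise gradient step (Gaussian mechanism template).

    grad is the raw gradient vector; clip bounds the per-step sensitivity
    and sigma scales the Gaussian noise (privacy calibration omitted).
    """
    rng = rng or np.random.default_rng()
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # clip to norm <= clip
    g = g + rng.normal(scale=sigma * clip, size=g.shape)        # add Gaussian noise
    return w - lr * g
```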
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Self-Adjusting Evolutionary Algorithms for Multimodal Optimization [0.0]
Self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces.
We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms.
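A simplified sketch of the stagnation detection idea, attached here to a (1+1) EA: count unsuccessful steps at the current mutation strength and raise the strength once an improvement at that strength has become sufficiently unlikely. The threshold below follows the spirit of the mechanism rather than the paper's exact choice.
```python
import math
import random

def stagnation_detection_ea(f, n, max_evals=100_000, R=None):
    """Simplified (1+1) EA with a stagnation detection module.

    Counts unsuccessful steps at mutation strength r; once an r-bit
    improvement has become sufficiently unlikely, r is increased.
    Constants are illustrative, not the paper's exact choice.
    """
    R = R or n  # controls the allowed failure probability (illustrative default)
    x = [random.randint(0, 1) for _ in range(n)]
    fx, r, fails, evals = f(x), 1, 0, 0
    while evals < max_evals:
        y = [b ^ (random.random() < r / n) for b in x]  # flip each bit w.p. r/n
        fy = f(y)
        evals += 1
        if fy > fx:                       # strict improvement: reset the module
            x, fx, r, fails = y, fy, 1, 0
        else:
            fails += 1
            # Roughly the expected waiting time for an r-bit improvement.
            if fails > 2 * (math.e * n / r) ** r * math.log(n * R):
                r, fails = min(r + 1, n // 2), 0
    return x, fx
```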
arXiv Detail & Related papers (2020-04-07T11:02:10Z) - Does Comma Selection Help To Cope With Local Optima? [9.853329403413701]
We show that the $(\mu,\lambda)$-EA does not lead to a runtime advantage over the $(\mu+\lambda)$-EA.
This is the first runtime result for a non-elitist algorithm on a multi-modal problem that is tight apart from lower order terms.
arXiv Detail & Related papers (2020-04-02T21:39:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.