A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump
Functions
- URL: http://arxiv.org/abs/2004.06702v4
- Date: Tue, 23 Nov 2021 10:22:16 GMT
- Title: A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump
Functions
- Authors: Denis Antipov, Benjamin Doerr, Vitalii Karavaev
- Abstract summary: We conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
For the isolated problem of leaving the local optimum of jump functions, we determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2} e^{\Theta(k)}$.
- Score: 8.34061303235504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The $(1 + (\lambda,\lambda))$ genetic algorithm is a relatively young evolutionary
algorithm that also tries to profit from inferior solutions. Rigorous runtime
analyses on unimodal fitness functions showed that it can indeed be faster than
classical evolutionary algorithms, though on these simple problems the gains
were only moderate.
In this work, we conduct the first runtime analysis of this algorithm on a
multimodal problem class, the jump functions benchmark. We show that with the
right parameters, the $(1 + (\lambda,\lambda))$ GA optimizes any jump function with jump size $2 \le
k \le n/4$ in expected time $O(n^{(k+1)/2} e^{O(k)} k^{-k/2})$, which
significantly and already for constant~$k$ outperforms standard mutation-based
algorithms with their $\Theta(n^k)$ runtime and standard crossover-based
algorithms with their $\tilde{O}(n^{k-1})$ runtime guarantee.
For the isolated problem of leaving the local optimum of jump functions, we
determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2}
e^{\Theta(k)}$. This suggests some general advice on how to set the parameters
of the $(1 + (\lambda,\lambda))$ GA, which might ease the further use of this algorithm.
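To make the setting concrete, here is a minimal Python sketch (not the authors' code) of the classic jump function and of one iteration of the $(1 + (\lambda,\lambda))$ GA with mutation rate $p = \lambda/n$ and crossover bias $c = 1/\lambda$; the function names and the fixed value of $\lambda$ in the driver are illustrative assumptions, not the provably optimal parameters from the paper.

```python
import random


def jump(x, k):
    """Classic jump function Jump_k on a bit string x of length n.

    Fitness is k + |x|_1 on the easy region (|x|_1 <= n - k or |x|_1 = n)
    and n - |x|_1 inside the fitness valley of width k - 1 next to the
    all-ones optimum, whose fitness is n + k.
    """
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones


def one_plus_lambda_lambda_round(x, f, lam, rng=random):
    """One iteration of the (1 + (lambda, lambda)) GA (illustrative sketch).

    Mutation phase: sample ell ~ Bin(n, lam/n) and create lam offspring by
    flipping exactly ell random bits of x; keep the best mutant.  Crossover
    phase: create lam offspring taking each bit from the best mutant with
    probability 1/lam (else from x); keep the best and accept it if it is
    at least as good as x.  Assumes lam >= 1.
    """
    n = len(x)
    ell = sum(rng.random() < lam / n for _ in range(n))

    # Mutation phase.
    best_mut = None
    for _ in range(int(lam)):
        y = list(x)
        for i in rng.sample(range(n), ell):
            y[i] = 1 - y[i]
        if best_mut is None or f(y) > f(best_mut):
            best_mut = y

    # Crossover phase with bias c = 1/lam towards the mutant.
    best_cross = None
    for _ in range(int(lam)):
        z = [best_mut[i] if rng.random() < 1 / lam else x[i] for i in range(n)]
        if best_cross is None or f(z) > f(best_cross):
            best_cross = z

    # Elitist selection.
    return best_cross if f(best_cross) >= f(x) else x


if __name__ == "__main__":
    # Hypothetical driver: optimize Jump_2 on 30 bits with a fixed lambda.
    # lam = 4 is a placeholder, not the parameter setting analyzed above.
    n, k, lam = 30, 2, 4
    f = lambda y: jump(y, k)
    x = [random.randint(0, 1) for _ in range(n)]
    while f(x) < n + k:
        x = one_plus_lambda_lambda_round(x, f, lam)
    print("optimum found:", sum(x) == n)
```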
Related papers
- A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z) - Run Time Bounds for Integer-Valued OneMax Functions [0.9012198585960443]
We consider the search space $\mathbb{Z}^n$ in terms of multi-valued decision variables $\{0,\ldots,r-1\}^n$.
We show that RLS with step size adaptation achieves an optimization time of $\Theta(n \cdot \log(|a|_1))$.
We conclude with an empirical analysis, comparing the above algorithms also with a variant of CMA-ES for discrete search spaces.
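For intuition on the algorithm named in this entry, the following is a rough sketch of randomized local search with multiplicative step-size adaptation on an integer-valued, distance-to-target fitness; the adaptation constants, the handling of the bounded domain $\{0,\ldots,r-1\}^n$, and the fitness itself are assumptions for illustration, not the exact setup of the cited paper.

```python
import random


def rls_step_adapt(a, r, max_iters=100_000, rng=random):
    """Illustrative RLS with multiplicative step-size adaptation (a sketch).

    Minimizes the l1 distance to a hidden target a over {0, ..., r-1}^n
    (assumes r >= 2): each iteration moves one coordinate by +/- the current
    step size, doubling the step after an accepted move and halving it after
    a rejected one.  These adaptation constants are arbitrary choices for
    this sketch, not the scheme analyzed in the cited paper.
    """
    n = len(a)
    x = [rng.randrange(r) for _ in range(n)]
    step = max(1, r // 2)
    dist = sum(abs(xi - ai) for xi, ai in zip(x, a))

    for _ in range(max_iters):
        if dist == 0:
            break
        i = rng.randrange(n)
        delta = step if rng.random() < 0.5 else -step
        new_xi = min(r - 1, max(0, x[i] + delta))
        new_dist = dist - abs(x[i] - a[i]) + abs(new_xi - a[i])
        if new_dist <= dist:            # accept and enlarge the step
            x[i], dist = new_xi, new_dist
            step = min(r - 1, step * 2)
        else:                           # reject and shrink the step
            step = max(1, step // 2)
    return x, dist
```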
arXiv Detail & Related papers (2023-07-21T18:49:06Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic
Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1 + (\lambda,\lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{\cdot}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - An Extended Jump Function Benchmark for the Analysis of Randomized
Search Heuristics [4.38301148531795]
We propose an extended class $\textsc{Jump}_{k,\delta}$ of jump functions that contain a valley of low fitness of width $\delta$ starting at distance $k$ from the global optimum.
We show that the new class allows experiments with wider fitness valleys, especially when they lie further away from the global optimum.
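One plausible formalization of this extended class is sketched below; the boundary conventions of the valley are an assumption inferred from the description above and may differ from those in the cited paper.

```python
def jump_ext(x, k, delta):
    """One plausible formalization of the extended class Jump_{k,delta}.

    Assumption for this sketch: the fitness valley consists of the search
    points whose Hamming distance to the all-ones optimum lies in
    {k - delta + 1, ..., k}, i.e. a valley of width delta starting at
    distance k from the optimum; the exact boundary conventions of the
    cited paper may differ.
    """
    n, ones = len(x), sum(x)
    dist = n - ones
    if k - delta < dist <= k:
        return n - ones          # inside the valley: deceptive slope
    return ones + delta          # easy slope towards the optimum
```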
arXiv Detail & Related papers (2021-05-07T07:21:10Z) - Parameter-free Stochastic Optimization of Variationally Coherent
Functions [19.468067110814808]
We design and analyze an algorithm for first-order optimization of a class of functions on $\mathbb{R}^d$.
It is the first algorithm to achieve both of these at the same time.
arXiv Detail & Related papers (2021-01-30T15:05:34Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Runtime Analysis of a Heavy-Tailed $(1+(\lambda,\lambda))$ Genetic
Algorithm on Jump Functions [9.853329403413701]
Jump functions with jump size $k$ can be optimized in an expected runtime of only $n^{(k+1)/2} k^{-k/2} e^{O(k)}$ fitness evaluations.
We show that, with natural instance-independent choices of the distribution parameters, this algorithm achieves on all jump functions with jump size at most $n/4$ a performance close to that obtained with the best instance-specific parameters in the previous work.
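As a rough illustration of what an instance-independent, heavy-tailed choice of the parameters can look like, the sketch below samples the offspring population size from a power-law distribution in each iteration; the exponent and the upper range limit are placeholder assumptions, not the constants analyzed in the cited paper.

```python
import random


def sample_power_law(upper, beta=2.5, rng=random):
    """Sample an offspring population size from a power-law distribution.

    P[lam = i] is proportional to i**(-beta) for i in {1, ..., upper}; the
    exponent beta and the range limit are placeholder assumptions for this
    sketch, not the constants analyzed in the cited paper.
    """
    support = range(1, upper + 1)
    weights = [i ** (-beta) for i in support]
    return rng.choices(support, weights=weights, k=1)[0]
```

Each iteration of such a heavy-tailed $(1 + (\lambda,\lambda))$ GA would then draw `lam = sample_power_law(n // 2)` afresh and run one round as in the sketch given after the main abstract above.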
arXiv Detail & Related papers (2020-06-05T15:57:55Z) - The $(1+(\lambda,\lambda))$ Genetic Algorithm for Permutations [0.0]
We show that the $(1 + (\lambda,\lambda))$ genetic algorithm finds the optimum in $O(n^2)$ fitness queries.
We also present the first analysis of this algorithm on a permutation-based problem called Ham.
arXiv Detail & Related papers (2020-04-18T17:04:57Z)