Runtime Analysis of a Heavy-Tailed $(1+(\lambda,\lambda))$ Genetic
Algorithm on Jump Functions
- URL: http://arxiv.org/abs/2006.03523v1
- Date: Fri, 5 Jun 2020 15:57:55 GMT
- Title: Runtime Analysis of a Heavy-Tailed $(1+(\lambda,\lambda))$ Genetic
Algorithm on Jump Functions
- Authors: Denis Antipov, Benjamin Doerr
- Abstract summary: The jump function with jump size $k$ can be optimized in an expected runtime of only $n^{(k+1)/2} k^{-k/2} e^{O(k)}$ fitness evaluations.
We show that this algorithm with natural instance-independent choices of the distribution parameters on all jump functions with jump size at most $n/4$ has a performance close to what the best instance-specific parameters in the previous work obtained.
- Score: 9.853329403413701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It was recently observed that the $(1+(\lambda,\lambda))$ genetic algorithm
can comparably easily escape the local optimum of the jump functions benchmark.
Consequently, this algorithm can optimize the jump function with jump size $k$
in an expected runtime of only $n^{(k + 1)/2}k^{-k/2}e^{O(k)}$ fitness
evaluations (Antipov, Doerr, Karavaev (GECCO 2020)). To obtain this
performance, however, a non-standard parameter setting depending on the jump
size $k$ was used.
To overcome this difficulty, we propose to choose two parameters of the
$(1+(\lambda,\lambda))$ genetic algorithm randomly from a power-law
distribution. Via a mathematical runtime analysis, we show that this algorithm
with natural instance-independent choices of the distribution parameters on all
jump functions with jump size at most $n/4$ has a performance close to what the
best instance-specific parameters in the previous work obtained. This price for
instance-independence can be made as small as an $O(n\log(n))$ factor. Given
the difficulty of the jump problem and the runtime losses from using mildly
suboptimal fixed parameters (also discussed in this work), this appears to be a
fair price.
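The abstract names two concrete ingredients: the jump benchmark with jump size $k$, and a power-law (heavy-tailed) distribution from which two algorithm parameters are drawn. Below is a minimal Python sketch of both, assuming the standard textbook definition of $\textsc{Jump}_k$ and a power law on $\{1,\dots,u\}$ with exponent $\beta$; the exact parameterization analyzed by Antipov and Doerr is specified in the paper itself, not reproduced here.

```python
import random

def jump(x, k):
    """Classic Jump_k benchmark: fitness grows with the number of
    one-bits, except in a valley of width k just below the optimum,
    where it decreases. (Standard definition; the paper's exact
    scaling may differ by an additive constant.)"""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the fitness valley

def power_law_sample(u, beta, rng=random):
    """Draw i from {1, ..., u} with Pr[i] proportional to i**(-beta).
    This is the heavy-tailed ingredient: occasional large samples let
    the algorithm attempt large jumps without knowing k in advance."""
    weights = [i ** (-beta) for i in range(1, u + 1)]
    return rng.choices(range(1, u + 1), weights=weights, k=1)[0]
```

The heavy-tail intuition: with $\beta$ a little above two, most samples are small and iterations stay cheap, yet any particular large value $s$ still appears with probability on the order of $s^{-\beta}$, which is only polynomially small; this is roughly why instance-independent parameters pay only a modest price here.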
Related papers
- Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature
and Bayesian Optimization [51.533164528799084]
We show that to estimate the normalizing constant within a small relative error, the level of difficulty depends on the value of $\lambda$.
We find that this pattern holds true even when the function evaluations are noisy.
arXiv Detail & Related papers (2024-01-11T07:45:09Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that this is the case for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Extra-Newton: A First Approach to Noise-Adaptive Accelerated
Second-Order Methods [57.050204432302195]
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions.
Our algorithm achieves $O(\sigma/\sqrt{T})$ convergence when the oracle feedback has variance $\sigma^2$, and improves to $O(1/T^3)$ convergence with deterministic oracles.
arXiv Detail & Related papers (2022-11-03T14:12:51Z) - Near-Optimal Lower Bounds For Convex Optimization For All Orders of
Smoothness [26.71898403195793]
We study the complexity of optimizing highly smooth convex functions.
For a positive integer $p$, we want to find an $\epsilon$-approximate minimum of a convex function $f$.
We prove a new lower bound that matches this bound (up to log factors), and holds not only for randomized algorithms, but also for quantum algorithms.
arXiv Detail & Related papers (2021-12-02T10:51:43Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - An Extended Jump Function Benchmark for the Analysis of Randomized
Search Heuristics [4.38301148531795]
We propose an extended class $\textsc{Jump}_{k,\delta}$ of jump functions that contain a valley of low fitness of width $\delta$.
We show that the new class allows experiments with wider fitness valleys, especially when they lie further away from the global optimum.
arXiv Detail & Related papers (2021-05-07T07:21:10Z) - Parameter-free Stochastic Optimization of Variationally Coherent
Functions [19.468067110814808]
We design and analyze an algorithm for first-order optimization of a class of functions on $\mathbb{R}^d$.
It is the first algorithm to achieve both of these properties at the same time.
arXiv Detail & Related papers (2021-01-30T15:05:34Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Our results provide a lower bound against which we can measure common evolutionary approaches.
arXiv Detail & Related papers (2020-06-20T01:23:14Z) - A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump
Functions [8.34061303235504]
We conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
For the isolated problem of leaving the local optimum of jump functions, we determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2} e^{\Theta(k)}$.
arXiv Detail & Related papers (2020-04-14T17:54:12Z)
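Since both the headline paper and the last entry above analyze the $(1+(\lambda,\lambda))$ GA on jump functions, a minimal sketch of one iteration of the standard algorithm may help orientation. It follows the usual description (a mutation phase with rate $p$, a crossover phase with bias $c$, then elitist selection); the common coupling is $p = \lambda/n$ and $c = 1/\lambda$, while the heavy-tailed variant of the headline paper draws parameters from power-law distributions instead. Tie-breaking and the exact counting of fitness evaluations are simplified assumptions here, not the analyzed implementation.

```python
import random

def one_iteration(x, f, lam, p, c, rng=random):
    """One iteration of the (1+(lambda,lambda)) GA: a mutation phase
    with rate p, a crossover phase with bias c, elitist selection."""
    n = len(x)
    # Mutation phase: one shared strength ell ~ Bin(n, p), then
    # lam mutants, each flipping exactly ell distinct random bits.
    ell = sum(rng.random() < p for _ in range(n))
    mutants = []
    for _ in range(lam):
        y = list(x)
        for i in rng.sample(range(n), ell):
            y[i] ^= 1
        mutants.append(y)
    x_prime = max(mutants, key=f)  # mutation-phase winner
    # Crossover phase: each bit is taken from the mutation winner
    # with probability c, otherwise from the parent x.
    offspring = [
        [x_prime[i] if rng.random() < c else x[i] for i in range(n)]
        for _ in range(lam)
    ]
    y_best = max(offspring, key=f)
    # Elitist selection: keep the parent unless some offspring is
    # at least as good.
    return y_best if f(y_best) >= f(x) else x
```

Together with the earlier sketch, one would use, e.g., `f = lambda z: jump(z, k)` and iterate `x = one_iteration(x, f, lam, p, c)` until the optimum is found.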