Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function
- URL: http://arxiv.org/abs/2010.13428v2
- Date: Thu, 8 Jul 2021 06:56:01 GMT
- Title: Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function
- Authors: Johannes Lengler, Simone Riedi
- Abstract summary: We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen.
In particular, the hardest region for optimization is NOT around the optimum.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study evolutionary algorithms in a dynamic setting, where for each
generation a different fitness function is chosen, and selection is performed
with respect to the current fitness function. Specifically, we consider Dynamic
BinVal, in which the fitness function for each generation is given by the
linear function BinVal, but in each generation the order of bits is randomly
permuted. For the (1+1)-EA it was known that there is an efficiency threshold
$c_0$ for the mutation parameter, at which the runtime switches from
quasilinear to exponential. Previous empirical evidence suggested that for
larger population size $\mu$, the threshold may increase. We prove that this is
at least the case in an $\varepsilon$-neighborhood around the optimum: the
threshold of the $(\mu+1)$-EA becomes arbitrarily large if $\mu$ is chosen
large enough.
However, the most surprising result is obtained by a second order analysis
for $\mu=2$: the threshold increases with increasing proximity to the optimum.
In particular, the hardest region for optimization is NOT around the optimum.
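As a concrete illustration of the setting above, here is a minimal Python sketch of Dynamic BinVal and the (mu+1)-EA, assuming the standard definitions (weight $2^j$ on the bit at permuted position $j$, standard bit mutation with rate $c/n$); the function and parameter names are ours for illustration, not from the paper.

```python
import random

def dynamic_binval(x, perm):
    # BinVal under a permutation: the bit at position perm[j] gets weight 2**j
    return sum((1 << j) * x[perm[j]] for j in range(len(x)))

def mu_plus_one_ea(n, mu, c, max_gens, rng):
    # (mu+1)-EA: each generation a fresh random permutation defines the
    # fitness, one uniform parent is mutated with rate c/n, and the worst of
    # the mu+1 individuals under the *current* fitness is discarded.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_gens):
        perm = list(range(n))
        rng.shuffle(perm)                      # new fitness this generation
        parent = rng.choice(pop)
        child = [b ^ (rng.random() < c / n) for b in parent]
        pop.append(child)
        pop.sort(key=lambda y: dynamic_binval(y, perm))
        pop.pop(0)                             # drop the worst
        if any(sum(y) == n for y in pop):
            break                              # optimum (all-ones) reached
    return max(pop, key=sum)
```

Note that because the fitness function changes every generation, the sort key changes too, so progress measured in one-bits is not monotone; this is exactly what makes the dynamic setting harder than static BinVal.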
Related papers
- Hardest Monotone Functions for Evolutionary Algorithms [0.0]
We conjecture that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize.
We introduce the function Switching Dynamic BinVal (SDBV) which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than $n/2$.
arXiv Detail & Related papers (2023-11-13T16:13:30Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - On Convergence of Incremental Gradient for Non-Convex Smooth Functions [63.51187646914962]
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and make good use of the cache.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
arXiv Detail & Related papers (2023-05-30T17:47:27Z) - Fourier Analysis Meets Runtime Analysis: Precise Runtimes on Plateaus [9.853329403413701]
We propose a new method based on discrete Fourier analysis to analyze the time evolutionary algorithms spend on plateaus.
We also use this method to analyze the runtime of the $(1+1)$ evolutionary algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size $2^\ell-1$.
arXiv Detail & Related papers (2023-02-16T01:46:06Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - An Extended Jump Function Benchmark for the Analysis of Randomized
Search Heuristics [4.38301148531795]
We propose an extended class $\textsc{Jump}_{k,\delta}$ of jump functions that contain a valley of low fitness of width $\delta$.
We show that the new class allows experiments with wider fitness valleys, especially when they lie further away from the global optimum.
arXiv Detail & Related papers (2021-05-07T07:21:10Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Our results not only provide a lower bound against which we can measure common evolutionary approaches.
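The $(1+\lambda)$-EA on OneMax studied in this entry admits a compact sketch; the following is an illustrative implementation under standard assumptions (elitist selection, standard bit mutation with rate $p$), with names of our choosing rather than the paper's.

```python
import random

def one_max(x):
    # OneMax: the number of one-bits in the search point
    return sum(x)

def one_plus_lambda_ea(n, lam, p, max_gens, rng):
    # Elitist (1+lambda)-EA: lam offspring per generation, each bit of the
    # parent flipped independently with probability p; the parent survives
    # unless some offspring is at least as good.
    parent = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(max_gens):
        if one_max(parent) == n:
            return parent, gen                 # optimum found
        offspring = [[b ^ (rng.random() < p) for b in parent]
                     for _ in range(lam)]
        best = max(offspring, key=one_max)
        if one_max(best) >= one_max(parent):
            parent = best
    return parent, max_gens
```

Sweeping the mutation rate $p$ in such a harness and averaging the generation counts is a simple empirical analogue of the runtime minimization the paper carries out analytically.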
arXiv Detail & Related papers (2020-06-20T01:23:14Z) - Large Population Sizes and Crossover Help in Dynamic Environments [0.0]
We study the effect of larger population sizes for Dynamic BinVal, the extremal form of dynamic linear functions.
We find that moderately increased population sizes extend the range of efficient algorithm configurations.
arXiv Detail & Related papers (2020-04-21T12:26:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.