Analysis of Evolutionary Algorithms on Fitness Function with
Time-linkage Property
- URL: http://arxiv.org/abs/2004.12304v4
- Date: Wed, 24 Feb 2021 01:13:45 GMT
- Title: Analysis of Evolutionary Algorithms on Fitness Function with
Time-linkage Property
- Authors: Weijie Zheng, Huanhuan Chen, Xin Yao
- Abstract summary: In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as the historical solutions.
This paper takes the first step to rigorously analyze evolutionary algorithms for time-linkage functions.
- Score: 27.660240128423176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world applications, many optimization problems have the time-linkage
property, that is, the objective function value relies on the current solution
as well as the historical solutions. Although rigorous theoretical analysis of
evolutionary algorithms has developed rapidly over the last two decades, it
remains an open problem to theoretically understand the behavior of
evolutionary algorithms on time-linkage problems. This paper takes the first
step towards rigorously analyzing evolutionary algorithms on time-linkage
functions. Based on the basic OneMax function, we propose a time-linkage
function in which the first bit value of the last time step enters the
objective, but with a preference different from that for the current first bit.
We prove that with probability $1-o(1)$, randomized local search and the
$(1+1)$ EA cannot find the optimum, whereas with probability $1-o(1)$, the
$(\mu+1)$ EA is able to reach the optimum.
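For intuition, the following is a minimal sketch of such a time-linkage objective. The specific weighting below (penalizing the previous step's first bit with weight $n$) is an illustrative assumption and not necessarily the exact function defined in the paper.

```python
def time_linkage_onemax(x_prev, x_curr):
    """Illustrative time-linkage OneMax variant (assumed form): the current
    solution is rewarded by its number of one-bits, while the first bit of the
    previous solution is penalized with weight n, so the historical first bit
    prefers 0 even though the current OneMax part prefers 1 in every position."""
    n = len(x_curr)
    return sum(x_curr) - n * x_prev[0]

# The penalty from the historical first bit can cancel the whole OneMax gain:
print(time_linkage_onemax([1, 0, 0, 0], [1, 1, 1, 1]))  # 4 - 4 = 0
print(time_linkage_onemax([0, 0, 0, 0], [1, 1, 1, 1]))  # 4 - 0 = 4
```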
Related papers
- Analyzing the Runtime of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) on the Concatenated Trap Function [2.038038953957366]
GOMEA is an evolutionary algorithm that leverages linkage learning to efficiently exploit problem structure.
We show that GOMEA can solve the problem in $O(m^3 2^k)$ with high probability, where $m$ is the number of subfunctions and $k$ is the subfunction length.
This is a significant speedup compared to the (1+1) Evolutionary Algorithm, which requires $O(\ln(m)(mk)^k)$ expected evaluations.
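For reference, a minimal sketch of a concatenated trap fitness under the standard formulation ($m$ deceptive $k$-bit blocks, each rewarding only the all-ones block and otherwise pointing towards all-zeros); the exact trap variant analysed in the paper may differ.

```python
def concatenated_trap(x, k):
    """Concatenated deceptive trap (assumed standard form): the n = m*k bits are
    split into m blocks of length k; a block scores k if all of its bits are 1,
    and k - 1 - (number of ones) otherwise, which misleads hill climbers towards
    all-zeros blocks."""
    assert len(x) % k == 0, "length must be a multiple of the block size"
    total = 0
    for i in range(0, len(x), k):
        ones = sum(x[i:i + k])
        total += k if ones == k else k - 1 - ones
    return total

# Example with m = 2 blocks of length k = 3:
print(concatenated_trap([1, 1, 1, 0, 0, 0], k=3))  # 3 + 2 = 5
print(concatenated_trap([1, 1, 1, 1, 1, 1], k=3))  # 6, the global optimum
```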
arXiv Detail & Related papers (2024-07-11T09:37:21Z) - Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Time Complexity Analysis of Evolutionary Algorithms for 2-Hop
(1,2)-Minimum Spanning Tree Problem [20.624629608537894]
We consider a constrained version of the 2-Hop (1,2)-Minimum Spanning Tree problem (abbr. MSTP) in the context of evolutionary algorithms; the problem has been shown to be NP-hard.
Inspired by the special structure of 2-hop spanning trees, we also consider the (1+1) EA with search points in an edge-based representation, which seems less natural for the problem, and give an upper bound on its expected time to obtain a $\frac{3}{2}$-approximate solution.
arXiv Detail & Related papers (2021-10-10T04:35:02Z) - When Non-Elitism Meets Time-Linkage Problems [19.798298260257432]
We analyze the influence of non-elitism by comparing the performance of the elitist $(1+\lambda)$ EA and its non-elitist counterpart, the $(1,\lambda)$ EA.
We prove that with probability $1$, the $(1,\lambda)$ EA can reach the global optimum, and its expected runtime is $O(n^{3+c}\log n)$ with $\lambda = c \log_{\frac{e}{e-1}} n$ for any constant $c \ge 1$.
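As a reminder of the algorithmic difference, below is a minimal sketch of the non-elitist $(1,\lambda)$ EA (standard scheme): each generation produces $\lambda$ offspring by standard bit mutation, and the best offspring replaces the parent unconditionally, unlike the elitist $(1+\lambda)$ EA, which keeps the parent whenever it is better.

```python
import random

def one_comma_lambda_ea(fitness, n, lam, max_evals):
    """Non-elitist (1,lambda) EA sketch: flip each bit of the parent
    independently with probability 1/n to create lam offspring, then let the
    best offspring replace the parent even if it is worse (comma selection)."""
    parent = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while evals < max_evals:
        offspring = []
        for _ in range(lam):
            child = [1 - b if random.random() < 1 / n else b for b in parent]
            offspring.append(child)
            evals += 1
        parent = max(offspring, key=fitness)  # parent is discarded regardless
    return parent

# Example usage on OneMax (fitness = number of one-bits):
print(sum(one_comma_lambda_ea(fitness=sum, n=20, lam=8, max_evals=5000)))
```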
arXiv Detail & Related papers (2021-04-14T13:03:42Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Improved Runtime Results for Simple Randomised Search Heuristics on
Linear Functions with a Uniform Constraint [11.228244128564512]
We study the class of linear functions under uniform constraint and investigate the expected optimisation time of Randomised Local Search (RLS).
We prove a tight bound of $\Theta(n^2)$ for RLS and improve the previously best known upper bound for the (1+1) EA.
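For reference, a minimal sketch of Randomised Local Search (RLS) maximising a linear function under a uniform (cardinality) constraint; the constraint handling below, which simply rejects infeasible offspring, and the all-zeros start are illustrative assumptions rather than the exact setting analysed in the paper.

```python
import random

def rls_uniform_constraint(weights, budget, max_iters=10_000):
    """RLS sketch for maximising sum(w_i * x_i) subject to sum(x_i) <= budget.
    Each iteration flips exactly one uniformly chosen bit and accepts the move
    if the result is feasible and not worse than the current solution."""
    n = len(weights)
    x = [0] * n  # feasible starting point (assumed)
    value = 0
    for _ in range(max_iters):
        i = random.randrange(n)
        y = x[:]
        y[i] = 1 - y[i]
        if sum(y) > budget:
            continue  # reject infeasible offspring (assumed handling)
        new_value = sum(w * b for w, b in zip(weights, y))
        if new_value >= value:
            x, value = y, new_value
    return x, value

# Example: select at most 3 of 6 items.
print(rls_uniform_constraint([5, 1, 4, 2, 8, 3], budget=3))
```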
arXiv Detail & Related papers (2020-10-21T10:42:39Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - First Steps Towards a Runtime Analysis When Starting With a Good
Solution [8.34061303235504]
In practical applications it may be possible to guess solutions that are better than random ones.
We show that different algorithms profit to a very different degree from a better initial solution.
This could suggest that evolutionary algorithms better exploiting good initial solutions are still to be found.
arXiv Detail & Related papers (2020-06-22T11:46:42Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
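For context, the Moreau envelope referred to above is the standard smoothing of a (possibly nonsmooth) weakly convex function $\varphi$ with parameter $\lambda > 0$:

$$\varphi_{\lambda}(x) \;=\; \min_{y}\Big\{\varphi(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2}\Big\},$$

and a small $\|\nabla \varphi_{\lambda}(x)\|$ certifies that $x$ is close to a point that is nearly stationary for $\varphi$, which is why this gradient norm is used as the stationarity measure for this problem class.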
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to alleviate this cold start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space gets expanded around the high probability region of the function optimum and shrinks around the low probability regions of the optimum.
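As a rough illustration, one way such a warping can be realised in one dimension is to map the unwarped unit interval through the quantile function of the prior over the optimum's location; the Gaussian prior and this particular construction are assumptions for illustration, not necessarily the transformation used in the paper.

```python
from scipy.stats import norm

def warp(u, prior_mean=0.7, prior_std=0.1):
    """Map a point u in the unwarped unit interval to the original search space
    via the inverse CDF of a Gaussian prior over the optimum's location. Equal
    intervals of u then cover little of the original space where the prior is
    high and much of it where the prior is low, i.e. the warped space is
    expanded around the prior's high-probability region and shrunk elsewhere."""
    return norm.ppf(u, loc=prior_mean, scale=prior_std)

# Uniformly spread warped points concentrate near the assumed prior mean 0.7:
print([round(warp(u), 3) for u in (0.05, 0.25, 0.5, 0.75, 0.95)])
```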
arXiv Detail & Related papers (2020-03-27T06:18:49Z) - A new regret analysis for Adam-type algorithms [78.825194932103]
In theory, regret guarantees for online convex optimization require a rapidly decaying $\beta_1 \to 0$ schedule.
We propose a novel framework that allows us to derive optimal, data-dependent regret bounds with a constant $\beta_1$.
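For context, a minimal sketch of a generic Adam-type update showing where the first-moment parameter $\beta_1$ enters; this is the textbook Adam-style step, included only to make the role of a constant versus decaying $\beta_1$ concrete.

```python
def adam_type_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic Adam-type update on a scalar parameter. beta1 controls the
    exponential moving average of past gradients; the result above concerns
    keeping beta1 constant instead of decaying it towards 0 over time."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# Example: a single step at t = 1 on a scalar parameter.
print(adam_type_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1))
```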
arXiv Detail & Related papers (2020-03-21T19:19:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.