Large Population Sizes and Crossover Help in Dynamic Environments
- URL: http://arxiv.org/abs/2004.09949v1
- Date: Tue, 21 Apr 2020 12:26:39 GMT
- Title: Large Population Sizes and Crossover Help in Dynamic Environments
- Authors: Johannes Lengler, Jonas Meier
- Abstract summary: We study the effect of larger population sizes for Dynamic BinVal, the extremal form of dynamic linear functions.
We find that moderately increased population sizes extend the range of efficient algorithm configurations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic linear functions on the hypercube are functions which assign to each
bit a positive weight, but the weights change over time. Throughout
optimization, these functions maintain the same global optimum, and never have
deceptive local optima. Nevertheless, it was recently shown [Lengler, Schaller,
FOCI 2019] that the $(1+1)$-Evolutionary Algorithm needs exponential time to
find or approximate the optimum for some algorithm configurations. In this
paper, we study the effect of larger population sizes for Dynamic BinVal, the
extremal form of dynamic linear functions. We find that moderately increased
population sizes extend the range of efficient algorithm configurations, and
that crossover boosts this positive effect substantially. Remarkably, similar
to the static setting of monotone functions in [Lengler, Zou, FOGA 2019], the
hardest region of optimization for the $(\mu+1)$-EA is not close to the optimum, but
far away from it. In contrast, for the $(\mu+1)$-GA, the region around the
optimum is the hardest region in all studied cases.
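As a concrete illustration of the setup, here is a minimal Python sketch of Dynamic BinVal and one generation of a $(\mu+1)$-EA/GA on it. All names and constants are illustrative, and the crossover convention (uniform crossover with probability pc, followed by mutation) is one common choice, not necessarily the exact operator order analyzed in the paper; the mutation parameter c plays the role of the "algorithm configuration" knob.

```python
import random

def dynamic_binval(x, perm):
    """BinVal reweighted by this generation's random permutation:
    the bit at position perm[i] receives weight 2**(n-1-i)."""
    n = len(x)
    return sum(x[perm[i]] << (n - 1 - i) for i in range(n))

def make_offspring(pop, c, n, pc, rng):
    """Reproduction: with probability pc, uniform crossover of two random
    parents (the (mu+1)-GA case; pc = 0 gives the (mu+1)-EA), followed by
    standard bit mutation at rate c/n."""
    if rng.random() < pc:
        p1, p2 = rng.sample(pop, 2)
        child = [rng.choice((a, b)) for a, b in zip(p1, p2)]
    else:
        child = list(rng.choice(pop))
    return [b ^ (rng.random() < c / n) for b in child]

def run(n=50, mu=8, c=1.5, pc=0.5, max_gens=100_000, seed=0):
    """One run on Dynamic BinVal: a fresh permutation (i.e. a fresh dynamic
    linear function) is drawn each generation, and the worst of the mu+1
    individuals under the *current* function is discarded."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for gen in range(max_gens):
        perm = rng.sample(range(n), n)
        pop.append(make_offspring(pop, c, n, pc, rng))
        pop.sort(key=lambda x: dynamic_binval(x, perm))
        pop.pop(0)                      # remove the worst individual
        if any(sum(x) == n for x in pop):
            return gen                  # all-ones optimum found
    return max_gens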
Related papers
- Self-Adjusting Evolutionary Algorithms Are Slow on Multimodal Landscapes [0.0]
We show that the positive results do not extend to other types of local optima.
On the distorted OneMax benchmark, the self-adjusting $(1,\lambda)$-EA is slowed down just as elitist algorithms are, because self-adaptation prevents the algorithm from escaping local optima.
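For context, a minimal sketch of what "self-adjusting" means here: the offspring population size lambda is driven by a one-fifth success rule. This is shown on plain OneMax; the distorted benchmark and the paper's exact update rule are not reproduced, and all constants are illustrative.

```python
import random

def one_max(x):
    return sum(x)

def self_adjusting_one_lambda_ea(n, F=1.5, steps=10_000, rng=random.Random(1)):
    """Sketch of the self-adjusting (1,lambda)-EA: lambda shrinks by a factor
    F after an improving generation and grows by F**(1/4) otherwise."""
    x = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for _ in range(steps):
        offspring = [[b ^ (rng.random() < 1 / n) for b in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=one_max)
        improved = one_max(best) > one_max(x)
        x = best                        # comma selection: parent is replaced
        lam = lam / F if improved else lam * F ** 0.25
        lam = min(max(lam, 1.0), n)     # keep lambda in a sane range
        if one_max(x) == n:
            break
    return x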
arXiv Detail & Related papers (2024-04-18T10:01:08Z)
- Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
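The two-point template behind such zero-order projected gradient methods can be sketched as follows. This is only the basic version; the paper's estimator exploits higher-order smoothness via kernel smoothing, which is omitted, and all names are illustrative.

```python
import numpy as np

def zo_projected_gd(f, x0, radius, steps=200, lr=0.1, h=1e-3,
                    rng=np.random.default_rng(0)):
    """Zeroth-order projected gradient descent: the gradient is replaced by
    a two-point finite-difference estimate along a random unit direction,
    and each iterate is projected back onto a Euclidean ball."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g = (f(x + h * u) - f(x - h * u)) / (2 * h) * d * u  # gradient estimate
        x = x - lr * g
        norm = np.linalg.norm(x)
        if norm > radius:               # projection onto the feasible ball
            x *= radius / norm
    return x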
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
- On Convergence of Incremental Gradient for Non-Convex Smooth Functions [63.51187646914962]
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and make good use of the cache.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
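A minimal sketch of the shuffle SGD scheme the summary refers to, assuming the component gradients are available through a user-supplied grad_i(x, i) (a hypothetical callable, not an API from the paper):

```python
import numpy as np

def shuffle_sgd(grad_i, x0, n, epochs=10, lr=0.01,
                rng=np.random.default_rng(0)):
    """Shuffle SGD (random reshuffling): each epoch visits every sample
    exactly once in a freshly shuffled order, instead of sampling with
    replacement as in plain SGD."""
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        order = rng.permutation(n)      # one pass over the data per epoch
        for i in order:
            x = x - lr * grad_i(x, i)
    return x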
arXiv Detail & Related papers (2023-05-30T17:47:27Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving nonconvex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
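For orientation, plain stochastic gradient Langevin dynamics looks as follows; the paper's variants swap the naive stochastic gradient grad_est for a variance-reduced estimator, which is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

def sgld(grad_est, x0, steps=1000, eta=1e-3, beta=1.0,
         rng=np.random.default_rng(0)):
    """Stochastic gradient Langevin dynamics: a stochastic gradient step
    plus Gaussian noise scaled by the inverse temperature beta, which lets
    the iterates escape shallow local minima."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        noise = rng.standard_normal(x.size)
        x = x - eta * grad_est(x) + np.sqrt(2 * eta / beta) * noise
    return x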
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by Adaptive Discretization [21.859940486704264]
An algorithm such as GP-UCB has prohibitive computational complexity in this setting.
Ada-BKB is a no-regret algorithm for functions defined on continuous domains that addresses this problem.
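A rough sketch of the GP-UCB template that Ada-BKB accelerates, on a fixed discretization; Ada-BKB's adaptive discretization and GP sparsification are precisely what is not shown, and the kernel choice and constants here are illustrative.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between row-stacked points a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def gp_ucb(f, grid, rounds=20, beta=2.0, noise=1e-4,
           rng=np.random.default_rng(0)):
    """GP-UCB on a fixed (m, d) grid of candidates: query the point with
    the highest upper confidence bound mu + sqrt(beta) * sigma."""
    X, y = [], []
    for _ in range(rounds):
        if not X:
            idx = rng.integers(len(grid))   # first query is random
        else:
            Xa = np.array(X)
            K = rbf(Xa, Xa) + noise * np.eye(len(X))
            Kinv = np.linalg.inv(K)
            ks = rbf(grid, Xa)
            mu = ks @ Kinv @ np.array(y)    # posterior mean on the grid
            var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
            idx = int(np.argmax(mu + np.sqrt(beta) * np.maximum(var, 0.0)))
        X.append(grid[idx])
        y.append(f(grid[idx]))
    return X, y
```

For a 1-D domain, grid can be as simple as np.linspace(0, 1, 200)[:, None]; the cubic cost of the matrix inverse in the number of queries is exactly the bottleneck Ada-BKB is designed to avoid.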
arXiv Detail & Related papers (2021-06-16T07:55:45Z)
- Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
- Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function [0.0]
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen.
In particular, the hardest region for optimization is NOT around the optimum.
arXiv Detail & Related papers (2020-10-26T08:55:53Z)
- A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale stochastic approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
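The timescale separation at the heart of TTSA can be sketched as follows, assuming (stochastic) gradient oracles grad_x and grad_y for the upper and lower level; the concrete stepsize schedules are illustrative, not the paper's.

```python
import numpy as np

def ttsa(grad_x, grad_y, x0, y0, steps=5000, c_x=0.1, c_y=0.5):
    """Two-timescale stochastic approximation for bilevel problems: the
    lower-level variable y is updated on a faster timescale (stepsize
    decaying as k**-0.5) than the upper-level variable x (decaying as
    k**-1), so y approximately tracks the lower-level solution y*(x)."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for k in range(1, steps + 1):
        y = y - c_y * k ** -0.5 * grad_y(x, y)   # fast: inner problem
        x = x - c_x / k * grad_x(x, y)           # slow: outer problem
    return x, y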
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
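As a hedged sketch of the kind of algorithm involved, noisy clipped SGD is the standard template for private convex optimization; the calibration of the noise scale sigma to a target (eps, delta) privacy budget, and the paper's techniques for hitting the $O(\min\{n, n^2/d\})$ gradient budget, are omitted.

```python
import numpy as np

def dp_sgd(grad_i, x0, n, sigma, clip=1.0, steps=1000, lr=0.01,
           rng=np.random.default_rng(0)):
    """Noisy SGD template for private optimization: per-sample gradients
    (from a hypothetical grad_i(x, i)) are clipped to bound sensitivity,
    then perturbed with Gaussian noise before each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_i(x, rng.integers(n))
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)       # bound per-sample sensitivity
        x = x - lr * (g + sigma * rng.standard_normal(x.size))
    return x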
arXiv Detail & Related papers (2020-05-10T19:52:03Z)