The Univariate Marginal Distribution Algorithm Copes Well With Deception
and Epistasis
- URL: http://arxiv.org/abs/2007.08277v2
- Date: Fri, 27 Nov 2020 09:19:31 GMT
- Title: The Univariate Marginal Distribution Algorithm Copes Well With Deception
and Epistasis
- Authors: Benjamin Doerr and Martin S. Krejca
- Abstract summary: We show that the negative finding is caused by an unfortunate choice of the parameters of the UMDA.
Our result shows that the UMDA can cope better with local optima than evolutionary algorithms.
- Score: 9.853329403413701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate
marginal distribution algorithm (UMDA) needs time exponential in the parent
population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They
conclude from this result that univariate EDAs have difficulties with deception
and epistasis.
In this work, we show that this negative finding is caused by an unfortunate
choice of the parameters of the UMDA. When the population sizes are chosen
large enough to prevent genetic drift, then the UMDA optimizes the DLB problem
with high probability with at most $\lambda(\frac{n}{2} + 2 e \ln n)$ fitness
evaluations. Since an offspring population size $\lambda$ of order $n \log n$
can prevent genetic drift, the UMDA can solve the DLB problem with $O(n^2 \log
n)$ fitness evaluations. In contrast, for classic evolutionary algorithms no
better run time guarantee than $O(n^3)$ is known (which we prove to be tight
for the ${(1+1)}$ EA), so our result rather suggests that the UMDA can cope
well with deception and epistasis.
From a broader perspective, our result shows that the UMDA can cope better
with local optima than evolutionary algorithms; such a result was previously
known only for the compact genetic algorithm. Together with the lower bound of
Lehre and Nguyen, our result for the first time rigorously proves that running
EDAs in the regime with genetic drift can lead to drastic performance losses.
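The abstract's setting can be sketched in code. Below is a minimal, illustrative UMDA together with a common formulation of DeceptiveLeadingBlocks (blocks of two bits; each leading 11-block scores 2, and the first non-11 block scores 1 only if it is the deceptive 00); the selection ratio, border values, and generation cap are assumptions for illustration, not the paper's exact parameterization.

```python
import random

def dlb(x):
    """DeceptiveLeadingBlocks (common formulation): scan blocks of two bits;
    each leading 11-block scores 2; the first non-11 block scores 1 if it is
    00 (the deceptive local optimum) and 0 otherwise."""
    f = 0
    for i in range(0, len(x), 2):
        if x[i] == 1 and x[i + 1] == 1:
            f += 2
        else:
            if x[i] == 0 and x[i + 1] == 0:
                f += 1
            break
    return f

def umda(n, lam, mu, fitness, max_gens=1000, rng=random):
    """Minimal UMDA sketch: sample lam offspring from the product
    distribution, select the mu fittest, and set each marginal to the
    selected frequency, clamped to [1/n, 1 - 1/n] (the usual borders
    against fixation)."""
    p = [0.5] * n
    for gen in range(max_gens):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:   # global optimum: all blocks are 11
            return gen
        sel = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in sel) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return None
```

Consistent with the abstract's message, choosing `lam` large (on the order of $n \log n$) keeps the marginals close to their selection-driven trajectory, while a too-small `lam` lets genetic drift push marginals toward the borders and stall the run.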
Related papers
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic
Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1+(\lambda,\lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z) - Estimation-of-Distribution Algorithms for Multi-Valued Decision
Variables [10.165640083594573]
We extend the known quantitative analysis of genetic drift to estimation-of-distribution algorithms for multi-valued variables.
Our work shows that our good understanding of binary EDAs naturally extends to the multi-valued setting.
arXiv Detail & Related papers (2023-02-28T08:52:40Z) - Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement
Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of the noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K} \sigma_k^2} + d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z) - Causal Bandits for Linear Structural Equation Models [58.2875460517691]
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model.
It is assumed that the graph's structure is known and has $N$ nodes.
Two algorithms are proposed for the frequentist (UCB-based) and Bayesian settings.
arXiv Detail & Related papers (2022-08-26T16:21:31Z) - From Understanding Genetic Drift to a Smart-Restart Mechanism for
Estimation-of-Distribution Algorithms [16.904475483445452]
We develop a smart-restart mechanism for estimation-of-distribution algorithms (EDAs).
By stopping runs when the risk for genetic drift is high, it automatically runs the EDA in good parameter regimes.
We show that the smart-restart mechanism finds much better values for the population size than those suggested in the literature.
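The smart-restart idea summarized above can be sketched as a wrapper around any EDA runner; the doubling schedule, initial population size, and per-run budget proportional to the squared population size are illustrative assumptions, not the paper's exact parameters.

```python
def smart_restart(run_eda, lam0=16, budget_const=8, max_lam=1 << 20):
    """Illustrative smart-restart wrapper (assumed schedule): run the EDA
    with population size lam for a budget of budget_const * lam**2
    generations (a timescale beyond which genetic drift becomes likely),
    and if the run did not finish, restart with a doubled population."""
    lam = lam0
    while lam <= max_lam:
        result = run_eda(lam, max_gens=budget_const * lam * lam)
        if result is not None:   # the run found the optimum within budget
            return lam, result
        lam *= 2                 # drift risk too high: try a larger population
    return None
```

Paired with an EDA runner that returns `None` on budget exhaustion, this automatically homes in on a population size large enough to avoid the drift regime without requiring it as an input.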
arXiv Detail & Related papers (2022-06-18T02:46:52Z) - Computationally Efficient Horizon-Free Reinforcement Learning for Linear
Mixture MDPs [111.75736569611159]
We propose the first computationally efficient horizon-free algorithm for linear mixture MDPs.
Our algorithm adapts a weighted least square estimator for the unknown transitional dynamic.
This also improves upon the best-known algorithms in this setting when the $\sigma_k^2$'s are known.
arXiv Detail & Related papers (2022-05-23T17:59:18Z) - Provably Breaking the Quadratic Error Compounding Barrier in Imitation
Learning, Optimally [58.463668865380946]
We study the statistical limits of Imitation Learning in episodic Markov Decision Processes (MDPs) with a state space $\mathcal{S}$.
We establish an upper bound of $O(|\mathcal{S}|H^{3/2}/N)$ on the suboptimality using the Mimic-MD algorithm of Rajaraman et al. (2020).
We show that the minimax suboptimality grows as $\Omega(H^{3/2}/N)$ when $|\mathcal{S}| \geq 3$, while the unknown-transition setting suffers from a larger sharp rate.
arXiv Detail & Related papers (2021-02-25T15:50:19Z) - Theoretical Analyses of Multiobjective Evolutionary Algorithms on
Multimodal Objectives [15.56430085052365]
The OJZJ problem is a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark.
We prove that, with probability one, SEMO does not compute the full Pareto front, regardless of the runtime.
We also show the tighter bound $\frac{3}{2} e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms.
arXiv Detail & Related papers (2020-12-14T03:07:39Z) - From Understanding Genetic Drift to a Smart-Restart Parameter-less
Compact Genetic Algorithm [15.56430085052365]
In the regime with no genetic drift, the runtime is roughly proportional to the population size.
We propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size.
arXiv Detail & Related papers (2020-04-15T15:12:01Z) - A Simplified Run Time Analysis of the Univariate Marginal Distribution
Algorithm on LeadingOnes [9.853329403413701]
We prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA).
We show further run time gains from small selection rates.
Under similar assumptions, we prove that a bound that matches our upper bound up to constant factors holds with high probability.
arXiv Detail & Related papers (2020-04-10T10:20:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.