Hard Problems are Easier for Success-based Parameter Control
- URL: http://arxiv.org/abs/2204.05817v1
- Date: Tue, 12 Apr 2022 13:56:29 GMT
- Title: Hard Problems are Easier for Success-based Parameter Control
- Authors: Mario Alejandro Hevia Fajardo and Dirk Sudholt
- Abstract summary: Self-adjusting parameters in evolutionary algorithms can match or outperform the best fixed parameters on discrete problems.
We show that self-adjustment works as intended in the absence of easy slopes.
We discuss implications for the everywhere hard function LeadingOnes and a new class OneMaxBlocks of everywhere hard functions with tunable difficulty.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works showed that simple success-based rules for self-adjusting
parameters in evolutionary algorithms (EAs) can match or outperform the best
fixed parameters on discrete problems. Non-elitism in a (1,$\lambda$) EA
combined with a self-adjusting offspring population size $\lambda$ outperforms
common EAs on the multimodal Cliff problem. However, it was shown that this
only holds if the success rate $s$ that governs self-adjustment is small
enough. Otherwise, even on OneMax, the self-adjusting (1,$\lambda$) EA
stagnates on an easy slope, where frequent successes drive down the offspring
population size. We show that self-adjustment works as intended in the absence
of easy slopes. We define everywhere hard functions, for which successes are
never easy to find, and show that the self-adjusting (1,$\lambda$) EA is robust
with respect to the choice of success rates $s$. We give a general
fitness-level upper bound on the number of evaluations and show that the
expected number of generations is at most $O(d + \log(1/p_{\min}))$ where $d$
is the number of non-optimal fitness values and $p_{\min}$ is the smallest
probability of finding an improvement from a non-optimal search point. We
discuss implications for the everywhere hard function LeadingOnes and a new
class OneMaxBlocks of everywhere hard functions with tunable difficulty.
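The success-based mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the update strength F, the default parameter values, and the function names are assumptions; the rule shown is the common one where a successful generation divides $\lambda$ by F and a failure multiplies it by $F^{1/s}$.

```python
import random

def onemax(x):
    """Number of one-bits; the classic benchmark with an easy slope."""
    return sum(x)

def self_adjusting_ea(fitness, n, s=1.0, F=1.5, max_evals=200_000, seed=0):
    """Sketch of a self-adjusting (1,lambda) EA with a success-based rule.

    After a generation that finds a strictly better offspring (a "success"),
    lambda is divided by F; otherwise it is multiplied by F**(1/s).
    Non-elitist comma selection: the parent is always replaced by the best
    offspring, even if it is worse. Returns (evaluations used, best fitness).
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    f_parent = fitness(parent)
    lam = 1.0
    evals = 1
    while evals < max_evals and f_parent < n:  # n is the optimum for OneMax
        offspring_best, f_best = None, float("-inf")
        for _ in range(max(1, round(lam))):
            # standard bit mutation: flip each bit independently with prob 1/n
            child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
            f_child = fitness(child)
            evals += 1
            if f_child > f_best:
                offspring_best, f_best = child, f_child
        success = f_best > f_parent
        lam = max(1.0, lam / F) if success else lam * F ** (1.0 / s)
        parent, f_parent = offspring_best, f_best  # comma selection: no elitism
    return evals, f_parent

evals, best = self_adjusting_ea(onemax, n=50, s=0.5)
```

With a small success rate such as s=0.5, frequent successes on the easy OneMax slope shrink $\lambda$ only moderately, which is the regime the abstract says works; large s is where stagnation can occur.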
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity [18.427215139020632]
We show that state-of-the-art results can be improved, including with multilevel Monte Carlo estimators.
We provide an analysis of inexact Halpern iterations whose inexactness requirements only need to hold with respect to a solution.
arXiv Detail & Related papers (2024-02-07T18:22:41Z)
- Agnostically Learning Multi-index Models with Queries [54.290489524576756]
We study the power of query access for the task of agnostic learning under the Gaussian distribution.
We show that query access gives significant runtime improvements over random examples for agnostically learning MIMs.
arXiv Detail & Related papers (2023-12-27T15:50:47Z)
- Hardest Monotone Functions for Evolutionary Algorithms [0.0]
We conjecture that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize.
We introduce the function Switching Dynamic BinVal (SDBV) which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than $n/2$.
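Dynamic BinVal, as commonly defined in this line of work, is a BinVal instance whose bit weights are re-shuffled every generation. A minimal sketch, with the function name ours; the adversarial variant ADBV would instead pick the permutation that is worst for the current search point:

```python
import random

def dynamic_binval(x, perm):
    """BinVal with weights 2^(n-1), ..., 2, 1 assigned to positions via
    perm, a permutation of range(n) that is re-drawn each generation."""
    n = len(x)
    return sum(2 ** (n - 1 - i) * x[perm[i]] for i in range(n))

rng = random.Random(1)
x = [1, 0, 1, 1]
perm = list(range(len(x)))
rng.shuffle(perm)          # a fresh permutation each generation
val = dynamic_binval(x, perm)
```

Note that the all-ones string is optimal under every permutation, so the dynamic re-weighting changes the ranking of intermediate search points only.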
arXiv Detail & Related papers (2023-11-13T16:13:30Z)
- Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1 + (\lambda, \lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, expressed as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Focused Jump-and-Repair Constraint Handling for Fixed-Parameter Tractable Graph Problems Closed Under Induced Subgraphs [3.495114525631289]
We investigate a (1+1)EA equipped with a tailored jump-and-repair operation that can be used to probabilistically repair infeasible offspring in graph problems.
arXiv Detail & Related papers (2022-03-25T19:36:02Z)
- Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter [0.0]
Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms can outperform the best static parameters in EAs on discrete problems.
arXiv Detail & Related papers (2021-04-12T16:44:54Z)
- Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Our results provide a lower bound against which common evolutionary approaches can be measured.
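The kind of enumeration described above can be sketched with a simple fitness-level estimate. This is an approximation, not the paper's exact computation: the helper name is ours, and the per-offspring improvement probability used here is the standard lower bound of flipping exactly one zero-bit and nothing else.

```python
def expected_generations_upper(n, lam, chi):
    """Fitness-level estimate for the (1+lambda) EA on OneMax with mutation
    rate chi/n. From a search point with k one-bits, a single offspring
    improves with probability at least (n-k) * (chi/n) * (1-chi/n)**(n-1)
    (flip exactly one zero-bit, keep everything else). A generation of lam
    offspring succeeds with probability 1 - (1-p1)**lam; summing reciprocals
    over all non-optimal fitness levels bounds the expected generations."""
    p = chi / n
    total = 0.0
    for k in range(n):                       # fitness levels 0 .. n-1
        p1 = min(1.0, (n - k) * p * (1 - p) ** (n - 1))
        total += 1.0 / (1.0 - (1.0 - p1) ** lam)
    return total

# Illustration: which rate chi/n minimizes the estimate for lam = 8, n = 100?
best_chi = min((c / 10 for c in range(5, 31)),
               key=lambda chi: expected_generations_upper(100, 8, chi))
```

Comparing such estimates across the grid of rates mirrors the enumeration in the summary, although the paper computes exact expected running times rather than fitness-level bounds.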
arXiv Detail & Related papers (2020-06-20T01:23:14Z)
- Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that these methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.