Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the
Mutation Rate of an Evolutionary Algorithm
- URL: http://arxiv.org/abs/2006.11026v1
- Date: Fri, 19 Jun 2020 09:12:49 GMT
- Title: Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the
Mutation Rate of an Evolutionary Algorithm
- Authors: Arina Buzdalova, Carola Doerr, Anna Rodionova
- Abstract summary: We introduce in this work a new hybrid parameter control technique, which combines the well-known one-fifth success rule with Q-learning.
We demonstrate that our HQL mechanism achieves equal or superior performance to all techniques tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous parameter control methods -- simultaneously for all offspring population sizes $\lambda$.
- Score: 0.9281671380673306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well known that evolutionary algorithms (EAs) achieve peak performance
only when their parameters are suitably tuned to the given problem. Even more,
it is known that the best parameter values can change during the optimization
process. Parameter control mechanisms are techniques developed to identify and
to track these values.
Recently, a series of rigorous theoretical works confirmed the superiority of
several parameter control techniques over EAs with best possible static
parameters. Among these results are examples for controlling the mutation rate
of the $(1+\lambda)$~EA when optimizing the OneMax problem. However, it was
shown in [Rodionova et al., GECCO'19] that the quality of these techniques
strongly depends on the offspring population size $\lambda$.
We introduce in this work a new hybrid parameter control technique, which
combines the well-known one-fifth success rule with Q-learning. We demonstrate
that our HQL mechanism achieves equal or superior performance to all techniques
tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous
parameter control methods -- simultaneously for all offspring population sizes
$\lambda$. We also show that the promising performance of HQL is not restricted
to OneMax, but extends to several other benchmark problems.
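To make the idea concrete, the following is a minimal Python sketch of a (1+lambda) EA on OneMax whose mutation rate is halved or doubled each generation, with the choice made by Q-learning and the reward derived from a success signal in the spirit of the one-fifth rule. The state encoding, reward values, and learning parameters are illustrative assumptions, not the authors' exact HQL mechanism.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def mutate(x, p):
    """Standard bit mutation: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in x]

def hql_one_plus_lambda_ea(n=100, lam=8, alpha=0.8, gamma=0.2, eps=0.1, budget=50_000):
    """Sketch of a (1+lambda) EA whose mutation rate is halved or doubled each
    generation; the action is picked by Q-learning and the reward is a
    success-based signal in the spirit of the one-fifth success rule."""
    parent = [random.randint(0, 1) for _ in range(n)]
    p = 1.0 / n                     # current mutation rate
    q = {}                          # Q-table: (state, action) -> value
    actions = (0.5, 2.0)            # halve or double the rate
    state = 0                       # illustrative state: was the last generation successful?
    evals = 0
    while onemax(parent) < n and evals < budget:
        # epsilon-greedy action selection over the two rate updates
        known = [q.get((state, a), 0.0) for a in actions]
        if random.random() < eps or all(v == 0.0 for v in known):
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q.get((state, act), 0.0))
        p = min(0.5, max(1.0 / n ** 2, p * a))
        offspring = [mutate(parent, p) for _ in range(lam)]
        evals += lam
        best = max(offspring, key=onemax)
        gain = onemax(best) - onemax(parent)
        # reward: +1 on strict improvement, -1/4 otherwise, mimicking the
        # 1:4 balance behind the one-fifth success rule
        reward = 1.0 if gain > 0 else -0.25
        next_state = 1 if gain > 0 else 0
        old = q.get((state, a), 0.0)
        best_next = max(q.get((next_state, b), 0.0) for b in actions)
        q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
        if gain >= 0:               # elitist (1+lambda) selection
            parent = best
        state = next_state
    return evals
```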
Related papers
- Towards Generalized Parameter Tuning in Coherent Ising Machines: A Portfolio-Based Approach [0.0]
Coherent Ising Machines (CIMs) have recently gained attention as a promising computing model for solving optimization problems.
We present an algorithm portfolio approach for hyperparameter tuning in CIMs employing the Chaotic Amplitude Control with momentum (CACm) algorithm.
arXiv Detail & Related papers (2025-07-27T14:18:54Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts.
Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
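As a rough illustration of the mechanism summarized above, the hedged Python sketch below aggregates several low-rank "expert" factor pairs through a Kronecker product with a small shared factor and merges the result into a frozen weight matrix; the shapes, the shared factor, and the aggregation scheme are simplifying assumptions, not ALoRE's actual parameterization.

```python
import numpy as np

def alore_style_merge(W_frozen, A, experts):
    """Illustrative adapter in the spirit of ALoRE: several low-rank 'experts'
    (U @ V) are aggregated through a Kronecker product with a small shared
    factor A, and the resulting update is merged into the frozen weight, so
    inference needs no extra parameters."""
    delta = np.zeros_like(W_frozen)
    for U, V in experts:
        delta += np.kron(A, U @ V)   # Kronecker-structured low-rank contribution
    return W_frozen + delta

# toy usage with made-up shapes (d_out = d_in = 8, block size k = 2, rank r = 1)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
A = rng.normal(size=(2, 2))
experts = [(rng.normal(size=(4, 1)), rng.normal(size=(1, 4))) for _ in range(3)]
W_merged = alore_style_merge(W, A, experts)
```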
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, it still remains an open problem of how to choose the tunable parameters.
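A minimal sketch of the sparse fine-tuning view described above, assuming a hypothetical setup in which a fixed binary mask selects the tunable parameters; how to choose that mask is exactly the open problem the summary mentions.

```python
import numpy as np

def sparse_finetune_step(params, grads, mask, lr=1e-3):
    """One SGD step that updates only the parameters selected by a binary
    mask; everything outside the mask stays frozen, which is the 'sparse
    fine-tuned model' view described above."""
    return params - lr * (grads * mask)

# illustrative usage: tune only about 1% of the parameters
rng = np.random.default_rng(0)
params = rng.normal(size=10_000)
grads = rng.normal(size=10_000)
mask = (rng.random(10_000) < 0.01).astype(params.dtype)
params = sparse_finetune_step(params, grads, mask)
```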
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - A Parameter Setting Heuristic for the Quantum Alternating Operator
Ansatz [0.0]
We introduce a strategy for parameter setting suitable for common cases in which the number of distinct cost values grows only polynomially with the problem size.
We define a Classical Homogeneous Proxy for QAOA in which Perfect Homogeneity holds exactly, and which yields information describing both states and expectation values.
For up to $3$ QAOA levels we are easily able to find parameters that match approximation ratios returned by previous globally optimized approaches.
arXiv Detail & Related papers (2022-11-17T00:18:06Z) - META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for
Unbounded Functions [23.746620619512573]
Recent work overcomes the effect of having to compute gradients of "megabatches".
The resulting fully-adaptive methods are evaluated on competitive deep learning tasks.
arXiv Detail & Related papers (2022-09-29T15:12:54Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
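The following Python sketch shows a STORM-style variance-reduced momentum loop with placeholder adaptive choices of the momentum and step-size parameters; it illustrates the recursive estimator referred to above, not the exact STORM+ schedules.

```python
import numpy as np

def storm_style_optimize(grad, x0, steps=1000, c=1.0, eta0=0.1):
    """STORM-style variance-reduced momentum loop.  `grad(x, seed)` is assumed
    to return a stochastic gradient at x for the sample indexed by `seed`, so
    the same sample can be evaluated at the current and the previous iterate,
    which is the key ingredient of the recursive estimator.  The adaptive
    choices of the momentum a and step size eta below are placeholders, not
    the exact STORM+ schedules."""
    x_prev = np.asarray(x0, dtype=float)
    d = grad(x_prev, seed=0)                 # initial estimator: plain stochastic gradient
    g_sum = float(np.dot(d, d))              # running sum of squared gradient norms
    x = x_prev - eta0 * d
    for t in range(1, steps):
        g_new = grad(x, seed=t)              # fresh sample at the new iterate
        g_old = grad(x_prev, seed=t)         # same sample at the previous iterate
        g_sum += float(np.dot(g_new, g_new))
        a = min(1.0, c / (1.0 + g_sum))      # illustrative adaptive momentum weight
        d = g_new + (1.0 - a) * (d - g_old)  # recursive variance-reduced estimator
        eta = eta0 / (1.0 + g_sum) ** (1.0 / 3.0)  # illustrative adaptive step size
        x_prev, x = x, x - eta * d
    return x
```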
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - Online Hyperparameter Meta-Learning with Hypergradient Distillation [59.973770725729636]
Gradient-based meta-learning methods assume a set of parameters that do not participate in the inner optimization.
We propose a novel HO method that can overcome these limitations, by approximating the second-order term with knowledge distillation.
arXiv Detail & Related papers (2021-10-06T05:14:53Z) - Lazy Parameter Tuning and Control: Choosing All Parameters Randomly From
a Power-Law Distribution [8.34061303235504]
Most evolutionary algorithms have multiple parameters and their values drastically affect the performance.
We propose a lazy but effective solution, namely choosing all parameter values in each iteration randomly from a suitably scaled power-law distribution.
We prove a performance guarantee that is comparable to the best performance known for static parameters.
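A minimal sketch of the lazy control idea, assuming the controlled parameter is the number of bits flipped per iteration and that it is re-drawn every iteration from a power-law distribution truncated at n/2; the exponent beta = 1.5 is an illustrative choice, not the value analysed in the paper.

```python
import random

def power_law_sample(upper, beta=1.5):
    """Draw an integer from {1, ..., upper} with probability proportional to
    i**(-beta), i.e. from a truncated power-law distribution."""
    weights = [i ** (-beta) for i in range(1, upper + 1)]
    return random.choices(range(1, upper + 1), weights=weights, k=1)[0]

def lazy_controlled_mutation(x, beta=1.5):
    """Each iteration independently re-draws the controlled parameter (here:
    the number of bits to flip) from the power-law distribution, instead of
    tuning it statically or adapting it with feedback."""
    n = len(x)
    k = power_law_sample(max(1, n // 2), beta)
    y = x[:]
    for i in random.sample(range(n), k):
        y[i] = 1 - y[i]
    return y
```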
arXiv Detail & Related papers (2021-04-14T09:17:18Z) - Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms:
Why Success Rates Matter [0.0]
Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms can outperform the best static parameters in EAs on discrete problems.
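As a hedged illustration of such a self-adjusting mechanism, the snippet below shrinks the offspring population size after a successful generation and grows it slowly otherwise, targeting a success rate of roughly 1/(s+1); the update factors are illustrative defaults, not the exact rule analysed in the paper.

```python
def adjust_lambda(lam, success, F=1.5, s=4, lam_min=1, lam_max=10_000):
    """Success-rate-based self-adjustment of the offspring population size:
    shrink lambda after a successful generation, grow it slowly otherwise, so
    that a success rate of roughly 1/(s+1) is maintained.  F and s are
    illustrative values; the cited paper shows that the implied success rate
    is crucial for non-elitist EAs."""
    lam = lam / F if success else lam * F ** (1.0 / s)
    return min(lam_max, max(lam_min, lam))
```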
arXiv Detail & Related papers (2021-04-12T16:44:54Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
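A minimal sketch of an averaged two-point zeroth-order hyper-gradient estimator of the kind described above; `val_loss` is assumed to run the inner training and return a validation loss, and both the estimator and the outer update are generic illustrations rather than HOZOG's exact formulation.

```python
import numpy as np

def zeroth_order_hypergradient(val_loss, hparams, mu=1e-2, num_samples=8, rng=None):
    """Averaged two-point zeroth-order estimate of the hyper-gradient.
    `val_loss(h)` is assumed to run the inner training with hyperparameters h
    and return the validation loss; the estimator perturbs h along random
    directions and uses finite differences, so no hyper-gradient needs to be
    computed analytically."""
    rng = rng if rng is not None else np.random.default_rng()
    h = np.asarray(hparams, dtype=float)
    est = np.zeros_like(h)
    for _ in range(num_samples):
        u = rng.normal(size=h.shape)
        est += (val_loss(h + mu * u) - val_loss(h - mu * u)) / (2.0 * mu) * u
    return est / num_samples

# illustrative outer update with the averaged estimate:
#   hparams = hparams - outer_lr * zeroth_order_hypergradient(val_loss, hparams)
```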
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Optimal Static Mutation Strength Distributions for the $(1+\lambda)$
Evolutionary Algorithm on OneMax [1.0965065178451106]
We show that, for large enough population sizes, such optimal distributions may be surprisingly complicated and counter-intuitive.
arXiv Detail & Related papers (2021-02-09T16:56:25Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)