From Understanding Genetic Drift to a Smart-Restart Mechanism for
Estimation-of-Distribution Algorithms
- URL: http://arxiv.org/abs/2206.09090v5
- Date: Fri, 3 Nov 2023 13:36:56 GMT
- Title: From Understanding Genetic Drift to a Smart-Restart Mechanism for
Estimation-of-Distribution Algorithms
- Authors: Weijie Zheng, Benjamin Doerr
- Abstract summary: We develop a smart-restart mechanism for estimation-of-distribution algorithms (EDAs).
By stopping runs when the risk of genetic drift is high, it automatically runs the EDA in good parameter regimes.
We show that the smart-restart mechanism finds much better values for the population size than those suggested in the literature.
- Score: 16.904475483445452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation-of-distribution algorithms (EDAs) are optimization algorithms that
learn a distribution on the search space from which good solutions can be
sampled easily. A key parameter of most EDAs is the sample size (population
size). If the population size is too small, the update of the probabilistic
model builds on few samples, leading to the undesired effect of genetic drift.
Too large population sizes avoid genetic drift, but slow down the process.
Building on a recent quantitative analysis of how the population size leads
to genetic drift, we design a smart-restart mechanism for EDAs. By stopping
runs when the risk of genetic drift is high, it automatically runs the EDA in
good parameter regimes.
Via a mathematical runtime analysis, we prove a general performance guarantee
for this smart-restart scheme. This in particular shows that in many situations
where the optimal (problem-specific) parameter values are known, the restart
scheme automatically finds these, leading to asymptotically optimal performance.
We also conduct an extensive experimental analysis. On four classic benchmark
problems, we clearly observe the critical influence of the population size on
the performance, and we find that the smart-restart scheme leads to a
performance close to the one obtainable with optimal parameter values. Our
results also show that previous theory-based suggestions for the population size can be far from the optimal ones, leading to a performance
clearly inferior to the one obtained via the smart-restart scheme. We also
conduct experiments with PBIL (cross-entropy algorithm) on two combinatorial
optimization problems from the literature, the max-cut problem and the
bipartition problem. Again, we observe that the smart-restart mechanism finds
much better values for the population size than those suggested in the
literature, leading to a much better performance.
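The abstract describes the restart mechanism only informally, so the following is a minimal, hypothetical Python sketch of the idea: run the EDA (here a basic compact genetic algorithm on OneMax) only for as long as genetic drift is unlikely, and restart with a doubled population size otherwise. The quadratic drift budget c * mu**2, the doubling factor, and all constants are illustrative assumptions, not the paper's exact scheme.

```python
import random

def onemax(x):
    """Number of one-bits; maximized by the all-ones string."""
    return sum(x)

def cga(n, mu, budget):
    """Basic compact GA with hypothetical population size mu.

    Returns the number of fitness evaluations on success, or None if
    the iteration budget runs out before the optimum is sampled.
    """
    p = [0.5] * n  # frequency vector of the probabilistic model
    for it in range(budget):
        x = [int(random.random() < pi) for pi in p]
        y = [int(random.random() < pi) for pi in p]
        if onemax(x) < onemax(y):
            x, y = y, x  # make x the better of the two samples
        if onemax(x) == n:
            return 2 * (it + 1)  # two evaluations per iteration
        for i in range(n):
            if x[i] != y[i]:  # shift frequencies toward the winner
                p[i] += 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # boundary caps
    return None  # budget exhausted: genetic drift now considered too likely

def smart_restart_cga(n, c=4.0):
    """Smart-restart wrapper: cap each run at a drift-based budget
    (here hypothetically c * mu**2 iterations), then double mu."""
    mu, total_evals = 2, 0
    while True:
        budget = int(c * mu * mu)
        used = cga(n, mu, budget)
        if used is not None:
            return total_evals + used, mu
        total_evals += 2 * budget
        mu *= 2

if __name__ == "__main__":
    evals, mu = smart_restart_cga(50)
    print(f"optimum found with mu={mu} after {evals} evaluations")
```

Because the per-run budget grows geometrically, the total work spent on failed runs stays within a constant factor of the final successful run, which matches the intuition behind the general performance guarantee mentioned above.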
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Runtime Analysis of a Multi-Valued Compact Genetic Algorithm on Generalized OneMax [2.07180164747172]
We provide a first runtime analysis of the multi-valued compact genetic algorithm (r-cGA) on a generalized OneMax function.
We show that the r-cGA solves this r-valued OneMax problem efficiently.
Finally, we state a conjecture about the expected runtime of another variant of the multi-valued OneMax function.
arXiv Detail & Related papers (2024-04-17T10:40:12Z) - Improving genetic algorithms performance via deterministic population
shrinkage [9.334663477968027]
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter.
Results show several speed-severity combinations where SVPS-GA preserves solution quality while improving performance by reducing the number of evaluations needed for success; a hypothetical schedule of this kind is sketched after this list.
arXiv Detail & Related papers (2024-01-22T17:05:16Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-parameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Accelerating genetic optimization of nonlinear model predictive control
by learning optimal search space size [0.8057006406834467]
This paper proposes an approach to accelerate the genetic optimization of nonlinear model predictive control (NMPC) by learning the optimal search space size.
The proposed approach was evaluated on two nonlinear systems and compared with two other NMPC approaches.
arXiv Detail & Related papers (2023-05-14T08:10:49Z) - A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of gradient descent.
The biggest advantage of this approach is greater exploration around the current state before committing to a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer-valued signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network.
Our method requires far fewer communication rounds in theory.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - From Understanding Genetic Drift to a Smart-Restart Parameter-less
Compact Genetic Algorithm [15.56430085052365]
In the regime with no genetic drift, the runtime is roughly proportional to the population size.
We propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size.
arXiv Detail & Related papers (2020-04-15T15:12:01Z) - Optimistic Exploration even with a Pessimistic Initialisation [57.41327865257504]
Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL).
However, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values.
We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network; a hypothetical sketch of such an augmentation follows this list.
arXiv Detail & Related papers (2020-02-26T17:15:53Z)
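The last entry above proposes a count-based augmentation of pessimistically initialised Q-values; a minimal tabular sketch follows. The bonus form C / (N(s, a) + 1)**M, the greedy selection rule, and all constants are assumptions for illustration, not necessarily the paper's exact formulation.

```python
from collections import defaultdict

class CountOptimisticQ:
    """Q-values stay pessimistically (zero-)initialised; optimism is added
    only via a count-based bonus at action-selection time, separating the
    source of optimism from the learned value function."""

    def __init__(self, n_actions, c=1.0, m=0.5):
        self.q = defaultdict(float)  # pessimistic / neutral initialisation
        self.n = defaultdict(int)    # state-action visit counts
        self.n_actions, self.c, self.m = n_actions, c, m

    def bonus(self, s, a):
        # Hypothetical bonus: maximal for unvisited pairs, decaying with visits.
        return self.c / (self.n[(s, a)] + 1) ** self.m

    def select(self, s):
        # Greedy with respect to the optimistically augmented values.
        return max(range(self.n_actions),
                   key=lambda a: self.q[(s, a)] + self.bonus(s, a))

    def update(self, s, a, target, lr=0.1):
        self.n[(s, a)] += 1
        self.q[(s, a)] += lr * (target - self.q[(s, a)])

agent = CountOptimisticQ(n_actions=3)
a = agent.select("s0")            # unvisited actions carry the full bonus
agent.update("s0", a, target=1.0)
```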
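The deterministic population-shrinkage entry above ("Improving genetic algorithms performance via deterministic population shrinkage") specifies only that the schedule is configured by a speed and a severity parameter; the geometric schedule below is a purely hypothetical illustration of such a scheme.

```python
def svps_schedule(initial_size, min_size, speed, severity, generation):
    """Hypothetical deterministic shrinkage: every `speed` generations the
    population is divided by `severity`, never dropping below `min_size`."""
    shrinks = generation // speed
    return max(min_size, int(initial_size / severity ** shrinks))

# Example: start at 200 individuals, halve every 50 generations, floor at 20.
print([svps_schedule(200, 20, 50, 2, g) for g in (0, 49, 50, 100, 400)])
# -> [200, 200, 100, 50, 20]
```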