Improving genetic algorithms performance via deterministic population
shrinkage
- URL: http://arxiv.org/abs/2401.12121v1
- Date: Mon, 22 Jan 2024 17:05:16 GMT
- Title: Improving genetic algorithms performance via deterministic population
shrinkage
- Authors: Juan Luis Jiménez Laredo and Carlos Fernandes and Juan Julián
Merelo and Christian Gagné
- Abstract summary: This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter.
Results show several combinations of speed and severity for which SVPS-GA preserves solution quality while improving performance by reducing the number of evaluations needed for success.
- Score: 9.334663477968027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the intuition that the same population size is not needed throughout
the run of an Evolutionary Algorithm (EA), most EAs use a fixed population
size. This paper presents an empirical study on the possible benefits of a
Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic
Algorithms (GAs). It consists of decreasing the population size during a GA run
following a predetermined schedule, configured by a speed and a severity
parameter. The method uses as its initial population size an estimate of the
minimum size needed to supply enough building blocks, using a fixed-size
selectorecombinative GA converging within some confidence interval toward good
solutions for a particular problem. Following this methodology, a scalability
analysis is conducted on deceptive, quasi-deceptive, and non-deceptive trap
functions in order to assess whether SVPS-GA improves performance compared to
a fixed-size GA under different problem instances and difficulty levels.
Results show several combinations of speed and severity for which SVPS-GA
preserves solution quality while improving performance by reducing the number
of evaluations needed for success.
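As a rough illustration of the shrinkage idea (not the paper's exact SVPS schedule or its trap-function benchmarks), the sketch below runs a toy selectorecombinative GA on OneMax and shrinks the population each generation with a linear rule controlled by hypothetical speed and severity parameters:

```python
import random

def onemax(bits):
    """Toy fitness: number of ones in the bit string."""
    return sum(bits)

def svps_population_size(n0, gen, max_gen, speed=1.0, severity=0.5):
    # Hypothetical linear shrinkage: reduce the initial size n0 by up to
    # severity * n0, with `speed` controlling how quickly the reduction is applied.
    progress = min(1.0, speed * gen / max_gen)
    return max(2, int(round(n0 * (1.0 - severity * progress))))

def run_svps_ga(n_bits=50, n0=100, max_gen=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n0)]
    evaluations = len(pop)
    for gen in range(max_gen):
        target = svps_population_size(n0, gen, max_gen)
        offspring = []
        while len(offspring) < target:
            # Binary tournament selection and uniform crossover (selectorecombinative GA).
            a, b = (max(rng.sample(pop, 2), key=onemax) for _ in range(2))
            offspring.append([x if rng.random() < 0.5 else y for x, y in zip(a, b)])
        pop = offspring               # the population shrinks according to the schedule
        evaluations += len(pop)
    return max(map(onemax, pop)), evaluations

if __name__ == "__main__":
    print("best fitness, evaluations used:", run_svps_ga())
```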
Related papers
- Improving Time and Memory Efficiency of Genetic Algorithms by Storing
Populations as Minimum Spanning Trees of Patches [0.0]
In evolutionary algorithms the computational cost of applying operators and storing populations is comparable to the cost of fitness evaluation.
We show that storing the population as a minimum spanning tree, where vertices correspond to individuals but only contain meta-information about them, is a viable alternative to the straightforward implementation.
Our experiments suggest that significant improvements -- including in the execution of crossover operators -- can be achieved in terms of both memory usage and computational costs.
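A much-simplified sketch of the patch-storage idea: each individual is kept only as a diff (set of flipped bit positions) relative to a parent node, and the full genotype is materialised on demand. The tree construction here is arbitrary, not the paper's minimum-spanning-tree algorithm:

```python
class PatchNode:
    """Stores an individual as a diff (flipped bit positions) w.r.t. its parent node."""
    def __init__(self, parent=None, flipped=frozenset(), root_bits=None):
        self.parent = parent        # None for the root individual
        self.flipped = flipped      # positions that differ from the parent
        self.root_bits = root_bits  # full genotype, kept only at the root

    def materialise(self):
        # Walk up to the root, accumulating flips, then apply them to the root genotype.
        flips, node = set(), self
        while node.parent is not None:
            flips ^= node.flipped
            node = node.parent
        bits = list(node.root_bits)
        for i in flips:
            bits[i] ^= 1
        return bits

# Usage: a root individual plus two descendants stored only as patches.
root = PatchNode(root_bits=[0] * 8)
child = PatchNode(parent=root, flipped=frozenset({1, 3}))
grandchild = PatchNode(parent=child, flipped=frozenset({3, 7}))
print(grandchild.materialise())  # bits 1 and 7 end up flipped; bit 3 is flipped twice
```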
arXiv Detail & Related papers (2023-06-29T05:12:14Z) - From Understanding Genetic Drift to a Smart-Restart Mechanism for
Estimation-of-Distribution Algorithms [16.904475483445452]
We develop a smart-restart mechanism for estimation-of-distribution algorithms (EDAs).
By stopping runs when the risk for genetic drift is high, it automatically runs the EDA in good parameter regimes.
We show that the smart-restart mechanism finds much better values for the population size than those suggested in the literature.
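A hedged sketch of the restart idea, using a minimal UMDA on OneMax and simply doubling the population size after each unsuccessful run; the paper's actual stopping rule, based on estimating the risk of genetic drift, is not reproduced here:

```python
import random

def umda_run(n_bits, pop_size, max_evals, rng):
    """Minimal UMDA: sample, keep the best half, re-estimate marginal frequencies."""
    p = [0.5] * n_bits
    evals = 0
    while evals + pop_size <= max_evals:
        pop = [[int(rng.random() < pi) for pi in p] for _ in range(pop_size)]
        evals += pop_size
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) == n_bits:
            return True, evals
        best = pop[: pop_size // 2]
        p = [max(1 / n_bits, min(1 - 1 / n_bits, sum(col) / len(best)))
             for col in zip(*best)]
    return False, evals

def smart_restart(n_bits=30, budget_per_run=50_000, seed=0):
    rng = random.Random(seed)
    pop_size = 16
    while pop_size <= budget_per_run:
        ok, evals = umda_run(n_bits, pop_size, budget_per_run, rng)
        if ok:
            return pop_size, evals
        pop_size *= 2   # restart with a larger population when a run fails
    return None

print("population size, evaluations:", smart_restart())
```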
arXiv Detail & Related papers (2022-06-18T02:46:52Z) - The Benefits of Implicit Regularization from SGD in Least Squares
Problems [116.85246178212616]
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice.
We make comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression.
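A small numerical illustration of the kind of comparison described above: tail-averaged, constant-step-size SGD on an unregularized least-squares problem lands near a ridge solution with a small explicit penalty. The data, step size, and penalty below are arbitrary choices, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.5 * rng.normal(size=n)

# Constant-step-size SGD on the *unregularized* least-squares objective, tail-averaged.
w, lr, tail = np.zeros(d), 0.01, []
for t in range(20_000):
    i = rng.integers(n)
    w = w - lr * (X[i] @ w - y[i]) * X[i]
    if t >= 10_000:
        tail.append(w.copy())
w_sgd = np.mean(tail, axis=0)

# Ridge regression with a small explicit penalty, for comparison.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print("||w_sgd - w_ridge|| =", np.linalg.norm(w_sgd - w_ridge))
```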
arXiv Detail & Related papers (2021-08-10T09:56:47Z) - Population network structure impacts genetic algorithm optimisation
performance [0.0]
A genetic algorithm (GA) is a search method that optimises a population of solutions by simulating natural evolution.
Social networks can condition the likelihood that two individuals mate.
We introduce the Networked Genetic Algorithm (NGA) to evaluate how various random and scale-free population networks influence the optimisation performance of GAs.
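A minimal sketch of the core mechanism: each individual sits on a node of a population network and may only recombine with a graph neighbour. The ring topology and OneMax fitness below are stand-ins for the random and scale-free networks studied in the paper:

```python
import random

def networked_ga_step(pop, neighbours, fitness, rng):
    """One generation in which mating is restricted to network neighbours."""
    new_pop = []
    for i, parent in enumerate(pop):
        mate = pop[rng.choice(neighbours[i])]   # mate chosen only among graph neighbours
        child = [a if rng.random() < 0.5 else b for a, b in zip(parent, mate)]
        child[rng.randrange(len(child))] ^= 1   # one-bit mutation
        new_pop.append(max(parent, child, key=fitness))  # local replacement
    return new_pop

rng = random.Random(1)
n_nodes, n_bits = 20, 16
pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_nodes)]
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}  # ring network
for _ in range(200):
    pop = networked_ga_step(pop, neighbours, fitness=sum, rng=rng)
print("best fitness:", max(map(sum, pop)))
```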
arXiv Detail & Related papers (2021-04-09T09:06:04Z) - Stochastic Reweighted Gradient Descent [4.355567556995855]
We propose an importance-sampling-based algorithm we call SRG (stochastic reweighted gradient).
We pay particular attention to the time and memory overhead of our proposed method.
We present empirical results to support our findings.
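For context, a generic importance-sampled gradient step: examples are drawn from a non-uniform distribution and their gradients reweighted by 1/(n p_i) to keep the update unbiased. The fixed sampling distribution below is an assumption; SRG itself adapts the distribution during training:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# A fixed, non-uniform sampling distribution (here proportional to ||x_i||).
p = np.linalg.norm(X, axis=1)
p /= p.sum()

w, lr = np.zeros(d), 0.05
for _ in range(5000):
    i = rng.choice(n, p=p)
    grad_i = (X[i] @ w - y[i]) * X[i]   # per-example least-squares gradient
    w -= lr * grad_i / (n * p[i])       # reweighting keeps the update unbiased
print("training MSE:", np.mean((X @ w - y) ** 2))
```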
arXiv Detail & Related papers (2021-03-23T04:09:43Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic
Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, up to an error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
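A toy sketch of one way to let the minibatch size evolve, growing it over iterations so the gradient variance attenuates while the step size stays constant; the linear growth rule is a placeholder, not the adaptive scheme proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.2 * rng.normal(size=n)

w, lr = np.zeros(d), 0.05
for t in range(200):
    batch = 8 + t // 10                                  # batch size grows during training
    idx = rng.choice(n, size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch      # minibatch least-squares gradient
    w -= lr * grad
print("final batch size:", batch, " MSE:", np.mean((X @ w - y) ** 2))
```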
arXiv Detail & Related papers (2020-07-02T16:02:02Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages over standard gradient methods on two popular deep learning tasks.
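For context, a generic DP-SGD-style update (per-example gradient clipping plus Gaussian noise); this is not the paper's adaptive algorithms, and the clip norm and noise scale below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def per_example_grads(w, Xb, yb):
    """Logistic-loss gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return (p - yb)[:, None] * Xb

w, lr, clip, sigma, batch = np.zeros(d), 0.1, 1.0, 1.0, 50
for _ in range(500):
    idx = rng.choice(n, size=batch, replace=False)
    g = per_example_grads(w, X[idx], y[idx])
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip)                # clip each per-example gradient
    noisy = (g.sum(axis=0) + sigma * clip * rng.normal(size=d)) / batch
    w -= lr * noisy                                      # noisy, privacy-preserving step
print("train accuracy:", np.mean((X @ w > 0) == (y > 0.5)))
```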
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - A Framework for Sample Efficient Interval Estimation with Control
Variates [94.32811054797148]
We consider the problem of estimating confidence intervals for the mean of a random variable.
Under certain conditions, we show improved efficiency compared to existing estimation algorithms.
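A textbook control-variate estimator of a mean, with a normal-approximation confidence interval, to make the ingredient concrete; the integrand and control variable below are illustrative, not the framework of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.uniform(size=n)
x = np.exp(u)   # estimate E[e^U] for U ~ Uniform(0, 1); the true value is e - 1
c = u           # control variate with known mean E[U] = 0.5

# Near-optimal coefficient beta = Cov(x, c) / Var(c); subtracting beta * (c - E[c]) lowers variance.
beta = np.cov(x, c)[0, 1] / np.var(c)
x_cv = x - beta * (c - 0.5)

for name, sample in [("plain", x), ("control variate", x_cv)]:
    mean = sample.mean()
    half = 1.96 * sample.std(ddof=1) / np.sqrt(n)        # normal-approximation 95% interval
    print(f"{name:>16}: {mean:.4f} +/- {half:.4f}")
```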
arXiv Detail & Related papers (2020-06-18T05:42:30Z) - From Understanding Genetic Drift to a Smart-Restart Parameter-less
Compact Genetic Algorithm [15.56430085052365]
In the regime with no genetic drift, the runtime is roughly proportional to the population size.
We propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size.
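A hedged sketch of a parameter-less wrapper around a minimal compact GA on OneMax: the (virtual) population size is doubled after each failed run. The paper's scheme is driven by a genetic-drift analysis that this sketch does not attempt to reproduce:

```python
import random

def cga_run(n_bits, K, max_iters, rng):
    """Minimal compact GA: the probability vector moves in steps of 1/K."""
    p = [0.5] * n_bits
    for _ in range(max_iters):
        a = [int(rng.random() < pi) for pi in p]
        b = [int(rng.random() < pi) for pi in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        if sum(winner) == n_bits:
            return True
        for i in range(n_bits):
            if winner[i] != loser[i]:
                p[i] += 1.0 / K if winner[i] else -1.0 / K
                p[i] = min(1 - 1 / n_bits, max(1 / n_bits, p[i]))
    return False

def parameterless_cga(n_bits=30, max_iters=50_000, seed=0):
    rng = random.Random(seed)
    K = 4
    while K <= 2 ** 20:
        if cga_run(n_bits, K, max_iters, rng):
            return K
        K *= 2   # restart with a larger virtual population size
    return None

print("succeeded with virtual population size:", parameterless_cga())
```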
arXiv Detail & Related papers (2020-04-15T15:12:01Z) - Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new detached error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex problems.
We also propose DEFA to accelerate the generalization of DEF; it shows better generalization bounds than DEF.
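For reference, a plain error-feedback baseline for top-k-sparsified SGD, simulated on a single machine; the detached error feedback (DEF) modification proposed in the paper is not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 20, 4
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def topk(v, k):
    """Keep only the k largest-magnitude entries of v."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

w, lr, memory = np.zeros(d), 0.05, np.zeros(d)
for _ in range(2000):
    i = rng.integers(n)
    grad = (X[i] @ w - y[i]) * X[i]
    corrected = grad + memory      # add back what earlier sparsifications dropped
    sent = topk(corrected, k)      # only this sparse vector would be communicated
    memory = corrected - sent      # error feedback: remember the dropped residual
    w -= lr * sent
print("MSE:", np.mean((X @ w - y) ** 2))
```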
arXiv Detail & Related papers (2020-04-11T03:50:59Z) - Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
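As background for the variance-reduction methods named above, a minimal SVRG-style gradient estimate on least squares; the random-top-k sparsification the paper introduces is not reproduced here, since its definition is not given in this summary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def full_grad(w):
    return X.T @ (X @ w - y) / n

w, lr = np.zeros(d), 0.02
for epoch in range(20):
    w_snap, g_snap = w.copy(), full_grad(w)         # occasional full-batch gradient
    for _ in range(n):
        i = rng.integers(n)
        gi = (X[i] @ w - y[i]) * X[i]               # stochastic gradient at the current point
        gi_snap = (X[i] @ w_snap - y[i]) * X[i]     # same example, evaluated at the snapshot
        w -= lr * (gi - gi_snap + g_snap)           # variance-reduced update
print("MSE:", np.mean((X @ w - y) ** 2))
```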
arXiv Detail & Related papers (2020-01-27T08:23:58Z)