Tip the Balance: Improving Exploration of Balanced Crossover Operators by Adaptive Bias
- URL: http://arxiv.org/abs/2004.11331v1
- Date: Thu, 23 Apr 2020 17:26:43 GMT
- Title: Tip the Balance: Improving Exploration of Balanced Crossover Operators by Adaptive Bias
- Authors: Luca Manzoni, Luca Mariot, Eva Tuba
- Abstract summary: The use of balanced crossover operators in Genetic Algorithms (GA) ensures that the binary strings generated as offspring have the same Hamming weight as the parents.
Although this method reduces the size of the search space, the resulting fitness landscape often becomes more difficult for the GA to explore.
This paper studies the issue by applying an adaptive bias strategy to a counter-based crossover operator.
- Score: 2.610470075814367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of balanced crossover operators in Genetic Algorithms (GA) ensures
that the binary strings generated as offspring have the same Hamming weight as
the parents, a constraint which is sought in certain discrete optimization
problems. Although this method reduces the size of the search space, the
resulting fitness landscape often becomes more difficult for the GA to explore
and to discover optimal solutions. This paper addresses the issue by applying
an adaptive bias strategy to a counter-based crossover operator that introduces
unbalancedness in the offspring with a certain probability, which is decreased
throughout the evolutionary process. Experiments show that improving the
exploration of the search space with this adaptive bias strategy benefits the
GA's performance in terms of the number of optimal solutions found for the
balanced nonlinear Boolean functions problem.
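As a rough illustration of the mechanism, the Python sketch below implements a counter-based crossover that keeps offspring balanced, plus a bias that, with some probability, lets the balancedness constraint lapse at a locus. The function names, the per-locus parent choice, the linear schedule, and the initial bias p0 are our illustrative assumptions, not the paper's exact implementation.

```python
import random

def counter_based_crossover(p1, p2, bias_prob):
    """Sketch of a counter-based balanced crossover with adaptive bias.

    Bits are picked locus by locus from either parent while counters
    track how many ones and zeros have been emitted, so the child keeps
    the parents' Hamming weight k. With probability bias_prob the
    counters are ignored at a locus, deliberately allowing an
    unbalanced child (the "adaptive bias").
    """
    n, k = len(p1), sum(p1)          # k = target Hamming weight (balanced parents)
    ones = zeros = 0                 # counters for bits emitted so far
    child = []
    for i in range(n):
        bit = random.choice((p1[i], p2[i]))
        if random.random() >= bias_prob:        # enforce balancedness
            if bit == 1 and ones >= k:          # quota of ones used up
                bit = 0
            elif bit == 0 and zeros >= n - k:   # quota of zeros used up
                bit = 1
        ones += bit
        zeros += 1 - bit
        child.append(bit)
    return child

def bias_schedule(gen, max_gen, p0=0.2):
    """Bias decreases over the run, restoring strict balancedness late on."""
    return p0 * (1.0 - gen / max_gen)
```

With bias_prob = 0 this reduces to a plain balanced crossover; the decreasing schedule trades early exploration of unbalanced strings for feasibility late in the run.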
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
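As a hedged sketch of what one normalized error-feedback step can look like (this is a standard error-feedback template with a normalized update direction, not the authors' code; top_k stands in for any contractive compressor):

```python
import numpy as np

def top_k(v, k=10):
    """Keep the k largest-magnitude entries (a common contractive compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def normalized_ef_step(x, e, grad, gamma, compress=top_k):
    """One error-feedback step with a normalized update (illustrative)."""
    m = compress(e + grad)           # compressed, error-corrected message
    e = e + grad - m                 # carry the compression error forward
    norm = np.linalg.norm(m)
    if norm > 0:
        x = x - gamma * m / norm     # normalization permits larger stepsizes
    return x, e
```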
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Genetic Engineering Algorithm (GEA): An Efficient Metaheuristic Algorithm for Solving Combinatorial Optimization Problems [1.8434042562191815]
Genetic Algorithms (GAs) are known for their efficiency in solving optimization problems.
This paper proposes a new metaheuristic algorithm called the Genetic Engineering Algorithm (GEA) that draws inspiration from genetic engineering concepts.
arXiv Detail & Related papers (2023-09-28T13:05:30Z)
- The FAIRy Tale of Genetic Algorithms [1.0957528713294875]
We extend the Findable, Accessible, Interoperable and Reusable (FAIR) data principles to enable the reproducibility and reusability of Genetic Algorithms.
We present an overview of methodological developments and variants of GA, which make it challenging to reproduce or even find the right source.
This work can be extended to numerous machine learning algorithms/methods.
arXiv Detail & Related papers (2023-04-29T11:36:09Z)
- Designing Biological Sequences via Meta-Reinforcement Learning and Bayesian Optimization [68.28697120944116]
We train an autoregressive generative model via Meta-Reinforcement Learning to propose promising sequences for selection.
We pose this problem as that of finding an optimal policy over a distribution of MDPs induced by sampling subsets of the data.
Our in-silico experiments show that meta-learning over such ensembles provides robustness against reward misspecification and achieves competitive results.
arXiv Detail & Related papers (2022-09-13T18:37:27Z)
- The Influence of Local Search over Genetic Algorithms with Balanced Representations [2.610470075814367]
We show that adding local search actually increases the diversity among the individuals in the population.
We link these findings to some recent results on fitness landscape analysis for problems on Boolean functions.
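On balanced representations the natural local move is a swap of a 1 with a 0, which preserves the Hamming weight. A minimal hill-climbing sketch under that assumption (the move budget and acceptance rule here are our choices, not necessarily the paper's):

```python
import random

def swap_local_search(bits, fitness, max_moves=100):
    """Weight-preserving hill climbing: swap a 1 with a 0, keep improvements."""
    best, best_fit = list(bits), fitness(bits)
    for _ in range(max_moves):
        i = random.choice([j for j, b in enumerate(best) if b == 1])
        k = random.choice([j for j, b in enumerate(best) if b == 0])
        cand = list(best)
        cand[i], cand[k] = 0, 1        # swap keeps the Hamming weight fixed
        f = fitness(cand)
        if f > best_fit:               # accept only improving swaps
            best, best_fit = cand, f
    return best
```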
arXiv Detail & Related papers (2022-06-22T10:59:26Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods for solving monotone and structured non-monotone VIPs.
Our results match the best known ones in the light-tailed case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
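For reference, the clipping operator in question is the standard one (our notation): a stochastic estimate $g$ is rescaled as
$$\operatorname{clip}_{\lambda}(g) = \min\!\left(1, \frac{\lambda}{\|g\|}\right) g,$$
which leaves short vectors untouched and caps the norm of heavy-tailed outliers at the threshold $\lambda$.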
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
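For context, the SGDA update for a min-max problem $\min_x \max_y f(x,y)$ with stochastic gradients $g_x, g_y$ is
$$x_{t+1} = x_t - \gamma\, g_x(x_t, y_t; \xi_t), \qquad y_{t+1} = y_t + \gamma\, g_y(x_t, y_t; \xi_t),$$
and classical co-coercivity of an operator $F$ reads $\langle F(u) - F(v),\, u - v\rangle \ge \frac{1}{\ell}\,\|F(u) - F(v)\|^2$; expected co-coercivity, as the paper defines it, relaxes this to hold in expectation over the stochastic oracle.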
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- A Rank based Adaptive Mutation in Genetic Algorithm [0.0]
This paper presents an alternative approach that generates the mutation probability from chromosome rank, avoiding susceptibility to the fitness distribution.
Experiments compare a simple genetic algorithm (SGA) with constant mutation probability against the adaptive approaches under a limited resource constraint.
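A minimal sketch of what a rank-based mutation rate can look like; the linear map and the bounds p_min/p_max are our illustrative assumptions, not the paper's exact rule:

```python
def rank_mutation_prob(rank, pop_size, p_min=0.001, p_max=0.05):
    """Mutation probability from rank alone (rank 1 = fittest, pop_size >= 2).

    Depending only on rank, not raw fitness values, makes the rate
    insensitive to the shape of the fitness distribution.
    """
    return p_min + (p_max - p_min) * (rank - 1) / (pop_size - 1)
```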
arXiv Detail & Related papers (2021-04-18T12:41:33Z)
- The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization [0.0]
We prove that Nesterov's extrapolation has the strength to make the individual convergence of gradient descent methods optimal for nonsmooth problems.
We give an extension of the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings.
Our method is applicable as an efficient tool for solving large-scale $l_1$-regularized hinge-loss learning problems.
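The extrapolation in question is the classical Nesterov step (the particular stepsize and momentum sequences that yield optimal individual convergence are the paper's contribution and are not reproduced here):
$$y_t = x_t + \beta_t\,(x_t - x_{t-1}), \qquad x_{t+1} = y_t - \gamma_t\, g_t, \quad g_t \in \partial f(y_t).$$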
arXiv Detail & Related papers (2020-06-08T03:35:41Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.