Genetic Engineering Algorithm (GEA): An Efficient Metaheuristic
Algorithm for Solving Combinatorial Optimization Problems
- URL: http://arxiv.org/abs/2309.16413v1
- Date: Thu, 28 Sep 2023 13:05:30 GMT
- Title: Genetic Engineering Algorithm (GEA): An Efficient Metaheuristic
Algorithm for Solving Combinatorial Optimization Problems
- Authors: Majid Sohrabi, Amir M. Fathollahi-Fard, and Vasilii A. Gromov
- Abstract summary: Genetic Algorithms (GAs) are known for their efficiency in solving optimization problems.
This paper proposes a new metaheuristic algorithm called the Genetic Engineering Algorithm (GEA) that draws inspiration from genetic engineering concepts.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Genetic Algorithms (GAs) are known for their efficiency in solving
combinatorial optimization problems, thanks to their ability to explore diverse
solution spaces, handle various representations, exploit parallelism, preserve
good solutions, adapt to changing dynamics, handle combinatorial diversity, and
provide heuristic search. However, limitations such as premature convergence,
lack of problem-specific knowledge, and randomness of crossover and mutation
operators make GAs generally inefficient in finding an optimal solution. To
address these limitations, this paper proposes a new metaheuristic algorithm
called the Genetic Engineering Algorithm (GEA) that draws inspiration from
genetic engineering concepts. GEA redesigns the traditional GA while
incorporating new search methods to isolate, purify, insert, and express new
genes based on existing ones, leading to the emergence of desired traits and
the production of specific chromosomes based on the selected genes. Comparative
evaluations against state-of-the-art algorithms on benchmark instances
demonstrate the superior performance of GEA, showcasing its potential as an
innovative and efficient solution for combinatorial optimization problems.
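
The abstract describes GEA only at a high level (isolating, purifying, inserting, and expressing genes on top of a conventional GA loop). As a rough illustration of that idea only, here is a minimal Python sketch of a GEA-style loop; the operator names, rates, and the "gene engineering" step (copying the most frequent gene values of elite chromosomes into some offspring) are assumptions made for this example, not the authors' implementation.

import random

# Minimal, illustrative GEA-style loop (not the authors' code).
# "Gene engineering" here: isolate the most frequent gene values among elite
# chromosomes and insert (express) them into a fraction of the offspring.

def fitness(chrom):                      # toy objective: maximize number of 1s
    return sum(chrom)

def isolate_genes(elites):
    """Return, per locus, the gene value that occurs most often in the elites."""
    n = len(elites[0])
    return [max((0, 1), key=lambda v: sum(e[i] == v for e in elites)) for i in range(n)]

def gea(n_genes=30, pop_size=40, elites_k=5, insert_rate=0.3, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:elites_k]
        good_genes = isolate_genes(elites)           # "isolate/purify" step
        offspring = []
        while len(offspring) < pop_size - elites_k:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, n_genes)
            child = p1[:cut] + p2[cut:]              # one-point crossover
            for i in range(n_genes):
                if random.random() < 1.0 / n_genes:  # standard mutation
                    child[i] ^= 1
                if random.random() < insert_rate:    # "insert/express" an engineered gene
                    child[i] = good_genes[i]
            offspring.append(child)
        pop = elites + offspring                     # preserve good solutions
    return max(pop, key=fitness)

best = gea()
print(fitness(best), best)
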
Related papers
- GARA: A novel approach to Improve Genetic Algorithms' Accuracy and Efficiency by Utilizing Relationships among Genes [1.7226572355808027]
We propose the Gene Regulatory Genetic Algorithm (GRGA), which is the first to utilize relationships among genes to improve GA's accuracy and efficiency.
We use a directed multipartite graph encapsulating the solution space, called RGGR, where each node corresponds to a gene in the solution and each edge represents the relationship between adjacent nodes.
The obtained RGGR is then employed to determine appropriate loci of crossover and mutation operators, thereby directing the evolutionary process toward faster and better convergence.
arXiv Detail & Related papers (2024-04-28T08:33:39Z)
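
The GRGA summary above amounts to biasing where crossover (and mutation) acts using a graph of gene relationships. The following is a hypothetical Python sketch of that general idea only; the graph construction, edge weights, and sampling rule are invented for illustration and are not taken from the GRGA paper.

import random

# Hypothetical graph-guided crossover in the spirit of GRGA: cut points are
# sampled where the relationship between adjacent genes is weak, so strongly
# related gene pairs tend to be inherited together.

def relationship_guided_crossover(p1, p2, edge_weight):
    """edge_weight[i] = assumed relationship strength between locus i and locus i+1."""
    inv = [1.0 / (1e-9 + w) for w in edge_weight]    # prefer cutting weak links
    total = sum(inv)
    r, acc = random.random() * total, 0.0
    cut = 1
    for i, v in enumerate(inv):
        acc += v
        if acc >= r:
            cut = i + 1
            break
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Usage with made-up relationship strengths between adjacent loci.
weights = [0.9, 0.8, 0.1, 0.7, 0.9]          # weak link between loci 2 and 3
c1, c2 = relationship_guided_crossover([1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1], weights)
print(c1, c2)
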
- A Review on Quantum Approximate Optimization Algorithm and its Variants [47.89542334125886]
The Quantum Approximate Optimization Algorithm (QAOA) is a highly promising variational quantum algorithm that aims to solve intractable optimization problems.
This comprehensive review offers an overview of the current state of QAOA, encompassing its performance analysis in diverse scenarios.
We conduct a comparative study of selected QAOA extensions and variants, while exploring future prospects and directions for the algorithm.
arXiv Detail & Related papers (2023-06-15T15:28:12Z)
- The FAIRy Tale of Genetic Algorithms [1.0957528713294875]
We have extended the Findable, Accessible, Interoperable and Reusable (FAIR) data principles to enable the reproducibility and reusability of Genetic Algorithms.
We have presented an overview of methodological developments and variants of GA that make it challenging to reproduce or even find the right source.
This work can be extended to numerous machine learning algorithms/methods.
arXiv Detail & Related papers (2023-04-29T11:36:09Z)
- Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization [13.131971623143622]
We discover entirely new genetic algorithms in a data-driven fashion.
We parametrize selection and mutation rate adaptation as cross- and self-attention modules.
The learned algorithm can be applied to previously unseen optimization problems, search dimensions & evaluation budgets.
arXiv Detail & Related papers (2023-04-08T12:14:15Z)
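
To make "parametrizing selection as an attention module" concrete, here is a toy NumPy sketch: self-attention over per-individual features (e.g., rank-normalized fitness) produces a selection distribution over parents. The feature choice, dimensions, and random projection matrices are assumptions for illustration; in the paper's setting these parameters would be meta-learned, and the actual architecture may differ.

import numpy as np

# Toy sketch: self-attention over per-individual features producing selection
# probabilities. W_q, W_k, W_v, w_out are random stand-ins for meta-learned weights.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_selection_probs(features, W_q, W_k, W_v, w_out):
    q, k, v = features @ W_q, features @ W_k, features @ W_v
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (pop, pop) attention weights
    h = attn @ v                                     # mix information across the population
    return softmax(h @ w_out)                        # selection distribution over parents

rng = np.random.default_rng(0)
pop_size, d_feat, d_model = 8, 3, 16
features = rng.normal(size=(pop_size, d_feat))        # stand-in for fitness statistics
W_q, W_k, W_v = (rng.normal(size=(d_feat, d_model)) for _ in range(3))
w_out = rng.normal(size=d_model)
print(attention_selection_probs(features, W_q, W_k, W_v, w_out))
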
- Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods [73.35353358543507]
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-max optimization and variational inequality problems (VIPs).
In this paper, we propose a unified convergence analysis that covers a large variety of descent-ascent methods.
We develop several new variants of SGDA, such as a new variance-reduced method (L-SVRGDA), new distributed methods with compression (QSGDA, DIANA-SGDA, VR-DIANA-SGDA), and a new method with coordinate randomization (SEGA-SGDA).
arXiv Detail & Related papers (2022-02-15T09:17:39Z)
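
For readers unfamiliar with SGDA itself, here is a minimal sketch of plain stochastic gradient descent-ascent on a toy saddle-point problem; the variance-reduced and compressed variants named above (e.g., L-SVRGDA, QSGDA) are not reproduced here, and the objective, step size, and noise level are arbitrary choices for illustration.

import random

# Plain stochastic gradient descent-ascent (SGDA) on the toy saddle-point problem
#   min_x max_y  f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
# with noisy gradient estimates standing in for stochastic gradients.

def grad_x(x, y):     # d f / d x, plus noise
    return x + y + random.gauss(0.0, 0.1)

def grad_y(x, y):     # d f / d y, plus noise
    return x - y + random.gauss(0.0, 0.1)

x, y, lr = 3.0, -2.0, 0.05
for t in range(2000):
    gx, gy = grad_x(x, y), grad_y(x, y)
    x -= lr * gx      # descent step on the minimization variable
    y += lr * gy      # ascent step on the maximization variable
print(x, y)           # hovers near the saddle point (0, 0)
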
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it with a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
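
GSEMO, the algorithm analyzed above, is a simple archive-based multi-objective EA. Below is a generic Python sketch of GSEMO on a toy bi-objective bitstring problem; the two objectives are placeholders for illustration, not the result-diversification objectives studied in the paper.

import random

# Generic GSEMO sketch on a toy bi-objective problem over bitstrings:
#   f1 = number of ones, f2 = number of leading zeros (both maximized).

def objectives(x):
    ones = sum(x)
    leading_zeros = next((i for i, b in enumerate(x) if b == 1), len(x))
    return (ones, leading_zeros)

def dominates(a, b):
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def gsemo(n=20, iterations=5000):
    x = [random.randint(0, 1) for _ in range(n)]
    archive = [(x, objectives(x))]                     # non-dominated archive
    for _ in range(iterations):
        parent, _ = random.choice(archive)
        child = [b ^ (random.random() < 1.0 / n) for b in parent]   # bit-flip mutation
        fc = objectives(child)
        if any(dominates(f, fc) or f == fc for _, f in archive):
            continue                                   # child is dominated or a duplicate
        archive = [(y, f) for y, f in archive if not dominates(fc, f)]
        archive.append((child, fc))
    return archive

for sol, objs in gsemo():
    print(objs)
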
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Hybrid Henry Gas Solubility Optimization Algorithm with Dynamic Cluster-to-Algorithm Mapping for Search-based Software Engineering Problems [1.0323063834827413]
This paper discusses a new variant of the Henry Gas Solubility Optimization (HGSO) Algorithm, called Hybrid HGSO (HHGSO).
Unlike its predecessor, HHGSO allows multiple clusters serving different individual meta-heuristic algorithms to coexist within the same population.
Exploiting the dynamic cluster-to-algorithm mapping via a penalty-and-reward model with an adaptive switching factor, HHGSO offers a novel approach to meta-heuristic hybridization.
arXiv Detail & Related papers (2021-05-31T12:42:15Z)
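
As a purely schematic sketch of the general idea behind HHGSO's hybridization (one population split into clusters, each cluster updated by a different metaheuristic-style rule, with clusters switching rules when their reward is poor), here is a small Python example. The stand-in operators, the switching rule, and the reward definition are all assumptions for illustration; the actual HGSO update equations and penalty/reward model are not reproduced here.

import random

# Schematic dynamic cluster-to-algorithm mapping: clusters whose reward
# (best-fitness improvement) is poor switch to a different update rule.

def sphere(x):                                   # toy minimization objective
    return sum(v * v for v in x)

def random_walk(x):                              # stand-in "algorithm" A
    return [v + random.gauss(0.0, 0.1) for v in x]

def greedy_shrink(x):                            # stand-in "algorithm" B
    return [0.9 * v for v in x]

ALGORITHMS = [random_walk, greedy_shrink]
clusters = [[[random.uniform(-5, 5) for _ in range(5)] for _ in range(10)] for _ in range(2)]
mapping = [0, 1]                                 # cluster index -> algorithm index

for generation in range(50):
    for c, cluster in enumerate(clusters):
        algo = ALGORITHMS[mapping[c]]
        before = min(map(sphere, cluster))
        clusters[c] = [min((ind, algo(ind)), key=sphere) for ind in cluster]   # keep the better of old/new
        reward = before - min(map(sphere, clusters[c]))
        if reward <= 0 and random.random() < 0.5:      # no improvement -> maybe switch algorithm
            mapping[c] = (mapping[c] + 1) % len(ALGORITHMS)

print(min(sphere(ind) for cluster in clusters for ind in cluster))
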
- Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums [45.91933657088324]
We conduct a systematic study of algorithmic techniques for finding near-stationary points of convex finite-sums.
Our main contributions are several algorithmic discoveries.
We put an emphasis on the simplicity and practicality of the new schemes, which could facilitate future developments.
arXiv Detail & Related papers (2021-05-25T16:46:35Z)
- cMLSGA: A Co-Evolutionary Multi-Level Selection Genetic Algorithm for Multi-Objective Optimization [0.0]
The Multi-Level Selection Genetic Algorithm (MLSGA) already shows good performance on a range of problems.
This paper proposes a distinct set of co-evolutionary mechanisms, which defines co-evolution as competition between collectives rather than individuals.
arXiv Detail & Related papers (2021-04-22T13:52:21Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)