Biased Random-Key Genetic Algorithms: A Review
- URL: http://arxiv.org/abs/2312.00961v2
- Date: Wed, 6 Dec 2023 16:20:27 GMT
- Title: Biased Random-Key Genetic Algorithms: A Review
- Authors: Mariana A. Londe, Luciana S. Pessoa, Carlos E. Andrade, Mauricio G. C. Resende
- Abstract summary: The review encompasses over 150 papers with a wide range of applications.
Scheduling is by far the most prevalent application area in this review, followed by network design and location problems.
The most frequent hybridization method employed is local search, and new features aim to increase population diversity.
- Score: 2.4578723416255754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is a comprehensive literature review of Biased Random-Key Genetic
Algorithms (BRKGA). BRKGA is a metaheuristic that employs random-key-based
chromosomes with biased, uniform, and elitist mating strategies in a genetic
algorithm framework. The review encompasses over 150 papers with a wide range
of applications, including classical combinatorial optimization problems,
real-world industrial use cases, and non-orthodox applications such as neural
network hyperparameter tuning in machine learning. Scheduling is by far the
most prevalent application area in this review, followed by network design and
location problems. The most frequent hybridization method employed is local
search, and new features aim to increase population diversity. Overall, this
survey provides a comprehensive overview of the BRKGA metaheuristic and its
applications and highlights important areas for future research.
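The biased, uniform, and elitist mating scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration only, not a reference implementation: the population sizes, the bias probability rho=0.7, and the toy sorting decoder are all assumptions made for the example.

```python
import random

def evolve(population, fitness, n_keys, elite_frac=0.2, mutant_frac=0.1, rho=0.7):
    """One BRKGA generation: elitism, biased uniform crossover, mutant injection."""
    pop = sorted(population, key=fitness)           # ascending: best (lowest) first
    n = len(pop)
    n_elite = max(1, int(elite_frac * n))
    n_mutants = max(1, int(mutant_frac * n))
    elite = pop[:n_elite]
    next_pop = list(elite)                          # elitist: best survive unchanged
    # Biased crossover: each key is taken from the elite parent with probability rho.
    while len(next_pop) < n - n_mutants:
        e = random.choice(elite)
        o = random.choice(pop[n_elite:])
        child = [e[i] if random.random() < rho else o[i] for i in range(n_keys)]
        next_pop.append(child)
    # Mutants: brand-new random-key vectors keep the population diverse.
    next_pop += [[random.random() for _ in range(n_keys)] for _ in range(n_mutants)]
    return next_pop

# Toy decoder/fitness (hypothetical): sort the keys to obtain a permutation and
# minimize its total displacement from the identity permutation.
def fitness(chrom):
    perm = sorted(range(len(chrom)), key=chrom.__getitem__)
    return sum(abs(i - p) for i, p in enumerate(perm))

random.seed(0)
pop = [[random.random() for _ in range(8)] for _ in range(30)]
for _ in range(50):
    pop = evolve(pop, fitness, n_keys=8)
best = min(pop, key=fitness)
```

Because chromosomes are plain vectors of keys in [0, 1], only the problem-specific decoder changes between applications; this separation is the feature that makes the framework easy to reuse across the application areas the survey catalogs.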
Related papers
- A Learning Search Algorithm for the Restricted Longest Common Subsequence Problem [40.64116457007417]
The Restricted Longest Common Subsequence (RLCS) problem has significant applications in bioinformatics.
This paper introduces two novel approaches designed to enhance the search process by steering it towards promising regions.
An important contribution of this paper is found in the generation of real-world instances where scientific abstracts serve as input strings.
arXiv Detail & Related papers (2024-10-15T20:02:15Z)
- Early years of Biased Random-Key Genetic Algorithms: A systematic review [2.249916681499244]
This paper presents a systematic literature review and bibliometric analysis focusing on Biased Random-Key Genetic Algorithms (BRKGA).
BRKGA is a metaheuristic framework that uses random-key-based chromosomes with biased, uniform, and elitist mating strategies alongside a genetic algorithm.
arXiv Detail & Related papers (2024-05-02T22:22:41Z)
- Quantum search algorithm on weighted databases [5.229564709919574]
Grover's algorithm is a key approach for addressing unstructured search problems.
This research extensively investigates Grover's search methodology within non-uniformly distributed databases.
It is observed that the search process facilitated by this evolution does not consistently result in a speed-up.
arXiv Detail & Related papers (2023-12-04T03:15:02Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Frequent Itemset-driven Search for Finding Minimum Node Separators in Complex Networks [61.2383572324176]
We propose a frequent itemset-driven search approach, which integrates the concept of frequent itemset mining in data mining into the well-known memetic search framework.
It iteratively employs the frequent itemset recombination operator to generate promising offspring solutions based on itemsets that frequently occur in high-quality solutions.
In particular, it discovers 29 new upper bounds and matches 18 previous best-known bounds.
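The recombination idea summarized above can be sketched as follows. This is a simplified illustration only: it mines single-element frequencies (a degenerate form of frequent itemset mining, whereas the paper's operator mines multi-element itemsets), and the elite solution sets, universe, and support threshold are assumptions made for the example.

```python
import random
from collections import Counter

def frequent_core(elite_solutions, min_support=0.6):
    """Elements occurring in at least min_support of the elite solutions
    form a frequent pattern shared by high-quality solutions."""
    counts = Counter(x for sol in elite_solutions for x in set(sol))
    threshold = min_support * len(elite_solutions)
    return {x for x, c in counts.items() if c >= threshold}

def recombine(elite_solutions, universe, target_size, min_support=0.6):
    """Offspring keeps the frequent core and fills up with random elements."""
    offspring = set(frequent_core(elite_solutions, min_support))
    rest = [x for x in universe if x not in offspring]
    random.shuffle(rest)
    while len(offspring) < target_size and rest:
        offspring.add(rest.pop())
    return offspring

# Hypothetical elite node sets for a separator-style problem.
random.seed(1)
elites = [{1, 2, 3, 7}, {1, 2, 4, 8}, {1, 2, 3, 9}]
child = recombine(elites, universe=range(10), target_size=4)
```

Here `frequent_core(elites)` yields {1, 2, 3}, so every offspring inherits the elements that high-quality solutions agree on while the random fill preserves exploration.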
arXiv Detail & Related papers (2022-01-18T11:16:40Z)
- Hybrid Random Features [60.116392415715275]
We propose a new class of random feature methods for linearizing softmax and Gaussian kernels, called hybrid random features (HRFs).
HRFs automatically adapt the quality of kernel estimation to provide the most accurate approximation in the defined regions of interest.
arXiv Detail & Related papers (2021-10-08T20:22:59Z)
- Combinatorial Pure Exploration with Full-bandit Feedback and Beyond: Solving Combinatorial Optimization under Uncertainty with Limited Observation [70.41056265629815]
When developing an algorithm for optimization, it is commonly assumed that parameters such as edge weights are exactly known as inputs.
In this article, we review recently proposed techniques for pure exploration problems with limited feedback.
arXiv Detail & Related papers (2020-12-31T12:40:52Z)
- A Population-based Hybrid Approach to Hyperparameter Optimization for Neural Networks [0.0]
HBRKGA is a hybrid approach that combines the Biased Random-Key Genetic Algorithm with a Random Walk technique to search the hyperparameter space efficiently.
Results showed that HBRKGA could find hyperparameter configurations that outperformed the baseline methods in six out of eight datasets.
arXiv Detail & Related papers (2020-11-22T17:12:31Z)
- Complexity-based speciation and genotype representation for neuroevolution [81.21462458089142]
This paper introduces a speciation principle for neuroevolution where evolving networks are grouped into species based on the number of hidden neurons.
The proposed speciation principle is employed in several techniques designed to promote and preserve diversity within species and in the ecosystem as a whole.
arXiv Detail & Related papers (2020-10-11T06:26:56Z)
- Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond [35.32894170512829]
In this survey, we systematically review the work on random features from the past ten years.
First, the motivations, characteristics and contributions of representative random features based algorithms are summarized.
Second, we review theoretical results that center around the following key question: how many random features are needed to ensure high approximation quality?
Third, we provide a comprehensive evaluation of popular random features based algorithms on several large-scale benchmark datasets.
arXiv Detail & Related papers (2020-04-23T13:44:48Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in deep learning frameworks.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.