Improved Compact Genetic Algorithms with Efficient Caching
- URL: http://arxiv.org/abs/2504.02972v1
- Date: Thu, 03 Apr 2025 18:47:26 GMT
- Title: Improved Compact Genetic Algorithms with Efficient Caching
- Authors: Prasanta Dutta, Anirban Mukhopadhyay
- Abstract summary: We introduce the concept of caching in cGAs as a means of avoiding redundant evaluations of the same chromosomes. Our proposed approach operates equivalently to cGAs but enhances the algorithm's time efficiency by reducing the number of function evaluations. The proposed method further generalizes the caching mechanism with higher selection pressure for elitism-based cGAs.
- Score: 1.3270838622986498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compact Genetic Algorithms (cGAs) are condensed variants of classical Genetic Algorithms (GAs) that use a probability vector representation of the population instead of the complete population. cGAs have been shown to significantly reduce the number of function evaluations required while producing outcomes similar to those of classical GAs. However, cGAs have a tendency to repeatedly generate the same chromosomes as they approach convergence, resulting in unnecessary evaluations of identical chromosomes. This article introduces the concept of caching in cGAs as a means of avoiding redundant evaluations of the same chromosomes. Our proposed approach operates equivalently to cGAs, but enhances the algorithm's time efficiency by reducing the number of function evaluations. We also present a data structure for efficient cache maintenance to ensure low overhead. The proposed caching approach has an asymptotically constant time complexity on average. The proposed method further generalizes the caching mechanism with higher selection pressure for elitism-based cGAs. We conduct a rigorous analysis based on experiments on benchmark optimization problems using two well-known cache replacement strategies. The results demonstrate that caching significantly reduces the number of function evaluations required while maintaining the same level of performance accuracy.
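To make the mechanism concrete, below is a minimal Python sketch of a non-elitist cGA with a hash-map fitness cache. It is an illustrative reconstruction, not the authors' implementation: the function names, the LRU eviction policy, and the cache size are assumptions, and the paper additionally covers elitism-based variants with higher selection pressure and compares two cache replacement strategies.

```python
import random
from collections import OrderedDict

def cga_with_cache(fitness, length, pop_size=100, max_evals=10_000, cache_size=4096):
    """Illustrative compact GA over bitstrings with an LRU fitness cache.

    The update rule is the standard cGA winner/loser shift of 1/pop_size
    per differing bit; the cache simply memoizes fitness by chromosome.
    """
    p = [0.5] * length                      # probability vector (the "population")
    cache = OrderedDict()                   # chromosome -> fitness, in LRU order
    evals = 0

    def evaluate(chrom):
        nonlocal evals
        if chrom in cache:                  # cache hit: skip the re-evaluation
            cache.move_to_end(chrom)
            return cache[chrom]
        evals += 1
        f = fitness(chrom)
        cache[chrom] = f
        if len(cache) > cache_size:         # evict the least recently used entry
            cache.popitem(last=False)
        return f

    def sample():
        return tuple(int(random.random() < pi) for pi in p)

    best, best_f = None, float("-inf")
    while evals < max_evals and any(0.0 < pi < 1.0 for pi in p):
        a, b = sample(), sample()
        fa, fb = evaluate(a), evaluate(b)
        winner, loser = (a, b) if fa >= fb else (b, a)
        if max(fa, fb) > best_f:
            best, best_f = winner, max(fa, fb)
        for i in range(length):             # standard cGA probability update
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                p[i] = min(1.0, p[i] + step) if winner[i] else max(0.0, p[i] - step)
    return best, best_f, evals

# Example on OneMax: near convergence the same chromosomes recur, so the
# cache absorbs most would-be evaluations.
best, best_f, evals = cga_with_cache(sum, length=32)
```

Because the cache is a hash map keyed on the chromosome itself, hits cost amortized constant time, which is consistent with the abstract's average-case constant-time claim for cache maintenance.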
Related papers
- A Greedy Strategy for Graph Cut [95.2841574410968]
We propose a greedy strategy to solve the Graph Cut problem, called GGC.
It starts from the state where each data sample is regarded as a cluster and dynamically merges pairs of clusters.
GGC has nearly linear computational complexity with respect to the number of samples.
arXiv Detail & Related papers (2024-12-28T05:49:42Z)
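As a rough illustration of the greedy merge loop summarized above, here is a naive sketch; the merge criterion (largest total inter-cluster affinity) is a placeholder assumption, and this O(n^3) loop ignores the bookkeeping that gives GGC its nearly linear complexity.

```python
import numpy as np

def greedy_merge_clustering(W, k):
    """Naive greedy agglomerative sketch (placeholder criterion, not GGC's).

    Starts with every sample as its own cluster and repeatedly merges the
    pair of clusters with the largest total inter-cluster affinity until
    k clusters remain.
    """
    clusters = {i: {i} for i in range(W.shape[0])}
    A = W.astype(float)                         # summed affinity between clusters
    np.fill_diagonal(A, 0.0)
    while len(clusters) > k:
        ids = sorted(clusters)
        sub = A[np.ix_(ids, ids)]
        np.fill_diagonal(sub, -np.inf)          # forbid self-merges
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        a, b = ids[i], ids[j]
        clusters[a] |= clusters.pop(b)          # merge cluster b into a
        A[a, :] += A[b, :]                      # affinities of unions add up
        A[:, a] += A[:, b]
        A[a, a] = 0.0
    return list(clusters.values())

W = np.array([[0, 5, 1, 0],
              [5, 0, 1, 0],
              [1, 1, 0, 4],
              [0, 0, 4, 0]], dtype=float)
print(greedy_merge_clustering(W, k=2))          # -> [{0, 1}, {2, 3}]
```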
- Benchmarking of GPU-optimized Quantum-Inspired Evolutionary Optimization Algorithm using Functional Analysis [0.0]
This article presents a comparative analysis of GPU-parallelized implementations of a quantum-inspired evolutionary optimization algorithm (QIEO) and a classical genetic algorithm (GA).
The results demonstrate that QIEO performs better than GA on these benchmark functions.
arXiv Detail & Related papers (2024-12-12T06:47:23Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
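For context on what is being coupled, below is a minimal numpy sketch of standard random Fourier features for the Gaussian (RBF) kernel, with block-orthogonal frequencies shown as one classical variance-reducing coupling; the couplings studied in the paper are more general and also cover discrete input spaces.

```python
import numpy as np

def random_fourier_features(X, D, sigma=1.0, orthogonal=False, rng=None):
    """Random features z(x) with E[z(x) @ z(y)] = exp(-||x-y||^2 / (2 sigma^2)).

    With orthogonal=True, frequencies are coupled block-wise (orthogonal
    random features), a classical variance-reducing coupling.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.standard_normal((D, d))
    if orthogonal:
        for start in range(0, D, d):
            rows = W[start:start + d]
            # Replace directions by an orthonormal set, keeping the Gaussian
            # (chi-distributed) row norms so each marginal stays unchanged.
            q, _ = np.linalg.qr(rng.standard_normal((d, d)))
            W[start:start + d] = np.linalg.norm(rows, axis=1, keepdims=True) * q[:rows.shape[0]]
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W.T / sigma + b)

# Usage: Z @ Z.T approximates the RBF kernel matrix of X.
X = np.random.default_rng(0).standard_normal((5, 3))
Z = random_fourier_features(X, D=2048, orthogonal=True, rng=1)
```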
- Rethinking and Accelerating Graph Condensation: A Training-Free Approach with Class Partition [49.41718583061147]
Graph condensation is a data-centric solution that replaces a large graph with a small yet informative condensed graph.
Existing GC methods suffer from intricate optimization processes, necessitating excessive computing resources and training time.
We propose a training-free GC framework termed Class-partitioned Graph Condensation (CGC).
CGC condenses the Ogbn-products graph within 30 seconds, achieving a speedup of $10^2\times$ to $10^4\times$ and increasing accuracy by up to 4.2%.
arXiv Detail & Related papers (2024-05-22T14:57:09Z)
- Improving genetic algorithms performance via deterministic population shrinkage [9.334663477968027]
This paper presents an empirical study of the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size over a GA run following a predetermined schedule, configured by a speed and a severity parameter.
Results show several speed-severity combinations for which SVPS-GA preserves solution quality while improving performance by reducing the number of evaluations needed for success.
arXiv Detail & Related papers (2024-01-22T17:05:16Z)
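The summary above only states that the schedule is predetermined and configured by a speed and a severity parameter, so the concrete functional form below is a hypothetical illustration, not the paper's schedule.

```python
def svps_population_size(n0, gen, max_gen, speed=2.0, severity=0.5, n_min=4):
    """Hypothetical deterministic population-shrinkage schedule.

    n0       -- initial population size
    severity -- fraction of n0 removed by the end of the run
    speed    -- >1 front-loads the shrinkage, <1 delays it
    """
    progress = min(gen / max_gen, 1.0) ** (1.0 / speed)
    return max(n_min, round(n0 * (1.0 - severity * progress)))

# Each generation, the GA would truncate its fitness-sorted population:
# pop = sorted(pop, key=fitness, reverse=True)[:svps_population_size(100, gen, 500)]
```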
- A Discrete Particle Swarm Optimizer for the Design of Cryptographic Boolean Functions [1.6574413179773761]
The algorithm is a modified version of the permutation PSO by Hu, Eberhart and Shi.
The parameters for the PSO velocity equation are tuned by means of two meta-optimization techniques.
arXiv Detail & Related papers (2024-01-09T14:08:42Z)
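For reference, the canonical continuous PSO velocity update that the permutation variant modifies is sketched below; the coefficient values are common defaults, and it is these velocity-equation parameters that the two meta-optimization techniques would tune.

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Canonical continuous PSO velocity update (reference sketch).

    v, x      -- current velocity and position of one particle
    pbest     -- best position this particle has visited
    gbest     -- best position the swarm has visited
    w, c1, c2 -- inertia and acceleration coefficients
    """
    rng = np.random.default_rng(rng)
    r1, r2 = rng.random(len(v)), rng.random(len(v))
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```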
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data.
We present an approach to this problem which employs optimization techniques similar to those used in neural architecture search and AutoML.
The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z)
- Stochastic Reweighted Gradient Descent [4.355567556995855]
We propose an importance-sampling-based algorithm we call SRG (stochastic reweighted gradient).
We pay particular attention to the time and memory overhead of our proposed method.
We present empirical results to support our findings.
arXiv Detail & Related papers (2021-03-23T04:09:43Z)
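A generic importance-sampled gradient step of the kind SRG builds on looks as follows; this is the textbook reweighting identity, not the SRG algorithm itself, which additionally keeps the time and memory overhead of maintaining the sampling distribution low.

```python
import numpy as np

def reweighted_sgd_step(x, grad_i, p, lr, rng=None):
    """One importance-sampled SGD step (generic sketch, not SRG itself).

    grad_i(x, i) -- gradient of the i-th example's loss at x
    p            -- sampling distribution over the n examples

    Sampling i ~ p and scaling by 1 / (n * p[i]) keeps the step
    unbiased: E[g] equals the full-batch mean gradient.
    """
    rng = np.random.default_rng(rng)
    n = len(p)
    i = rng.choice(n, p=p)
    g = grad_i(x, i) / (n * p[i])
    return x - lr * g
```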
- Gradient Coding with Dynamic Clustering for Straggler-Tolerant Distributed Learning [55.052517095437]
Gradient descent (GD) is widely employed to parallelize the learning task by distributing the dataset across multiple workers.
A significant performance bottleneck for the per-iteration completion time in distributed synchronous GD is straggling workers.
Coded distributed techniques have been introduced recently to mitigate stragglers and to speed up GD iterations by assigning redundant computations to workers.
We propose a novel dynamic gradient coding (GC) scheme, which assigns redundant data to workers to gain the flexibility to choose from among a set of possible codes depending on past straggling behavior.
arXiv Detail & Related papers (2021-03-01T18:51:29Z)
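As a static point of reference for such schemes, below is a sketch of fractional-repetition gradient coding, one standard construction; the dynamic scheme above additionally chooses among candidate codes based on past straggling behavior.

```python
import numpy as np

def fractional_repetition_assignment(n_workers, s):
    """Fractional-repetition gradient coding (static reference scheme).

    Workers are split into groups of size s + 1; all workers in a group
    are assigned the same s + 1 data partitions, so every partition's
    gradient survives up to s stragglers per group.
    """
    assert n_workers % (s + 1) == 0, "workers must divide into groups of s+1"
    n_groups = n_workers // (s + 1)
    # Partition indices 0..n_workers-1 into one chunk of s+1 per group.
    chunks = np.array_split(np.arange(n_workers), n_groups)
    return [chunks[w // (s + 1)].tolist() for w in range(n_workers)]

# The master recovers the full gradient once one worker per group responds,
# e.g. 6 workers tolerating s = 1 straggler per group:
# fractional_repetition_assignment(6, 1)
# -> [[0, 1], [0, 1], [2, 3], [2, 3], [4, 5], [4, 5]]
```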
- Gradient Coding with Dynamic Clustering for Straggler Mitigation [57.9123881133818]
The proposed GC-DC scheme regulates the number of straggling workers in each cluster based on the straggler behavior in the previous iteration.
We numerically show that GC-DC provides significant improvements in the average per-iteration completion time with no increase in the communication load compared to the original GC scheme.
arXiv Detail & Related papers (2020-11-03T18:52:15Z)