Improving Time and Memory Efficiency of Genetic Algorithms by Storing
Populations as Minimum Spanning Trees of Patches
- URL: http://arxiv.org/abs/2306.16686v1
- Date: Thu, 29 Jun 2023 05:12:14 GMT
- Title: Improving Time and Memory Efficiency of Genetic Algorithms by Storing
Populations as Minimum Spanning Trees of Patches
- Authors: Maxim Buzdalov
- Abstract summary: In evolutionary algorithms the computational cost of applying operators and storing populations is comparable to the cost of fitness evaluation.
We show that storing the population as a minimum spanning tree, where vertices correspond to individuals but only contain meta-information about them, is a viable alternative to the straightforward implementation.
Our experiments suggest that significant, even asymptotic, improvements -- including execution of crossover operators! -- can be achieved in terms of both memory usage and computational costs.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In many applications of evolutionary algorithms the computational cost of
applying operators and storing populations is comparable to the cost of fitness
evaluation. Furthermore, by knowing what exactly has changed in an individual
by an operator, it is possible to recompute fitness value much more efficiently
than from scratch. The associated time and memory improvements have been
available for simple evolutionary algorithms, few specific genetic algorithms
and in the context of gray-box optimization, but not for all algorithms, and
the main reason is that it is difficult to achieve in algorithms using large
arbitrarily structured populations.
This paper makes a first step towards improving this situation. We show that
storing the population as a minimum spanning tree, where vertices correspond to
individuals but only contain meta-information about them, and edges store
structural differences, or patches, between the individuals, is a viable
alternative to the straightforward implementation. Our experiments suggest that
significant, even asymptotic, improvements -- including execution of crossover
operators! -- can be achieved in terms of both memory usage and computational
costs.
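
The idea admits a compact illustration. Below is a minimal sketch, assuming bit-string individuals and patches stored as sparse index-to-value maps; the names (`PatchTreePopulation`, `materialize`) are illustrative rather than the paper's API, the sketch attaches each patch to the child node instead of an explicit edge, and it omits maintenance of the minimum-spanning-tree invariant and incremental fitness updates.

```python
class Node:
    """One individual: no genome, only a patch relative to its parent."""
    def __init__(self, parent=None, patch=None):
        self.parent = parent          # parent in the spanning tree
        self.patch = patch or {}      # {gene index: new value} vs. the parent

class PatchTreePopulation:
    """Stores one full genome (the root); every other individual is a patch."""
    def __init__(self, root_genome):
        self.root_genome = list(root_genome)
        self.root = Node()

    def add_child(self, parent, patch):
        """Register a new individual as a diff against an existing one."""
        return Node(parent, dict(patch))

    def materialize(self, node):
        """Rebuild a full genome by replaying patches on the root-to-node path."""
        path = []
        while node is not None:
            path.append(node)
            node = node.parent
        genome = list(self.root_genome)
        for n in reversed(path):
            for i, v in n.patch.items():
                genome[i] = v
        return genome

pop = PatchTreePopulation([0] * 8)
child = pop.add_child(pop.root, {2: 1, 5: 1})    # two-bit mutation stored as a patch
grandchild = pop.add_child(child, {2: 0})        # one-bit back-flip
assert pop.materialize(grandchild) == [0, 0, 0, 0, 0, 1, 0, 0]
```

For a fitness function such as OneMax, the patch also lets a child's fitness be derived from its parent's in time proportional to the patch size rather than the genome length, which is where the asymptotic savings come from.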
Related papers
- Improving genetic algorithms performance via deterministic population
shrinkage [9.334663477968027]
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run according to a predetermined schedule configured by a speed and a severity parameter.
Results show several speed-severity combinations where SVPS-GA preserves solution quality while improving performance, reducing the number of evaluations needed for success.
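
As a rough sketch of such a schedule (the exact semantics of speed and severity below are assumptions, not the paper's definitions):

```python
def svps_population_size(n0, gen, max_gen, speed=2.0, severity=2.0):
    """Population size at generation `gen`, shrinking from n0 down to n0/severity.

    Assumed semantics: `severity` is the total reduction factor, and
    `speed` >= 1 makes the shrinkage finish earlier in the run.
    """
    progress = min(1.0, speed * gen / max_gen)   # fraction of the shrinkage done
    n_final = n0 / severity
    return max(int(round(n0 - (n0 - n_final) * progress)), int(n_final))
```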
arXiv Detail & Related papers (2024-01-22T17:05:16Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adapt the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
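
The rejection-sampling and equivalent-form ideas can be caricatured in a few lines. The toy below samples tiny update-rule expressions and rejects candidates whose canonical form has already been seen; the grammar and names are invented for illustration, and the paper uses Monte Carlo tree search over a super-tree rather than i.i.d. sampling.

```python
import random

OPS = ["+", "*"]            # commutative ops, so argument order is irrelevant
LEAVES = ["g", "m", "lr"]   # toy operands: gradient, momentum, learning rate

def random_expr(depth=2):
    """Sample a small expression tree over the toy grammar."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def canonical(expr):
    """Sort arguments of commutative ops so equivalent forms collide."""
    if isinstance(expr, str):
        return expr
    op, a, b = expr
    ca, cb = sorted((canonical(a), canonical(b)), key=str)
    return (op, ca, cb)

def sample_unique(n):
    """Rejection sampling: discard expressions equivalent to earlier ones."""
    seen, out = set(), []
    while len(out) < n:
        e = random_expr()
        c = canonical(e)
        if c in seen:
            continue              # reject an equivalent form
        seen.add(c)
        out.append(e)
    return out
```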
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Careful seeding for the k-medoids algorithm with incremental k++ cluster construction [4.981260380070016]
An improved k-medoids algorithm (INCKM) was recently proposed to overcome the tendency of the classical k-medoids algorithm to become trapped in local optima.
We propose a novel incremental k-medoids algorithm (INCKPP) which dynamically increases the number of clusters.
Our algorithm overcomes the parameter selection problem of INCKM, improves clustering performance, and handles imbalanced datasets well.
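
A hedged sketch of the "incremental k++" seeding step: grow the medoid set one cluster at a time, choosing each new medoid with k-means++-style D^2 weighting. INCKPP's full procedure, including its imbalance handling and refinement, is more involved; the names here are illustrative.

```python
import numpy as np

def incremental_kpp_medoids(X, k, seed=0):
    """Grow a medoid set from 1 to k clusters with D^2-weighted seeding."""
    rng = np.random.default_rng(seed)
    n = len(X)
    medoids = [int(rng.integers(n))]             # first medoid: uniform choice
    for _ in range(1, k):
        # squared distance from each point to its nearest current medoid
        d2 = np.min(((X[:, None, :] - X[medoids][None, :, :]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()
        medoids.append(int(rng.choice(n, p=probs)))  # D^2-weighted new medoid
    return X[medoids]
```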
arXiv Detail & Related papers (2022-07-06T02:25:35Z)
- Efficient algorithms for implementing incremental proximal-point methods [0.3263412255491401]
In machine learning, model training algorithms observe a small portion of the training set in each computational step.
Several streams of research attempt to exploit more information about the cost functions than just their gradients via the well-known proximal operators.
We devise a novel algorithmic framework that exploits convex duality theory to achieve both algorithmic efficiency and software modularity when implementing proximal operators.
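
For intuition, the proximal-point step has a closed form for least-squares terms; this is a standard special case, not the paper's duality-based framework itself.

```python
import numpy as np

def prox_point_step(x, a, b, eta):
    """x+ = argmin_z 0.5*(a.z - b)**2 + (1/(2*eta)) * ||z - x||**2.

    Setting the gradient to zero gives the closed form below; the step
    uses the loss term itself, not merely its gradient at x.
    """
    residual = a @ x - b
    return x - (eta * residual / (1.0 + eta * (a @ a))) * a
```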
arXiv Detail & Related papers (2022-05-03T12:43:26Z)
- Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures [49.73889315176884]
Conditional gradient methods (CGM) are widely used in modern machine learning.
Most efforts focus on reducing the number of iterations as a means to reduce the overall running time.
We give the first variants of many fundamental optimization algorithms whose cost per iteration is sublinear in the number of parameters.
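
The connection to maximum inner product search (MaxIP) is easy to see in conditional gradient (Frank-Wolfe) form: each iteration's linear minimization oracle is an inner-product argmax over the atoms, which the paper answers with a sublinear data structure. The sketch below brute-forces that query for clarity.

```python
import numpy as np

def frank_wolfe(grad, atoms, x0, steps=100):
    """Minimize f over conv(atoms); the LMO is an inner-product argmax."""
    x = x0.astype(float).copy()
    for t in range(steps):
        g = grad(x)
        # LMO: argmin_a <a, g> == argmax_a <a, -g>, i.e. a MaxIP query.
        s = atoms[np.argmax(atoms @ (-g))]
        x += 2.0 / (t + 2.0) * (s - x)   # classic step size 2/(t+2)
    return x

# Example: project y onto the probability simplex spanned by unit atoms.
atoms = np.eye(3)
y = np.array([0.2, 0.5, 0.3])
x = frank_wolfe(lambda x: x - y, atoms, atoms[0])
```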
arXiv Detail & Related papers (2021-11-30T05:40:14Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handling streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
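
As a concrete instance, here is one OGD step under a pairwise hinge loss (an AUC-style surrogate); the specific loss and the convention of pairing each incoming example with one buffered earlier example are illustrative assumptions.

```python
import numpy as np

def ogd_pairwise_step(w, x_new, y_new, x_old, y_old, lr=0.1):
    """One OGD step on the pairwise hinge max(0, 1 - w.(x_pos - x_neg))."""
    if y_new == y_old:
        return w                  # the surrogate needs one positive, one negative
    x_pos, x_neg = (x_new, x_old) if y_new > y_old else (x_old, x_new)
    diff = x_pos - x_neg
    if w @ diff < 1.0:            # hinge active: push the pair's scores apart
        w = w + lr * diff
    return w
```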
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
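
Stripped of the learning component, the underlying search is a best-first (A*-style) tree search with a pluggable heuristic `h`; in HATS, `h` comes from a trained network and the frontier is memory-bounded, both of which this generic skeleton omits.

```python
import heapq

def best_first_search(start, expand, is_goal, h):
    """A*-style tree search; `h` may be hand-crafted or learned."""
    tie = 0                                    # tie-breaker so the heap never compares nodes
    frontier = [(h(start), 0.0, tie, start)]   # entries: (g + h, g, tie, node)
    while frontier:
        _, g, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node, g
        for child, step_cost in expand(node):
            tie += 1
            g2 = g + step_cost
            heapq.heappush(frontier, (g2 + h(child), g2, tie, child))
    return None, float("inf")
```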
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Genetic Programming is Naturally Suited to Evolve Bagging Ensembles [0.0]
We show that minor changes to fitness evaluation and selection are sufficient to make a simple and otherwise-traditional GP algorithm evolve bagging ensembles efficiently.
Our algorithm fares very well against state-of-the-art ensemble and non-ensemble GP algorithms.
arXiv Detail & Related papers (2020-09-13T16:28:11Z)
- Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
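
One plausible reading of the operator (the split between deterministic and random coordinates below is an assumption, not the paper's exact definition): keep the k1 largest-magnitude coordinates, then add k2 uniformly random coordinates from the remainder, mixing top-k's bias toward large entries with random-k's coverage.

```python
import numpy as np

def random_top_k(g, k1, k2, seed=0):
    """Sparsify g: k1 largest-magnitude entries plus k2 random others."""
    rng = np.random.default_rng(seed)
    out = np.zeros_like(g)
    top = np.argsort(np.abs(g))[-k1:]             # k1 largest-magnitude indices
    out[top] = g[top]
    rest = np.setdiff1d(np.arange(g.size), top)
    rnd = rng.choice(rest, size=min(k2, rest.size), replace=False)
    out[rnd] = g[rnd]
    return out
```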
arXiv Detail & Related papers (2020-01-27T08:23:58Z)