Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations
- URL: http://arxiv.org/abs/2301.08713v1
- Date: Fri, 20 Jan 2023 18:17:34 GMT
- Title: Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations
- Authors: Oskar Taubert, Marie Weiel, Daniel Coquelin, Anis Farshian, Charlotte Debus, Alexander Schug, Achim Streit and Markus Götz
- Abstract summary: Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
- Score: 50.591267188664666
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present Propulate, an evolutionary optimization algorithm and software
package for global optimization and in particular hyperparameter search. For
efficient use of HPC resources, Propulate omits the synchronization after each
generation as done in conventional genetic algorithms. Instead, it steers the
search with the complete population present at the time of breeding new
individuals. We provide an MPI-based implementation of our algorithm, which
features variants of selection, mutation, crossover, and migration and is easy
to extend with custom functionality. We compare Propulate to the established
optimization tool Optuna. We find that Propulate is up to three orders of
magnitude faster without sacrificing solution accuracy, demonstrating the
efficiency and efficacy of our lazy synchronization approach. Code and
documentation are available at https://github.com/Helmholtz-AI-Energy/propulate
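The key idea is that workers never wait at a generation barrier: each one breeds new individuals from whatever evaluated individuals it has received so far and shares its results asynchronously. Below is a minimal sketch of that lazy-synchronization pattern using mpi4py; the toy objective, the operator choices, and all function names are illustrative assumptions, not Propulate's actual API.

```python
# Hypothetical sketch of asynchronous ("lazy") propagation with mpi4py.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def loss(x):
    # Toy objective (sphere function); stands in for, e.g., a validation loss.
    return sum(xi * xi for xi in x)

def breed(population, dim=2):
    # Breed from whatever evaluated individuals this worker has seen so far;
    # there is no generation barrier anywhere in the loop.
    if len(population) < 2:
        return [random.uniform(-5, 5) for _ in range(dim)]
    elite = sorted(population, key=lambda ind: ind[1])[:10]
    pa, pb = random.sample(elite, 2)
    return [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(pa[0], pb[0])]

population = []  # list of (genes, loss) pairs; one local view per worker
for _ in range(100):
    genes = breed(population)
    fitness = loss(genes)
    population.append((genes, fitness))
    for dest in range(size):
        if dest != rank:
            # Fire-and-forget send; a real implementation would track requests.
            comm.isend((genes, fitness), dest=dest, tag=0)
    # Absorb whatever results happen to have arrived; never block or wait.
    while comm.iprobe(source=MPI.ANY_SOURCE, tag=0):
        population.append(comm.recv(source=MPI.ANY_SOURCE, tag=0))

best = min(population, key=lambda ind: ind[1])
print(f"[rank {rank}] best loss: {best[1]:.4f}")
```

Launched with, e.g., `mpirun -n 4 python lazy_ga.py` (file name hypothetical), each rank keeps breeding at full speed while results propagate in the background.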
Related papers
- Enhancing Machine Learning Model Performance with Hyper Parameter Optimization: A Comparative Study [0.0]
One of the most critical issues in machine learning is the selection of appropriate hyperparameters for training models.
Hyperparameter optimization (HPO) has recently become a popular focus of artificial intelligence research.
In this study, classical methods such as grid search, random search, and Bayesian optimization, as well as population-based algorithms such as genetic algorithms and particle swarm optimization, are discussed.
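As a concrete illustration of two of the classical methods named above, the sketch below gives grid search and random search the same budget of 16 trials on a made-up validation-error surface; the objective and search ranges are assumptions for the example.

```python
import itertools
import random

def val_error(lr, depth):
    # Stand-in for training a model and measuring its validation error.
    return (lr - 0.01) ** 2 + 0.001 * (depth - 6) ** 2

# Grid search: exhaustively evaluate a fixed 4 x 4 lattice of configurations.
grid = list(itertools.product([1e-4, 1e-3, 1e-2, 1e-1], [2, 4, 6, 8]))
best_grid = min(grid, key=lambda cfg: val_error(*cfg))

# Random search: the same budget of 16 trials, drawn from the full ranges.
random.seed(0)
draws = [(10 ** random.uniform(-4, -1), random.randint(2, 8)) for _ in range(16)]
best_rand = min(draws, key=lambda cfg: val_error(*cfg))

print("grid best:  ", best_grid, val_error(*best_grid))
print("random best:", best_rand, val_error(*best_rand))
```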
arXiv Detail & Related papers (2023-02-14T10:12:10Z)
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual and thereby accelerate the convergence of optimization.
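A minimal sketch of what such a surrogate-assisted step could look like, assuming a scikit-learn Gaussian process and a toy 1-D fitness: with probability $\epsilon$ the next candidate is random (exploration); otherwise it is the candidate the surrogate predicts to be best (the estimated elite).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x ** 2   # toy fitness (to be minimized)

X = rng.uniform(-3, 3, size=(8, 1))          # individuals evaluated so far
y = f(X).ravel()

gp = GaussianProcessRegressor().fit(X, y)    # surrogate model of the fitness
candidates = np.linspace(-3, 3, 200).reshape(-1, 1)

epsilon = 0.1
if rng.random() < epsilon:                   # explore: pick a random candidate
    elite = candidates[rng.integers(len(candidates))]
else:                                        # exploit: GP-predicted best point
    elite = candidates[np.argmin(gp.predict(candidates))]
print("estimated elite individual:", elite)
```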
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
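A toy illustration of the rejection idea, under stated assumptions: candidate update expressions are sampled at random, canonicalized with sympy, and rejected when they simplify to an already-seen form. This only sketches equivalent-form detection, not the paper's super-tree search.

```python
import random
import sympy as sp

g, m = sp.symbols("g m")  # gradient and momentum buffer of a toy optimizer
ops = [g, m, g + m, g - m, 2 * g,
       sp.Add(g, g, evaluate=False)]  # g + g: the 2*g update in disguise

seen, kept = set(), []
while len(kept) < 4:
    expr = random.choice(ops)          # sample a candidate update rule
    canon = sp.simplify(expr)          # canonical form: g + g -> 2*g
    if canon in seen:
        continue                       # reject: equivalent form already tried
    seen.add(canon)
    kept.append(canon)
print("distinct update rules kept:", kept)
```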
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
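A hedged sketch of the underlying text-interface idea: an HPO history is serialized to text so that a sequence model can complete it with the next suggestion. The serialization format below is an assumption for illustration, not OptFormer's actual encoding.

```python
# Each completed trial becomes a text record; a Transformer trained on many
# such records could then be prompted to emit the next hyperparameters.
trials = [({"lr": 1e-3, "layers": 2}, 0.81),
          ({"lr": 1e-2, "layers": 4}, 0.87)]

prompt = " | ".join(
    f"lr={params['lr']:g}, layers={params['layers']} -> acc={acc:.2f}"
    for params, acc in trials
) + " | lr="

print(prompt)  # a sequence model would complete this with its next suggestion
```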
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Automatic tuning of hyper-parameters of reinforcement learning algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent is dependent on the setting of many hyperparameters.
In this work, a novel approach for autonomous hyperparameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to other manual tweaking and optimization-based approaches.
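As a minimal sketch of the Bayesian-optimization half (the behavioral-cloning component is omitted), one could tune a single RL hyperparameter with scikit-optimize; the stand-in objective below replaces actual agent training and is purely an assumption.

```python
from skopt import gp_minimize

def negative_return(params):
    # BO minimizes, so negate the (toy) agent return; a real objective would
    # train an RL agent with this learning rate and report its performance.
    (lr,) = params
    return -(1.0 - abs(lr - 3e-4) * 1000)

res = gp_minimize(negative_return, [(1e-5, 1e-2)], n_calls=15, random_state=0)
print("suggested learning rate:", res.x[0])
```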
arXiv Detail & Related papers (2021-12-15T13:10:44Z)
- Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z)
- Automated Configuration of Genetic Algorithms by Tuning for Anytime Performance [4.33419118449588]
We show that it might be preferable to use anytime performance measures for the configuration task.
Tuning for expected running time is much more sensitive with respect to the budget allocated to the target algorithms.
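The distinction can be made concrete on a fake best-so-far history: a fixed-target measure counts evaluations until a target is reached, while an anytime measure scores the whole curve. The numbers below are invented for illustration.

```python
# Invented best-so-far trajectory of a GA run, one value per evaluation.
history = [90, 70, 55, 41, 30, 22, 15, 9, 5, 2]

target = 10
hit = next(t + 1 for t, v in enumerate(history) if v <= target)
auc = sum(history) / len(history)  # smaller area = better anytime behavior

print(f"evaluations to reach target {target}: {hit}")
print(f"area under best-so-far curve (anytime score): {auc:.1f}")
```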
arXiv Detail & Related papers (2021-06-11T10:44:51Z)
- Optimal Static Mutation Strength Distributions for the $(1+\lambda)$ Evolutionary Algorithm on OneMax [1.0965065178451106]
We show that, for large enough population sizes, such optimal distributions may be surprisingly complicated and counter-intuitive.
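For reference, a textbook $(1+\lambda)$ EA on OneMax with the standard static mutation rate $1/n$ looks as follows; the paper's question is which distribution over such mutation strengths is optimal, which this sketch does not implement.

```python
import random

n, lam = 100, 8                                    # problem size, offspring count
parent = [random.randint(0, 1) for _ in range(n)]
onemax = sum                                       # fitness: number of one-bits

evals = 0
while onemax(parent) < n:
    offspring = []
    for _ in range(lam):
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        child = [1 - b if random.random() < 1 / n else b for b in parent]
        offspring.append(child)
        evals += 1
    best = max(offspring, key=onemax)
    if onemax(best) >= onemax(parent):             # elitist (1+lambda) selection
        parent = best
print("OneMax solved in", evals, "evaluations")
```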
arXiv Detail & Related papers (2021-02-09T16:56:25Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
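A generic sketch of that block-wise idea, with a quadratic stand-in for the circuit's energy and scipy in place of a quantum stack; all names here are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
target = rng.normal(size=12)

def energy(theta):
    # Quadratic stand-in for a circuit's measured expectation value.
    return float(np.sum((theta - target) ** 2))

theta = np.zeros(12)
# Optimize 4 blocks of 3 parameters each, one sub-problem at a time,
# instead of all 12 parameters in a single variational run.
for block in np.array_split(np.arange(12), 4):
    def sub(x, block=block):
        t = theta.copy()
        t[block] = x                    # vary only this block's parameters
        return energy(t)
    theta[block] = minimize(sub, theta[block], method="COBYLA").x

print("final energy:", energy(theta))
```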
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
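For context, the DE/rand/1/bin baseline that such improvements build on can be sketched in a few lines; the control parameters F and CR and the sphere objective are illustrative choices.

```python
import random

F, CR, NP, D = 0.8, 0.9, 20, 5                # mutation factor, crossover rate,
sphere = lambda x: sum(v * v for v in x)      # population size, dimensionality
pop = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(NP)]

for _ in range(200):                          # generations
    for i in range(NP):
        # DE/rand/1: mutant from three distinct randomly chosen members.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(D)           # guarantee one mutated coordinate
        trial = [a[j] + F * (b[j] - c[j])
                 if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(D)]
        if sphere(trial) <= sphere(pop[i]):   # greedy one-to-one selection
            pop[i] = trial
print("best objective value:", min(map(sphere, pop)))
```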
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.