evosax: JAX-based Evolution Strategies
- URL: http://arxiv.org/abs/2212.04180v1
- Date: Thu, 8 Dec 2022 10:34:42 GMT
- Title: evosax: JAX-based Evolution Strategies
- Authors: Robert Tjarko Lange
- Abstract summary: We release evosax: a JAX-based library of evolutionary optimization algorithms.
evosax implements 30 evolutionary optimization algorithms, including finite-difference-based and estimation-of-distribution evolution strategies as well as various genetic algorithms.
It is designed in a modular fashion and allows for flexible usage via a simple ask-evaluate-tell API.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The deep learning revolution has been greatly accelerated by the 'hardware lottery': recent advances in hardware accelerators and compilers paved the way for large-scale batch gradient optimization. Evolutionary optimization, on the other hand, has mainly relied on CPU parallelism, e.g. Dask scheduling on distributed multi-host infrastructure. Here we argue that modern evolutionary computation can also benefit significantly from the massive computational throughput of GPUs and TPUs. To better harness these resources and to enable the next generation of black-box optimization algorithms, we release evosax: a JAX-based library of evolution strategies which allows researchers to leverage powerful function transformations such as just-in-time compilation, automatic vectorization, and hardware parallelization. evosax implements 30 evolutionary optimization algorithms, including finite-difference-based and estimation-of-distribution evolution strategies as well as various genetic algorithms. Every algorithm can be executed directly on hardware accelerators and automatically vectorized or parallelized across devices with a single line of code. The library is designed in a modular fashion and allows for flexible usage via a simple ask-evaluate-tell API. We thereby hope to facilitate a new wave of scalable evolutionary optimization algorithms.
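As a concrete illustration of the ask-evaluate-tell loop and the one-line vectorization described above, here is a minimal sketch in JAX. The strategy class CMA_ES and the default_params/initialize/ask/tell calls follow the evosax README at the time of release and may differ in later versions; the quadratic fitness function is a toy stand-in.

```python
import jax
import jax.numpy as jnp
from evosax import CMA_ES  # strategy class name as documented in the evosax README

# Toy fitness function: a simple quadratic bowl (evosax minimizes by default).
def fitness(x):
    return jnp.sum(x ** 2)

# One line of jax.vmap turns the scalar fitness into a population-wide evaluation.
batch_fitness = jax.jit(jax.vmap(fitness))

rng = jax.random.PRNGKey(0)
strategy = CMA_ES(popsize=32, num_dims=2)
es_params = strategy.default_params
state = strategy.initialize(rng, es_params)

for generation in range(100):
    rng, rng_ask = jax.random.split(rng)
    # ask: sample a population of candidate solutions from the search distribution
    x, state = strategy.ask(rng_ask, state, es_params)
    # evaluate: score all candidates at once on the accelerator
    fit = batch_fitness(x)
    # tell: update the search distribution with the observed fitness values
    state = strategy.tell(x, fit, state, es_params)
```

Because ask and tell are pure functions of an explicit state, the whole generation step can additionally be wrapped in jax.jit or mapped across devices with jax.pmap, which is where the single-line vectorization and parallelization mentioned in the abstract comes from.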
Related papers
- Evolution Transformer: In-Context Evolutionary Optimization [6.873777465945062]
We introduce Evolution Transformer, a causal Transformer architecture, which can flexibly characterize a family of Evolution Strategies.
We train the model weights using Evolutionary Algorithm Distillation, a technique for supervised optimization of sequence models.
We analyze the resulting properties of the Evolution Transformer and propose a technique to train it fully self-referentially.
arXiv Detail & Related papers (2024-03-05T14:04:13Z)
- Guided Evolution with Binary Discriminators for ML Program Search [64.44893463120584]
We propose guiding evolution with a binary discriminator, trained online to distinguish which program is better given a pair of programs.
We demonstrate that our method can speed up evolution across a set of diverse problems, including a 3.7x speedup on symbolic search for ML optimizers and a 4x speedup for RL loss functions.
arXiv Detail & Related papers (2024-02-08T16:59:24Z)
- EvoTorch: Scalable Evolutionary Computation in Python [1.8514314381314885]
EvoTorch is an evolutionary computation library designed to work with high-dimensional optimization problems.
EvoTorch is based on and works seamlessly with the PyTorch library, and therefore allows users to define their optimization problems using a well-known API.
arXiv Detail & Related papers (2023-02-24T12:37:45Z)
- EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation [40.71953374838183]
EvoX is a computing framework tailored for automated, distributed, and heterogeneous execution of EC algorithms.
At the core of EvoX lies a unique programming model to streamline the development of parallelizable EC algorithms.
EvoX offers comprehensive support for a diverse set of benchmark problems, ranging from dozens of numerical test functions to hundreds of reinforcement learning tasks.
arXiv Detail & Related papers (2023-01-29T15:00:16Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- EvoJAX: Hardware-Accelerated Neuroevolution [11.835051811090672]
We present EvoJAX, a hardware-accelerated neuroevolution toolkit.
It enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPUs/GPUs (a generic JAX sketch of this kind of population-level parallelism follows the list below).
It can significantly shorten the iteration cycle of evolutionary computation experiments.
arXiv Detail & Related papers (2022-02-10T13:06:47Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid approach, combining compiler-generated code with minimal library use, results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)
- Heterogeneous CPU+GPU Stochastic Gradient Descent Algorithms [1.3249453757295084]
We study training algorithms for deep learning on heterogeneous CPU+GPU architectures.
Our two-fold objective -- maximize convergence rate and resource utilization simultaneously -- makes the problem challenging.
We show that our implementations of these algorithms achieve both faster convergence and higher resource utilization on several real datasets.
arXiv Detail & Related papers (2020-04-19T05:21:20Z)
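The population-level parallelism described in the EvoJAX entry above can be illustrated with plain JAX (a generic sketch, not EvoJAX's actual API): a single jax.vmap evaluates an entire population of small MLP policies in parallel on the accelerator.

```python
import jax
import jax.numpy as jnp

# A small MLP "policy"; its parameters live in a plain pytree (dict of arrays).
def init_policy(rng, obs_dim=8, hidden=32, act_dim=2):
    k1, k2 = jax.random.split(rng)
    return {
        "W1": jax.random.normal(k1, (obs_dim, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, act_dim)) * 0.1,
        "b2": jnp.zeros(act_dim),
    }

def policy_apply(params, obs):
    h = jnp.tanh(obs @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Stand-in fitness: negative error of the policy's actions against a fixed target.
def episode_return(params, obs_batch):
    actions = policy_apply(params, obs_batch)
    return -jnp.mean((actions - 1.0) ** 2)

popsize = 256
pop_rngs = jax.random.split(jax.random.PRNGKey(0), popsize)

# Stack a whole population of policies into one pytree (leading axis = population).
population = jax.vmap(init_policy)(pop_rngs)
obs_batch = jax.random.normal(jax.random.PRNGKey(1), (128, 8))

# One vmap evaluates every population member in parallel; jit compiles it for the accelerator.
returns = jax.jit(jax.vmap(episode_return, in_axes=(0, None)))(population, obs_batch)
print(returns.shape)  # (256,)
```

On multiple devices, the same in_axes pattern carries over to jax.pmap, which is how toolkits in this family spread a population across several TPUs/GPUs.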
This list is automatically generated from the titles and abstracts of the papers in this site.