EvoJAX: Hardware-Accelerated Neuroevolution
- URL: http://arxiv.org/abs/2202.05008v2
- Date: Tue, 5 Apr 2022 21:01:06 GMT
- Title: EvoJAX: Hardware-Accelerated Neuroevolution
- Authors: Yujin Tang, Yingtao Tian, David Ha
- Abstract summary: We present EvoJAX, a hardware-accelerated neuroevolution toolkit.
It enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPU/GPUs.
It can significantly shorten the iteration cycle of evolutionary computation experiments.
- Score: 11.835051811090672
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evolutionary computation has been shown to be a highly effective method for
training neural networks, particularly when employed at scale on CPU clusters.
Recent work has also showcased its effectiveness on hardware accelerators,
such as GPUs, but so far such demonstrations are tailored for very specific
tasks, limiting applicability to other domains. We present EvoJAX, a scalable,
general purpose, hardware-accelerated neuroevolution toolkit. Building on top
of the JAX library, our toolkit enables neuroevolution algorithms to work with
neural networks running in parallel across multiple TPU/GPUs. EvoJAX achieves
very high performance by implementing the evolution algorithm, neural network
and task all in NumPy, which is compiled just-in-time to run on accelerators.
We provide extensible examples of EvoJAX for a wide range of tasks, including
supervised learning, reinforcement learning and generative art. Since EvoJAX
can find solutions to most of these tasks within minutes on a single
accelerator, compared to hours or days when using CPUs, our toolkit can
significantly shorten the iteration cycle of evolutionary computation
experiments. EvoJAX is available at https://github.com/google/evojax
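The core pattern behind this speedup is easy to sketch. Below is a minimal, illustrative example (our own toy code, not EvoJAX's actual API) of an evolution strategy whose population evaluation is vmapped and JIT-compiled with JAX, so each update step runs entirely on the accelerator:

```python
import jax
import jax.numpy as jnp

# Illustrative sketch of the pattern the abstract describes: the policy,
# task, and ES update are all written with jax.numpy, so evaluating the
# whole population can be vmapped and JIT-compiled onto a TPU/GPU.
# All names here are ours, not EvoJAX's API.

def policy(params, obs):
    # A tiny linear policy; params is a flat weight matrix.
    return jnp.tanh(obs @ params)

def fitness(params):
    # Toy stand-in for a task rollout: score a policy on fixed observations.
    obs = jnp.ones((16, 4))
    actions = policy(params, obs)
    return -jnp.sum(actions ** 2)  # maximize (here: push actions toward 0)

@jax.jit
def es_step(key, mean, sigma=0.1, pop_size=256, lr=0.05):
    # Simple ES step: sample a population, evaluate in parallel, update mean.
    noise = jax.random.normal(key, (pop_size,) + mean.shape)
    pop = mean + sigma * noise
    scores = jax.vmap(fitness)(pop)          # whole population in one pass
    weights = (scores - scores.mean()) / (scores.std() + 1e-8)
    grad = jnp.tensordot(weights, noise, axes=1) / pop_size
    return mean + lr * sigma * grad

key = jax.random.PRNGKey(0)
mean = jnp.zeros((4, 2))
for _ in range(100):
    key, sub = jax.random.split(key)
    mean = es_step(sub, mean)
```

Because fitness is a pure function of the parameters, jax.vmap evaluates the whole population in one batched pass, and jax.jit fuses sampling, evaluation, and the update into a single compiled program.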
Related papers
- Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks [0.08965418284317034]
Spiking Neural Networks (SNNs) promise improved energy efficiency through a reduced, low-power hardware footprint.
This paper introduces Spyx, a new and lightweight SNN simulation and optimization library designed in JAX.
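For a flavor of what JIT-compiled SNN simulation looks like, here is a minimal leaky integrate-and-fire step written in plain JAX (our own sketch; the names and dynamics are illustrative, not Spyx's API):

```python
import jax
import jax.numpy as jnp

# A minimal leaky integrate-and-fire (LIF) step, illustrating the kind of
# SNN dynamics a JAX library like Spyx can JIT-compile end to end.
def lif_step(v, x, beta=0.9, threshold=1.0):
    v = beta * v + x                              # leaky membrane integration
    spike = (v > threshold).astype(jnp.float32)   # fire when over threshold
    v = v - spike * threshold                     # soft reset after a spike
    return v, spike

@jax.jit
def run_lif(inputs):
    # inputs: (timesteps, n_neurons) input currents
    v0 = jnp.zeros(inputs.shape[1])
    _, spikes = jax.lax.scan(lif_step, v0, inputs)
    return spikes

key = jax.random.PRNGKey(0)
currents = jax.random.uniform(key, (100, 8))      # 100 steps, 8 neurons
print(run_lif(currents).sum(axis=0))              # spike counts per neuron
```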
arXiv Detail & Related papers (2024-02-29T09:46:44Z)
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- JaxMARL: Multi-Agent RL Environments and Algorithms in JAX [105.343918678781]
We present JaxMARL, the first open-source, Python-based library that combines GPU-enabled efficiency with support for a large number of commonly used MARL environments.
Our experiments show that, in terms of wall clock time, our JAX-based training pipeline is around 14 times faster than existing approaches.
We also introduce and benchmark SMAX, a JAX-based approximate reimplementation of the popular StarCraft Multi-Agent Challenge.
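Much of such a speedup comes from the environments themselves being pure JAX functions, so thousands of them can be stepped in parallel. A toy sketch of that pattern (our own code, not JaxMARL's API):

```python
import jax
import jax.numpy as jnp

# Illustrative sketch of why JAX-native environments are fast: a pure-function
# env step is vmapped over thousands of parallel environments and the whole
# rollout is JIT-compiled on the accelerator. Names here are hypothetical.

def env_step(state, actions):
    # Toy 2-agent environment: state drifts by the agents' joint action.
    new_state = state + 0.1 * actions.sum(axis=-1, keepdims=True)
    reward = -jnp.abs(new_state).squeeze(-1)      # both agents share a reward
    return new_state, reward

@jax.jit
def rollout(states, key, steps=100):
    def body(states, key):
        actions = jax.random.uniform(
            key, states.shape[:-1] + (2,), minval=-1.0, maxval=1.0)
        states, rewards = jax.vmap(env_step)(states, actions)
        return states, rewards
    keys = jax.random.split(key, steps)
    _, rewards = jax.lax.scan(body, states, keys)
    return rewards.mean()

states = jnp.zeros((4096, 1))                     # 4096 envs in parallel
print(rollout(states, jax.random.PRNGKey(0)))
```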
arXiv Detail & Related papers (2023-11-16T18:58:43Z)
- EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation [40.71953374838183]
EvoX is a computing framework tailored for automated, distributed, and heterogeneous execution of EC algorithms.
At the core of EvoX lies a unique programming model to streamline the development of parallelizable EC algorithms.
EvoX offers comprehensive support for a diverse set of benchmark problems, ranging from dozens of numerical test functions to hundreds of reinforcement learning tasks.
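A hypothetical sketch of the kind of device-parallel evaluation such a framework automates (our own code, not EvoX's programming model), sharding a population across all local devices with jax.pmap:

```python
import jax
import jax.numpy as jnp

# Shard a population across local devices and evaluate each shard in
# parallel. The benchmark and layout are illustrative only.

def sphere(x):
    # Classic numerical test function: minimize the sum of squares.
    return jnp.sum(x ** 2)

n_dev = jax.local_device_count()
pop_per_dev, dim = 128, 10

@jax.pmap
def eval_shard(shard):
    # shard: (pop_per_dev, dim) slice of the population on one device.
    return jax.vmap(sphere)(shard)

key = jax.random.PRNGKey(0)
pop = jax.random.normal(key, (n_dev, pop_per_dev, dim))
fitnesses = eval_shard(pop)   # (n_dev, pop_per_dev), computed device-parallel
print(fitnesses.reshape(-1).min())
```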
arXiv Detail & Related papers (2023-01-29T15:00:16Z)
- evosax: JAX-based Evolution Strategies [0.0]
We release evosax: a JAX-based library of evolutionary optimization algorithms.
evosax implements 30 evolutionary optimization algorithms, including finite-difference-based and estimation-of-distribution evolution strategies, as well as various genetic algorithms.
It is designed in a modular fashion and allows for flexible usage via a simple ask-evaluate-tell API.
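The ask-evaluate-tell pattern is easy to illustrate. Below is a generic sketch with a toy Gaussian strategy (our own class, not evosax's exact classes or signatures):

```python
import jax
import jax.numpy as jnp

# Generic ask-evaluate-tell loop: the strategy proposes candidates (ask),
# the user scores them (evaluate), and the strategy updates its search
# distribution (tell). This toy strategy is ours, not evosax's API.

class GaussianES:
    def __init__(self, num_dims, pop_size=64, sigma=0.1, lr=0.1):
        self.num_dims, self.pop_size = num_dims, pop_size
        self.sigma, self.lr = sigma, lr

    def initialize(self):
        return jnp.zeros(self.num_dims)            # state = distribution mean

    def ask(self, key, mean):
        eps = jax.random.normal(key, (self.pop_size, self.num_dims))
        return mean + self.sigma * eps             # candidate solutions

    def tell(self, mean, candidates, fitness):
        # Move the mean toward fitness-weighted candidates (maximization).
        w = jax.nn.softmax(fitness)
        return mean + self.lr * (w @ candidates - mean)

strategy = GaussianES(num_dims=5)
mean = strategy.initialize()
key = jax.random.PRNGKey(0)
for _ in range(200):
    key, sub = jax.random.split(key)
    x = strategy.ask(sub, mean)                    # ask
    fitness = -jnp.sum(x ** 2, axis=1)             # evaluate: maximize -||x||^2
    mean = strategy.tell(mean, x, fitness)         # tell
```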
arXiv Detail & Related papers (2022-12-08T10:34:42Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Efficient Visual Tracking via Hierarchical Cross-Attention Transformer [82.92565582642847]
We present an efficient tracking method via a hierarchical cross-attention transformer named HCAT.
Our model runs at about 195 fps on GPU, 45 fps on CPU, and 55 fps on the NVIDIA Jetson AGX Xavier edge AI platform.
arXiv Detail & Related papers (2022-03-25T09:45:27Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Accelerating SLIDE Deep Learning on Modern CPUs: Vectorization, Quantizations, Memory Optimizations, and More [26.748770505062378]
SLIDE is a C++ implementation of sparse, hash-table-based back-propagation.
We show how SLIDE's computations allow a unique opportunity for vectorization via AVX-512 (Advanced Vector Extensions).
Our experiments are focused on large (hundreds of millions of parameters) recommendation and NLP models.
arXiv Detail & Related papers (2021-03-06T02:13:43Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Towards High Performance Java-based Deep Learning Frameworks [0.22940141855172028]
Modern cloud services have set the demand for fast and efficient data processing.
This demand is common among numerous application domains, such as deep learning, data mining, and computer vision.
In this paper, we employ TornadoVM, a state-of-the-art programming framework, to transparently accelerate Deep Netts, a Java-based deep learning framework.
arXiv Detail & Related papers (2020-01-13T13:03:13Z)