Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration
- URL: http://arxiv.org/abs/2404.01817v3
- Date: Thu, 11 Apr 2024 11:30:47 GMT
- Title: Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration
- Authors: Lishuang Wang, Mengfei Zhao, Enyu Liu, Kebin Sun, Ran Cheng
- Abstract summary: The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution.
This paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations into uniformly shaped tensors.
The TensorNEAT library supports various benchmark environments including Gym, Brax, and gymnax.
- Score: 6.784939343811732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness is derived from initiating with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains an impediment, limiting its scalability potential. In response, this paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations into uniformly shaped tensors for computation. This advancement facilitates the execution of the NEAT algorithm in a parallelized manner across the entire population. Furthermore, we develop TensorNEAT, a library that implements the tensorized NEAT algorithm and its variants, such as CPPN and HyperNEAT. Building upon JAX, TensorNEAT promotes efficient parallel computations via automated function vectorization and hardware acceleration. Moreover, the TensorNEAT library supports various benchmark environments including Gym, Brax, and gymnax. Through evaluations across a spectrum of robotics control environments in Brax, TensorNEAT achieves up to 500x speedups compared to existing implementations such as NEAT-Python. Source code is available at: https://github.com/EMI-Group/tensorneat.
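To make the tensorization idea concrete, here is a minimal sketch, not TensorNEAT's actual API: every genome is padded to a common maximum node count, so the whole population becomes one uniformly shaped (pop, N, N) weight tensor that jax.vmap can evaluate in parallel. The names (N_NODES, N_INPUTS, forward), the zero-padding scheme, and the synchronous propagation loop are all illustrative assumptions.

```python
# Minimal sketch of tensorized population evaluation (assumed layout,
# not TensorNEAT's real data structures).
import jax
import jax.numpy as jnp

N_NODES = 8    # assumed maximum nodes per genome after padding
N_INPUTS = 3   # assumed: first nodes are inputs, last node is the output

def forward(w, x):
    """Evaluate one padded genome; w is (N_NODES, N_NODES), absent edges are 0."""
    values = jnp.zeros(N_NODES).at[:N_INPUTS].set(x)

    def step(values, _):
        new = jnp.tanh(values @ w)          # one synchronous propagation step
        new = new.at[:N_INPUTS].set(x)      # keep input nodes clamped
        return new, None

    # N steps settle any feed-forward DAG of depth <= N_NODES.
    values, _ = jax.lax.scan(step, values, None, length=N_NODES)
    return values[-1]                        # read the output node

# One uniformly shaped tensor holds the whole population.
key = jax.random.PRNGKey(0)
pop_w = 0.1 * jax.random.normal(key, (1024, N_NODES, N_NODES))
x = jnp.ones(N_INPUTS)

# vmap turns the per-genome forward pass into one batched GPU operation.
outputs = jax.vmap(forward, in_axes=(0, None))(pop_w, x)
print(outputs.shape)  # (1024,)
```

The padding is the crux: jax.vmap requires every batch element to share one shape, so once the variable topologies are encoded as equal-sized tensors, a single compiled kernel evaluates the entire population in one call.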
Related papers
- Slax: A Composable JAX Library for Rapid and Flexible Prototyping of Spiking Neural Networks [0.19427883580687189]
We introduce Slax, a JAX-based library designed to accelerate SNN algorithm design.
Slax provides optimized implementations of diverse training algorithms, allowing direct performance comparison.
arXiv Detail & Related papers (2024-04-08T18:15:13Z)
- Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks [0.0]
Spiking neural networks can be simulated energy efficiently on neuromorphic hardware platforms.
We propose a novel neuromorphic algorithm, the Spiking Forward Direct Feedback Alignment (SFDFA) algorithm.
arXiv Detail & Related papers (2024-02-06T09:07:12Z)
- Neural Functional Transformers [99.98750156515437]
This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers called neural functional Transformers (NFTs).
NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which have exhibited remarkable success across multiple domains.
We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant representations from the weights of implicit neural representations (INRs).
arXiv Detail & Related papers (2023-05-22T23:38:27Z)
- Tensor Slicing and Optimization for Multicore NPUs [2.670309629218727]
This paper proposes a compiler optimization pass for Multicore NPUs, called Tensor Slicing Optimization (TSO).
TSO identifies the best tensor slicing that minimizes execution time for a set of CNN models.
arXiv Detail & Related papers (2023-04-06T12:03:03Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets (a random-feature sketch follows this list).
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- NeuroEvo: A Cloud-based Platform for Automated Design and Training of Neural Networks using Evolutionary and Particle Swarm Algorithms [0.0]
This paper introduces a new web platform, NeuroEvo, that allows users to interactively design and train neural network classifiers.
The classification problem and training data are provided by the user and, upon completion of the training process, the best classifier is made available to download and implement in Python, Java, and JavaScript.
arXiv Detail & Related papers (2022-10-01T14:10:43Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast phase retrieval (PR).
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks from machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Evolving Neural Networks through a Reverse Encoding Tree [9.235550900581764]
This paper advances a method which incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for evolving scalable neural networks efficiently.
Using RET, two types of approaches -- NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments (a golden-section search sketch follows this list).
arXiv Detail & Related papers (2020-02-03T02:29:51Z)
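For the RFAD entry above, the underlying primitive is a random feature approximation of a kernel. The sketch below uses classic random Fourier features for an RBF kernel as a stand-in; RFAD itself approximates the NNGP kernel, whose random features come from random finite-width networks. Function names and parameters here are illustrative assumptions, not the paper's code.

```python
# Random Fourier features: phi(x) @ phi(y) approximates an RBF kernel.
import numpy as np

def random_fourier_features(X, n_features=2048, gamma=1.0, seed=0):
    """Map X (n, d) to phi(X) (n, n_features) so that
    phi(x) @ phi(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
phi = random_fourier_features(X)
K_approx = phi @ phi.T  # cheap, explicit-feature kernel approximation
K_exact = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.abs(K_approx - K_exact).max())  # shrinks as n_features grows
```

The general payoff, which RFAD exploits, is that explicit random features replace an n x n kernel computation with linear algebra in the feature dimension, enabling GPU-friendly speedups.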
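For the Reverse Encoding Tree entry above, GS-NEAT is named after golden-section search. As an illustration of that search primitive only, and not of the RET paper's algorithm, here is a generic golden-section minimizer for a unimodal function:

```python
# Generic golden-section search: shrink a bracket [a, b] around the
# minimum of a unimodal function by the inverse golden ratio each pass.
import math

def golden_section_search(f, lo, hi, tol=1e-6):
    """Return an approximate minimizer of unimodal f on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        # (f is re-evaluated each pass; kept simple for clarity)
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

# Example: recover the minimum of a shifted quadratic.
print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```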