TensorNEAT: A GPU-accelerated Library for NeuroEvolution of Augmenting Topologies
- URL: http://arxiv.org/abs/2504.08339v1
- Date: Fri, 11 Apr 2025 08:10:09 GMT
- Title: TensorNEAT: A GPU-accelerated Library for NeuroEvolution of Augmenting Topologies
- Authors: Lishuang Wang, Mengfei Zhao, Enyu Liu, Kebin Sun, Ran Cheng
- Abstract summary: TensorNEAT is a GPU-accelerated library that applies tensorization to the NEAT algorithm. TensorNEAT is built upon JAX, leveraging automatic function vectorization and hardware acceleration. The library supports variants such as CPPN and HyperNEAT, and integrates with benchmark environments like Gym, Brax, and gymnax.
- Score: 6.784939343811732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness is derived from initiating with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains an impediment, limiting its scalability potential. To address these limitations, this paper introduces TensorNEAT, a GPU-accelerated library that applies tensorization to the NEAT algorithm. Tensorization reformulates NEAT's diverse network topologies and operations into uniformly shaped tensors, enabling efficient parallel execution across entire populations. TensorNEAT is built upon JAX, leveraging automatic function vectorization and hardware acceleration to significantly enhance computational efficiency. In addition to NEAT, the library supports variants such as CPPN and HyperNEAT, and integrates with benchmark environments like Gym, Brax, and gymnax. Experimental evaluations across various robotic control environments in Brax demonstrate that TensorNEAT delivers up to 500x speedups compared to existing implementations, such as NEAT-Python. The source code for TensorNEAT is publicly available at: https://github.com/EMI-Group/tensorneat.
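The tensorization step the abstract describes can be pictured concretely: each genome's variable topology is padded into fixed-shape tensors, so that `jax.vmap` can evaluate the entire population in one batched, GPU-friendly call. The sketch below is a minimal illustration of that idea under assumed encodings, not TensorNEAT's actual API; the names (`forward`, `MAX_NODES`) and the fixed-step propagation loop are hypothetical simplifications.

```python
# Minimal tensorization sketch (assumed encoding, not TensorNEAT's API):
# every network is padded to the same MAX_NODES budget, so one weight
# matrix per genome gives the whole population a uniform tensor shape.
import jax
import jax.numpy as jnp

MAX_NODES = 8   # fixed node budget per network (illustrative)
POP_SIZE = 4
N_INPUTS = 2

def forward(conns, biases, inputs):
    # conns: (MAX_NODES, MAX_NODES) weights, with 0.0 in unused slots;
    # biases: (MAX_NODES,). Zero padding keeps absent connections inert.
    acts = jnp.zeros(MAX_NODES).at[:N_INPUTS].set(inputs)
    # A few synchronous propagation steps stand in for NEAT's
    # topology-aware evaluation order.
    for _ in range(3):
        acts = jnp.tanh(conns @ acts + biases)
    return acts[-1]  # read the last slot as the network's output

key = jax.random.PRNGKey(0)
conns = jax.random.normal(key, (POP_SIZE, MAX_NODES, MAX_NODES))
biases = jnp.zeros((POP_SIZE, MAX_NODES))
obs = jnp.ones(N_INPUTS)

# vmap batches over the population axis; jit fuses the whole evaluation.
pop_forward = jax.jit(jax.vmap(forward, in_axes=(0, 0, None)))
print(pop_forward(conns, biases, obs).shape)  # (4,)
```

Because the vmapped evaluation compiles to a single fused computation instead of a Python loop over genomes, this batching is the mechanism behind the speedups over loop-based implementations such as NEAT-Python that the abstract reports.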
Related papers
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration [6.784939343811732]
The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution.
This paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and operations into uniformly shaped tensors.
The resulting NEAT library supports various benchmark environments, including Gym, Brax, and gymnax.
arXiv Detail & Related papers (2024-04-02T10:20:12Z)
- GPU-accelerated Evolutionary Multiobjective Optimization Using Tensorized RVEA [13.319536515278191]
We introduce the Tensorized Reference Vector Guided Evolutionary Algorithm (TensorRVEA) for harnessing the advancements of GPU acceleration.
In numerical benchmark tests involving large-scale populations and problem dimensions, TensorRVEA consistently demonstrates high computational performance, achieving speedups of over 1000×.
arXiv Detail & Related papers (2024-04-01T15:04:24Z)
- Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks [0.0]
Spiking neural networks can be simulated energy efficiently on neuromorphic hardware platforms.
We propose a novel neuromorphic algorithm, the Spiking Forward Direct Feedback Alignment (SFDFA) algorithm.
arXiv Detail & Related papers (2024-02-06T09:07:12Z)
- Neural Functional Transformers [99.98750156515437]
This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers called neural functional Transformers (NFTs).
NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which have exhibited remarkable success across multiple domains.
We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant representations from the weights of implicit neural representations (INRs).
arXiv Detail & Related papers (2023-05-22T23:38:27Z)
- Tensor Slicing and Optimization for Multicore NPUs [2.670309629218727]
This paper proposes a compiler optimization pass for Multicore NPUs, called Tensor Slicing Optimization (TSO).
Results show that TSO is capable of identifying the best tensor slicing that minimizes execution time for a set of CNN models.
arXiv Detail & Related papers (2023-04-06T12:03:03Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- NeuroEvo: A Cloud-based Platform for Automated Design and Training of Neural Networks using Evolutionary and Particle Swarm Algorithms [0.0]
This paper introduces a new web platform, NeuroEvo, that allows users to interactively design and train neural network classifiers.
The classification problem and training data are provided by the user and, upon completion of the training process, the best classifier is made available to download and implement in Python, Java, and JavaScript.
arXiv Detail & Related papers (2022-10-01T14:10:43Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training an N:M fine-grained structured sparse network from scratch.
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)