A multi-agent evolutionary robotics framework to train spiking neural
networks
- URL: http://arxiv.org/abs/2012.03485v1
- Date: Mon, 7 Dec 2020 07:26:52 GMT
- Title: A multi-agent evolutionary robotics framework to train spiking neural
networks
- Authors: Souvik Das, Anirudh Shankar, Vaneet Aggarwal
- Abstract summary: A novel multi-agent evolutionary robotics (ER) based framework is demonstrated for training Spiking Neural Networks (SNNs)
The weights of a population of SNNs along with morphological parameters of bots they control are treated as phenotypes.
Rules of the framework select certain bots and their SNNs for reproduction and others for elimination based on their efficacy in capturing food in a competitive environment.
- Score: 35.90048588096738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel multi-agent evolutionary robotics (ER) based framework, inspired by
competitive evolutionary environments in nature, is demonstrated for training
Spiking Neural Networks (SNNs). The weights of a population of SNNs along with
morphological parameters of bots they control in the ER environment are treated
as phenotypes. Rules of the framework select certain bots and their SNNs for
reproduction and others for elimination based on their efficacy in capturing
food in a competitive environment. While the bots and their SNNs are given no
explicit reward to survive or reproduce via any loss function, these drives
emerge implicitly as they evolve to hunt food and survive within these rules.
Their efficiency in capturing food as a function of generations exhibits the
evolutionary signature of punctuated equilibria. Two evolutionary inheritance
algorithms on the phenotypes, Mutation and Crossover with Mutation, are
demonstrated. Performances of these algorithms are compared using ensembles of
100 experiments for each algorithm. We find that Crossover with Mutation
promotes 40% faster learning in the SNN than mere Mutation with a statistically
significant margin.
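The selection-and-inheritance rules described in the abstract can be sketched as a minimal evolutionary loop. All function names, the operator details, and the 50% elimination rule below are illustrative assumptions, not the paper's implementation; genomes are flattened weight vectors and the fitness function stands in for food-capture efficacy:

```python
import random

def mutate(weights, rate=0.1, scale=0.05):
    """Perturb each weight with probability `rate` (illustrative operator)."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

def crossover(parent_a, parent_b):
    """Uniform crossover: each offspring weight comes from one parent."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def evolve(population, fitness, generations=100, use_crossover=True):
    """Select high-fitness genomes for reproduction, eliminate the rest."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:len(ranked) // 2]          # eliminate bottom half
        offspring = []
        while len(survivors) + len(offspring) < len(population):
            if use_crossover:                          # Crossover with Mutation
                a, b = random.sample(survivors, 2)
                child = mutate(crossover(a, b))
            else:                                      # Mutation only
                child = mutate(random.choice(survivors))
            offspring.append(child)
        population = survivors + offspring
    return population
```

Running ensembles of this loop with `use_crossover=True` versus `False` mirrors, in spirit, the paper's comparison of the Crossover with Mutation and Mutation inheritance algorithms.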
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
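The firing-rate decoding scheme the summary attributes to prior spike-based RL methods can be sketched as follows. The shapes, the `tanh` squashing, and all names are illustrative assumptions; this is the conventional readout the paper argues against, not its fully spiking actor network:

```python
import numpy as np

def firing_rate(spike_trains):
    """Firing rate = mean spike count per timestep over the window."""
    return spike_trains.mean(axis=-1)

def decode_action(spike_trains, weights, bias):
    """Map output-neuron firing rates to a continuous action vector through
    a fully-connected (non-spiking, floating-point) readout layer."""
    rates = firing_rate(spike_trains)          # shape: (n_neurons,)
    return np.tanh(weights @ rates + bias)     # bounded deterministic action

# Example: 8 output neurons simulated for 50 timesteps, 2-D action space
spikes = (np.random.rand(8, 50) < 0.2).astype(float)   # random spike trains
W, b = np.random.randn(2, 8) * 0.1, np.zeros(2)
action = decode_action(spikes, W, b)                   # shape: (2,)
```

The floating-point matrix product in `decode_action` is exactly the kind of operation a fully spiking actor network would eliminate.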
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary
Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a
New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
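One generic way for an offspring network to "imitate the superior gene expression of their parents" is to interpolate the parents' parameters. The blend operator and dict-of-arrays layout below are illustrative assumptions, not IE-GAN's exact crossover:

```python
import numpy as np

def blend_crossover(parent_a, parent_b, alpha=0.5):
    """Offspring parameters are an element-wise interpolation of the two
    parents' parameters, so the child inherits traits of both."""
    return {name: alpha * parent_a[name] + (1 - alpha) * parent_b[name]
            for name in parent_a}

# Toy generators represented as dicts of weight arrays (assumed layout)
g1 = {"w": np.ones((2, 2)), "b": np.zeros(2)}
g2 = {"w": -np.ones((2, 2)), "b": np.ones(2)}
child = blend_crossover(g1, g2, alpha=0.5)
# child["w"] is the element-wise midpoint of the parents' weights
```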
arXiv Detail & Related papers (2021-07-25T13:55:07Z) - Neuroevolution of a Recurrent Neural Network for Spatial and Working
Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z) - Neuroevolution in Deep Learning: The Role of Neutrality [0.0]
Neuroevolution methods have been applied to the architectural configuration and training of artificial deep neural networks (DNNs).
Evolutionary Algorithms (EAs) are gaining momentum as a computationally feasible method for the automated optimisation of DNNs.
This work discusses how neutrality, given certain conditions, can help to speed up the training/design of deep neural networks.
arXiv Detail & Related papers (2021-02-16T22:29:59Z) - SPA: Stochastic Probability Adjustment for System Balance of
Unsupervised SNNs [2.729898906885749]
Spiking neural networks (SNNs) receive widespread attention because of their low-power hardware characteristic and brain-like signal response mechanism.
We build an information theory-inspired system called Stochastic Probability Adjustment (SPA) to reduce this gap.
The improvements in classification accuracy have reached 1.99% and 6.29% on the MNIST and EMNIST datasets respectively.
arXiv Detail & Related papers (2020-10-19T17:28:38Z) - Using Neural Networks and Diversifying Differential Evolution for
Dynamic Optimisation [11.228244128564512]
We investigate whether neural networks are competitive for this task and whether integrating them can improve the results.
The results show that the significance of the improvement from integrating the neural network and diversity mechanisms depends on the type and frequency of changes.
arXiv Detail & Related papers (2020-08-10T10:07:43Z) - Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN
Training [0.0]
Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained against each other.
GANs have allowed for the generation of impressive imitations of real-life films, images and texts, whose fakeness is barely noticeable to humans.
We propose a more robust algorithm for GANs training. We empirically show the increased stability and a better ability to generate high-quality images while using less computational power.
arXiv Detail & Related papers (2020-05-22T09:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.