Neuroevolution of a Recurrent Neural Network for Spatial and Working
Memory in a Simulated Robotic Environment
- URL: http://arxiv.org/abs/2102.12638v1
- Date: Thu, 25 Feb 2021 02:13:52 GMT
- Title: Neuroevolution of a Recurrent Neural Network for Spatial and Working
Memory in a Simulated Robotic Environment
- Authors: Xinyun Zou, Eric O. Scott, Alexander B. Johnson, Kexin Chen, Douglas
A. Nitz, Kenneth A. De Jong, Jeffrey L. Krichmar
- Abstract summary: We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
- Score: 57.91534223695695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animals ranging from rats to humans can demonstrate cognitive map
capabilities. We evolved weights in a biologically plausible recurrent neural
network (RNN) using an evolutionary algorithm to replicate the behavior and
neural activity observed in rats during a spatial and working memory task in a
triple T-maze. The rat was simulated in the Webots robot simulator and used
vision, distance, and accelerometer sensors to navigate a virtual maze. After
evolving weights from the sensory inputs to the RNN, within the RNN, and from
the RNN to the robot's motors, the Webots agent successfully navigated the
space, reaching all four reward arms with minimal repeats before time-out. Our
findings suggest that the RNN dynamics are key to performance and that
performance does not depend on any one sensory type, which implies that neurons
in the RNN exhibit mixed selectivity and conjunctive coding. Moreover, the RNN
activity resembles the spatial and trajectory-dependent coding observed in the
hippocampus. Collectively, the
evolved RNN exhibits navigation skills, spatial memory, and working memory. Our
method demonstrates how the dynamic activity in evolved RNNs can capture
interesting and complex cognitive behavior and may be used to create RNN
controllers for robotic applications.
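
As a rough illustration of the approach described in the abstract, evolving the input-to-RNN, recurrent, and RNN-to-motor weights with an evolutionary algorithm, here is a minimal sketch in NumPy. The network sizes, the sensor stream, the fitness function, and the (mu + lambda) strategy are all illustrative stand-ins chosen for this sketch; the paper evaluates fitness by maze performance in Webots, not by the toy objective used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper's actual network dimensions are not given here.
N_IN, N_HID, N_OUT = 8, 16, 2
GENOME_LEN = N_IN * N_HID + N_HID * N_HID + N_HID * N_OUT

# Fixed random sequence standing in for Webots sensor readings.
SENSOR_STREAM = rng.standard_normal((20, N_IN))

def unpack(genome):
    """Split a flat genome into input->RNN, recurrent, and RNN->motor matrices."""
    i = N_IN * N_HID
    j = i + N_HID * N_HID
    return (genome[:i].reshape(N_IN, N_HID),
            genome[i:j].reshape(N_HID, N_HID),
            genome[j:].reshape(N_HID, N_OUT))

def rnn_rollout(genome):
    """Run the RNN over the sensor stream and return the motor command sequence."""
    W_in, W_rec, W_out = unpack(genome)
    h = np.zeros(N_HID)
    motors = []
    for x in SENSOR_STREAM:
        h = np.tanh(x @ W_in + h @ W_rec)  # recurrent state carries working memory
        motors.append(h @ W_out)
    return np.array(motors)

def fitness(genome):
    """Stand-in fitness: the paper scores maze navigation in Webots; this toy
    version simply rewards motor outputs close to a zero target pattern."""
    return -float(np.mean(rnn_rollout(genome) ** 2))

def evolve(generations=30, mu=5, lam=20, sigma=0.1):
    """(mu + lambda) evolution strategy over the flattened weight vector."""
    pop = [0.1 * rng.standard_normal(GENOME_LEN) for _ in range(mu + lam)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:mu]  # elitism: best mu genomes survive unchanged
        children = [parents[rng.integers(mu)] + sigma * rng.standard_normal(GENOME_LEN)
                    for _ in range(lam)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because selection here acts only on the flat weight vector, swapping in the real Webots rollout would only require replacing `fitness` with a function that runs the simulator and returns a maze-performance score.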
Related papers
- Geometry of naturalistic object representations in recurrent neural network models of working memory [2.028720028008411]
  We show how naturalistic object information is maintained in working memory in neural networks.
  Our findings indicate that goal-driven RNNs employ chronological memory subspaces to track information over short time spans.
  arXiv Detail & Related papers (2024-11-04T23:57:46Z)
- Random-coupled Neural Network [17.53731608985241]
  The pulse-coupled neural network (PCNN) is a widely applied model for imitating the characteristics of the human brain in computer vision and neural network fields.
  In this study, a random-coupled neural network (RCNN) is proposed.
  It overcomes difficulties in PCNN's neuromorphic computing via a random inactivation process.
  arXiv Detail & Related papers (2024-03-26T09:13:06Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
  We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
  Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it to represent a continuous action space (i.e., the deterministic policy) through a fully connected layer.
  To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
  arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
  Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
  These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
  In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
  arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Toward stochastic neural computing [11.955322183964201]
  We propose a theory of neural computing in which streams of noisy inputs are transformed and processed through populations of spiking neurons.
  We demonstrate the application of our method on Intel's Loihi neuromorphic hardware.
  arXiv Detail & Related papers (2023-05-23T12:05:35Z)
- Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition [8.998797644039064]
  Spiking neural networks (SNNs) using leaky integrate-and-fire (LIF) neurons have been commonly used in automatic speech recognition (ASR) tasks.
  Here we introduce four types of neuronal dynamics to post-process the sequential patterns generated by the spiking transformer.
  We found that the DyTr-SNN handles the non-toy automatic speech recognition task well, achieving a lower phoneme error rate, lower computational cost, and higher robustness.
  arXiv Detail & Related papers (2023-02-02T16:20:27Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
  Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
  We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
  arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
  We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
  We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
  arXiv Detail & Related papers (2020-05-24T01:04:53Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
  We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
  We test the proposed method by training an RNN to simultaneously reproduce the internal dynamics and output signals of a physiologically inspired neural model.
  Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
  arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
  We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
  We conduct experiments on six benchmark data sets from computer vision, signal processing, and natural language processing.
  arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.