Rapid Learning of Spatial Representations for Goal-Directed Navigation
Based on a Novel Model of Hippocampal Place Fields
- URL: http://arxiv.org/abs/2206.02249v2
- Date: Tue, 7 Jun 2022 13:19:40 GMT
- Title: Rapid Learning of Spatial Representations for Goal-Directed Navigation
Based on a Novel Model of Hippocampal Place Fields
- Authors: Adedapo Alabi, Dieter Vanderelst and Ali Minai
- Abstract summary: We develop a self-organized model incorporating place cells and replay.
We demonstrate its utility for rapid one-shot learning in non-trivial environments with obstacles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discovery of place cells and other spatially modulated neurons in the
hippocampal complex of rodents has been crucial to elucidating the neural basis
of spatial cognition. More recently, the replay of neural sequences encoding
previously experienced trajectories has been observed during consummatory
behavior, with potential implications for rapid memory consolidation and
behavioral planning. Several promising models for robotic navigation and
reinforcement learning have been proposed based on these and previous findings.
Most of these models, however, use carefully engineered neural networks and are
tested in simple environments. In this paper, we develop a self-organized model
incorporating place cells and replay, and demonstrate its utility for rapid
one-shot learning in non-trivial environments with obstacles.
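The listing does not detail the paper's self-organized model, but the general idea behind a place cell can be illustrated with a common textbook abstraction: a firing rate given by a Gaussian tuning curve over the animal's position. The function name and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def place_field_rate(pos, center, sigma=0.1, r_max=20.0):
    """Firing rate of a hypothetical place cell: a Gaussian tuning
    curve peaking at the cell's preferred location `center`."""
    d2 = np.sum((np.asarray(pos) - np.asarray(center)) ** 2)
    return r_max * np.exp(-d2 / (2 * sigma ** 2))

# The rate is maximal at the field center and falls off with distance.
at_center = place_field_rate((0.5, 0.5), (0.5, 0.5))   # r_max
nearby = place_field_rate((0.55, 0.5), (0.5, 0.5))
far = place_field_rate((0.9, 0.9), (0.5, 0.5))
```

A population of such cells with scattered centers yields a distributed code for location, which downstream machinery (e.g. replay-based learning) can build on.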
Related papers
- Autaptic Synaptic Circuit Enhances Spatio-temporal Predictive Learning of Spiking Neural Networks [23.613277062707844]
Spiking Neural Networks (SNNs) emulate the leaky integrate-and-fire mechanism found in biological neurons.
Existing SNNs predominantly rely on the Leaky Integrate-and-Fire (LIF) model.
This paper proposes a novel SpatioTemporal Circuit (STC) model.
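For context, the LIF dynamics these SNNs build on can be sketched with a simple Euler discretization: the membrane potential leaks toward rest, integrates input, and fires and resets when it crosses threshold. Parameter values below are illustrative assumptions, not from the paper:

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron."""
    v = v + (dt / tau) * (v_rest - v + i_in)  # leak toward rest, integrate input
    spike = v >= v_thresh
    if spike:
        v = v_rest  # reset after a spike
    return v, spike

# Drive the neuron with a constant suprathreshold input and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.5)
    spikes += int(s)
```

With a constant input of 1.5 the potential repeatedly climbs past the threshold of 1.0, producing a regular spike train.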
arXiv Detail & Related papers (2024-06-01T11:17:27Z)
- Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP [2.179313476241343]
We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware.
arXiv Detail & Related papers (2023-06-07T13:08:46Z)
- From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning [3.6159844753873087]
We study structured neural activity of a virtual robot performing legged locomotion.
We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling, i.e., opposing neural trajectories in neighboring regions of neural state space.
arXiv Detail & Related papers (2023-05-18T16:52:27Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess calcium fluorescence signals and to train and evaluate models on them, along with a procedure for interpreting the resulting deep learning models.
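The cycle-consistency constraint that CycleGAN variants rely on can be shown in miniature: mapping a sample forward through one generator and back through the other should recover it. The toy linear "generators" below stand in for learned networks and are purely illustrative:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle-consistency: F(G(x)) should recover x, and
    G(F(y)) should recover y."""
    return (np.mean(np.abs(F(G(x)) - x)) +
            np.mean(np.abs(G(F(y)) - y)))

# Toy linear 'generators' that are exact inverses, so the loss is zero.
G = lambda a: 2.0 * a + 1.0      # pre -> post mapping
F = lambda b: (b - 1.0) / 2.0    # post -> pre mapping
x = np.array([0.0, 1.0, 2.0])    # samples from the 'pre-learning' domain
y = np.array([1.0, 3.0, 5.0])    # samples from the 'post-learning' domain
loss = cycle_consistency_loss(G, F, x, y)  # -> 0.0
```

In an actual CycleGAN this term is minimized jointly with adversarial losses, pushing the two learned mappings toward being mutual inverses.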
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
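The predict-and-correct rule described above can be sketched with a single predicting unit: it predicts a neighbor's activity, compares the prediction with what actually happened, and nudges its parameter by the error. This delta-rule toy is an illustration of the general principle, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.0          # weight the predicting neuron uses
lr = 0.1         # learning rate
true_w = 0.8     # hidden relationship between the two neurons

for _ in range(500):
    x = rng.normal()          # presynaptic neuron's activity
    target = true_w * x       # neighboring neuron's actual activity
    pred = w * x              # local prediction
    err = target - pred       # prediction error drives learning
    w += lr * err * x         # purely local delta-rule update

# w converges toward the true relationship (about 0.8)
```

The key property mirrored here is locality: the update uses only quantities available at the connection itself, with no global backpropagated error signal.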
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces [0.0]
This work presents an architecture that generates curiosity-driven goal-directed exploration behaviours for an image sensor of a microfarming robot.
The architecture combines deep neural networks for offline unsupervised learning of low-dimensional features from images with online-trained shallow neural networks representing the inverse and forward kinematics of the system.
The artificial curiosity system assigns interest values to a set of pre-defined goals, and drives the exploration towards those that are expected to maximise the learning progress.
arXiv Detail & Related papers (2020-01-07T11:39:20Z)
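The interest-driven goal selection described above reduces to a simple heuristic: pick the predefined goal whose expected learning progress is highest. The goal names and values below are made up for illustration:

```python
def select_goal(expected_progress):
    """Pick the goal with the highest expected learning progress,
    a common intrinsic-motivation heuristic (names are illustrative)."""
    return max(expected_progress, key=expected_progress.get)

# Hypothetical interest values assigned by the curiosity system.
goals = {"leaf_A": 0.05, "leaf_B": 0.30, "soil": 0.12}
chosen = select_goal(goals)  # -> "leaf_B"
```

In practice such systems re-estimate the progress values as exploration proceeds, so the preferred goal shifts once a region stops yielding new learning.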
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.