Rapid Learning of Spatial Representations for Goal-Directed Navigation
Based on a Novel Model of Hippocampal Place Fields
- URL: http://arxiv.org/abs/2206.02249v2
- Date: Tue, 7 Jun 2022 13:19:40 GMT
- Title: Rapid Learning of Spatial Representations for Goal-Directed Navigation
Based on a Novel Model of Hippocampal Place Fields
- Authors: Adedapo Alabi, Dieter Vanderelst and Ali Minai
- Abstract summary: We develop a self-organized model incorporating place cells and replay.
We demonstrate its utility for rapid one-shot learning in non-trivial environments with obstacles.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discovery of place cells and other spatially modulated neurons in the
hippocampal complex of rodents has been crucial to elucidating the neural basis
of spatial cognition. More recently, the replay of neural sequences encoding
previously experienced trajectories has been observed during consummatory
behavior, potentially with implications for quick memory consolidation and
behavioral planning. Several promising models for robotic navigation and
reinforcement learning have been proposed based on these and previous findings.
Most of these models, however, use carefully engineered neural networks and are
tested in simple environments. In this paper, we develop a self-organized model
incorporating place cells and replay, and demonstrate its utility for rapid
one-shot learning in non-trivial environments with obstacles.
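The place-cell representation at the core of the abstract's model is commonly formalized as a set of Gaussian tuning curves over position. A minimal sketch of that standard formulation (the field centers, width, and 2-D arena here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def place_cell_activity(pos, centers, sigma=0.2):
    """Gaussian place-field firing rates for a 2-D position.

    pos     : (2,) current position of the agent
    centers : (N, 2) place-field centers (preferred locations)
    sigma   : field width; the value here is illustrative
    """
    # Squared distance from the agent to each field center
    d2 = np.sum((centers - pos) ** 2, axis=1)
    # Each cell fires maximally when the agent sits at its center
    return np.exp(-d2 / (2 * sigma ** 2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
rates = place_cell_activity(np.array([0.0, 0.0]), centers)
```

In such models, a population of these cells tiles the environment, and a trajectory through space produces the ordered activity sequence that replay mechanisms can later reactivate.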
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder in which the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors such as inertia, damping, a learned potential function, and forces to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - POCO: Scalable Neural Forecasting through Population Conditioning [4.781680085499199]
POCO is a unified neural forecasting model that captures both neuron-specific and brain-wide dynamics. Trained across five calcium imaging datasets spanning zebrafish, mice, and C. elegans, POCO achieves state-of-the-art accuracy at cellular resolution in spontaneous behaviors.
arXiv Detail & Related papers (2025-06-17T20:15:04Z) - NOBLE -- Neural Operator with Biologically-informed Latent Embeddings to Capture Experimental Variability in Biological Neuron Models [68.89389652724378]
NOBLE is a neural operator framework that learns a mapping from a continuous frequency-modulated embedding of interpretable neuron features to the somatic voltage response induced by current injection. It predicts distributions of neural dynamics that account for intrinsic experimental variability. NOBLE is the first scaled-up deep learning framework validated on real experimental data.
arXiv Detail & Related papers (2025-06-05T01:01:18Z) - Single-neuron deep generative model uncovers underlying physics of neuronal activity in Ca imaging data [0.0]
We propose a novel framework for single-neuron representation learning using autoregressive variational autoencoders (AVAEs).
Our approach embeds individual neurons' signals into a reduced-dimensional space without the need for spike inference algorithms.
The AVAE excels over traditional linear methods by generating more informative and discriminative latent representations.
arXiv Detail & Related papers (2025-01-24T16:33:52Z) - Autaptic Synaptic Circuit Enhances Spatio-temporal Predictive Learning of Spiking Neural Networks [23.613277062707844]
Spiking Neural Networks (SNNs) emulate the leaky integrate-and-fire mechanism found in biological neurons.
Existing SNNs predominantly rely on the Leaky Integrate-and-Fire (LIF) model.
This paper proposes a novel SpatioTemporal Circuit (STC) model.
arXiv Detail & Related papers (2024-06-01T11:17:27Z) - Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP [2.179313476241343]
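The LIF model referenced in the entry above has a standard discrete-time form: the membrane potential leaks toward rest, integrates input current, and resets after crossing a threshold. A minimal sketch (the time constant, threshold, and step size are illustrative, not taken from the paper):

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v     : membrane potential
    i_in  : input current
    tau   : membrane time constant governing the leak
    Returns the updated potential and whether the neuron spiked.
    """
    v = v + (dt / tau) * (-v + i_in)  # leak toward rest, integrate input
    spiked = v >= v_th
    if spiked:
        v = v_reset                   # fire, then reset
    return v, spiked

v, spiked = lif_step(0.0, 2.0)        # first step from rest under constant drive
```

Under constant suprathreshold drive, repeated calls charge the membrane until it crosses `v_th` and emits a spike, which is the sparse, event-driven behavior that neuromorphic hardware exploits.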
We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware.
arXiv Detail & Related papers (2023-06-07T13:08:46Z) - From Data-Fitting to Discovery: Interpreting the Neural Dynamics of
Motor Control through Reinforcement Learning [3.6159844753873087]
We study structured neural activity of a virtual robot performing legged locomotion.
We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling, that is, opposing neural trajectories in neighboring regions of neural state space.
arXiv Detail & Related papers (2023-05-18T16:52:27Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Intrinsic Motivation and Episodic Memories for Robot Exploration of
High-Dimensional Sensory Spaces [0.0]
This work presents an architecture that generates curiosity-driven goal-directed exploration behaviours for an image sensor of a microfarming robot.
The architecture combines deep neural networks for offline unsupervised learning of low-dimensional features from images with online learning of shallow neural networks representing the inverse and forward kinematics of the system.
The artificial curiosity system assigns interest values to a set of pre-defined goals, and drives the exploration towards those that are expected to maximise the learning progress.
arXiv Detail & Related papers (2020-01-07T11:39:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.