Persistent learning signals and working memory without continuous
attractors
- URL: http://arxiv.org/abs/2308.12585v1
- Date: Thu, 24 Aug 2023 06:12:41 GMT
- Title: Persistent learning signals and working memory without continuous
attractors
- Authors: Il Memming Park, Ábel Ságodi, and Piotr Aleksander Sokół
- Abstract summary: We show that quasi-periodic attractors can support learning arbitrarily long temporal relationships.
Our theory has broad implications for the design of artificial learning systems.
- Score: 6.135577623169029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural dynamical systems with stable attractor structures, such as point
attractors and continuous attractors, are hypothesized to underlie meaningful
temporal behavior that requires working memory. However, working memory may not
support useful learning signals necessary to adapt to changes in the temporal
structure of the environment. We show that in addition to the continuous
attractors that are widely implicated, periodic and quasi-periodic attractors
can also support learning arbitrarily long temporal relationships. Unlike the
continuous attractors that suffer from the fine-tuning problem, the less
explored quasi-periodic attractors are uniquely qualified for learning to
produce temporally structured behavior. Our theory has broad implications for
the design of artificial learning systems and makes predictions about
observable signatures of biological neural dynamics that can support temporal
dependence learning and working memory. Based on our theory, we developed a new
initialization scheme for artificial recurrent neural networks that outperforms
standard methods for tasks that require learning temporal dynamics. Moreover,
we propose a robust recurrent memory mechanism for integrating and maintaining
head direction without a ring attractor.
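For intuition only, here is a minimal sketch of how quasi-periodic dynamics can be built into a recurrent network at initialization: the recurrent weights are set to a block-diagonal rotation matrix whose 2x2 blocks rotate at mutually incommensurate angles, giving a norm-preserving, quasi-periodic linear recurrence. This is a generic construction, not the initialization scheme proposed in the paper; the function name and parameters below are hypothetical.
```python
import numpy as np

def quasi_periodic_init(hidden_size, angles=None, seed=0):
    """Build a block-diagonal rotation (orthogonal) recurrent weight matrix.

    Each 2x2 block rotates its plane by a fixed angle; mutually
    incommensurate angles give quasi-periodic, norm-preserving linear
    dynamics, so the hidden state neither decays nor explodes.
    Illustrative sketch only; not the initialization proposed in the paper.
    """
    assert hidden_size % 2 == 0, "use an even hidden size for 2x2 rotation blocks"
    rng = np.random.default_rng(seed)
    n_blocks = hidden_size // 2
    if angles is None:
        # Randomly drawn angles are incommensurate with probability one.
        angles = rng.uniform(0.0, 2.0 * np.pi, size=n_blocks)
    W = np.zeros((hidden_size, hidden_size))
    for k, theta in enumerate(angles):
        c, s = np.cos(theta), np.sin(theta)
        W[2 * k:2 * k + 2, 2 * k:2 * k + 2] = [[c, -s], [s, c]]
    return W

# The recurrence h_{t+1} = W h_t preserves the norm of the hidden state,
# so information (and gradient magnitude through the linear part) persists
# over arbitrarily long horizons.
W = quasi_periodic_init(8)
h = np.ones(8) / np.sqrt(8)
for _ in range(10_000):
    h = W @ h
print(np.linalg.norm(h))  # stays close to 1.0 after many steps
```
Because the recurrence is orthogonal, the hidden state neither vanishes nor explodes, which echoes the abstract's point that periodic and quasi-periodic structure can carry learning signals over long horizons.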
Related papers
- LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning [12.599235808369112]
Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts.
Most existing approaches for neuro-symbolic artificial intelligence are applied only to static scenarios.
We introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems.
arXiv Detail & Related papers (2025-07-23T13:04:13Z)
- State Space Models Naturally Produce Traveling Waves, Time Cells, and Scale to Abstract Cognitive Functions [7.097247619177705]
We propose a framework based on State-Space Models (SSMs), an emerging class of deep learning architectures.
We demonstrate that the model spontaneously develops neural representations that strikingly mimic biological 'time cells'.
Our findings position SSMs as a compelling framework that connects single-neuron dynamics to cognitive phenomena.
arXiv Detail & Related papers (2025-07-18T03:53:16Z)
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation.
Our approach incorporates physical priors, such as inertia, damping, a learned potential function, and forces, to represent both autonomous and non-autonomous processes in neural systems.
Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
(A generic simulation of the underdamped Langevin equation is sketched after this list.)
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Deep reinforcement learning with time-scale invariant memory [1.338174941551702]
We integrate a computational neuroscience model of scale invariant memory into deep reinforcement learning (RL) agents.
We show that such agents can learn robustly across a wide range of temporal scales.
This result illustrates that incorporating computational principles from neuroscience and cognitive science into deep neural networks can enhance adaptability to complex temporal dynamics.
arXiv Detail & Related papers (2024-12-19T07:20:03Z)
- The Empirical Impact of Forgetting and Transfer in Continual Visual Odometry [4.704582238028159]
We investigate the impact of catastrophic forgetting and the effectiveness of knowledge transfer in neural networks trained continuously in an embodied setting.
We observe initial satisfactory performance with high transferability between environments, followed by a specialization phase.
These findings emphasize the open challenges of balancing adaptation and memory retention in lifelong robotics.
arXiv Detail & Related papers (2024-06-03T21:32:50Z)
- Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
arXiv Detail & Related papers (2023-08-29T02:43:58Z)
- On the Dynamics of Learning Time-Aware Behavior with Recurrent Neural Networks [2.294014185517203]
We introduce a family of supervised learning tasks dependent on hidden temporal variables.
We train RNNs to emulate temporal flipflops that emphasize the need for time-awareness over long-term memory.
We show that these RNNs learn to switch between periodic orbits that encode time modulo the period of the transition rules.
arXiv Detail & Related papers (2023-06-12T14:01:30Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive of final performance of the trained system and their learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- Learning reversible symplectic dynamics [0.0]
We propose a new neural network architecture for learning time-reversible dynamical systems from data.
We focus on an adaptation to symplectic systems, because of their importance in physics-informed learning.
arXiv Detail & Related papers (2022-04-26T14:07:40Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Slow manifolds in recurrent networks encode working memory efficiently and robustly [0.0]
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time.
We use a top-down modeling approach to examine network-level mechanisms of working memory.
arXiv Detail & Related papers (2021-01-08T18:47:02Z)
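As referenced in the Langevin Flows entry above, the underdamped Langevin equation reads dx = v dt, m dv = -grad U(x) dt - gamma v dt + sqrt(2 gamma kT) dW. The sketch below is a generic Euler-Maruyama simulation of that equation for intuition only; it is not LangevinFlow's model, which embeds these dynamics in a sequential VAE with a learned potential. All names and parameter values here are illustrative.
```python
import numpy as np

def underdamped_langevin(grad_U, x0, v0, n_steps=5000, dt=1e-2,
                         mass=1.0, gamma=0.5, kT=0.1, seed=0):
    """Euler-Maruyama integration of underdamped Langevin dynamics:
        dx = v dt
        m dv = -grad_U(x) dt - gamma * v dt + sqrt(2 * gamma * kT) dW
    Generic illustration; the learned-potential, VAE-embedded version used
    by LangevinFlow is not shown here.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    noise_scale = np.sqrt(2.0 * gamma * kT * dt) / mass
    traj = [x.copy()]
    for _ in range(n_steps):
        v += dt * (-grad_U(x) - gamma * v) / mass   # deterministic force and damping
        v += noise_scale * rng.standard_normal(x.shape)  # thermal noise
        x += dt * v
        traj.append(x.copy())
    return np.array(traj)

# Example with a quadratic potential U(x) = 0.5 * |x|^2, so grad_U(x) = x.
traj = underdamped_langevin(grad_U=lambda x: x, x0=[1.0, 0.0], v0=[0.0, 0.0])
print(traj.shape)  # (5001, 2)
```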
This list is automatically generated from the titles and abstracts of the papers in this site.