Online Memorization of Random Firing Sequences by a Recurrent Neural
Network
- URL: http://arxiv.org/abs/2001.02920v1
- Date: Thu, 9 Jan 2020 11:02:53 GMT
- Title: Online Memorization of Random Firing Sequences by a Recurrent Neural
Network
- Authors: Patrick Murer and Hans-Andrea Loeliger
- Abstract summary: Two modes of learning/memorization are considered: The first mode is strictly online, with a single pass through the data, while the second mode uses multiple passes through the data.
In both modes, the learning is strictly local (quasi-Hebbian): At any given time step, only the weights between the neurons firing (or supposed to be firing) at the previous time step and those firing (or supposed to be firing) at the present time step are modified.
- Score: 12.944868613449218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the capability of a recurrent neural network model to
memorize random dynamical firing patterns by a simple local learning rule. Two
modes of learning/memorization are considered: The first mode is strictly
online, with a single pass through the data, while the second mode uses
multiple passes through the data. In both modes, the learning is strictly local
(quasi-Hebbian): At any given time step, only the weights between the neurons
firing (or supposed to be firing) at the previous time step and those firing
(or supposed to be firing) at the present time step are modified. The main
result of the paper is an upper bound on the probability that the single-pass
memorization is not perfect. It follows that the memorization capacity in this
mode asymptotically scales like that of the classical Hopfield model (which, in
contrast, memorizes static patterns). However, multiple-rounds memorization is
shown to achieve a higher capacity (with a nonvanishing number of bits per
connection/synapse). These mathematical findings may be helpful for
understanding the functions of short-term memory and long-term memory in
neuroscience.
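To make the learning rule concrete, the following is a minimal sketch (not the paper's construction): it assumes N binary neurons, K active units per time step, clipped 0/1 weights, and a simple winner-take-all readout for recall. The values of N, K, T, the clipping, and the readout rule are illustrative assumptions; only the locality of the update (weights touched solely between neurons active at step t-1 and neurons active at step t, in a single pass) reflects the quasi-Hebbian rule described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of neurons (illustrative)
K = 10    # active neurons per time step (illustrative sparsity)
T = 50    # length of the random firing sequence

# Random firing sequence: sequence[t] is a binary vector with K active neurons.
sequence = np.zeros((T, N), dtype=np.int8)
for t in range(T):
    sequence[t, rng.choice(N, size=K, replace=False)] = 1

# Single-pass, strictly local (quasi-Hebbian) learning: only weights from
# neurons active at time t-1 to neurons active at time t are modified.
W = np.zeros((N, N), dtype=np.int8)
for t in range(1, T):
    pre = sequence[t - 1].astype(bool)
    post = sequence[t].astype(bool)
    W[np.ix_(post, pre)] = 1          # clipped (0/1) Hebbian update

# Recall: seed with the first pattern and activate the K most strongly
# driven neurons at each step (a simple winner-take-all readout).
def recall_step(x_prev):
    drive = W.astype(np.int32) @ x_prev.astype(np.int32)
    winners = np.argsort(drive)[-K:]
    x_next = np.zeros(N, dtype=np.int8)
    x_next[winners] = 1
    return x_next

x = sequence[0].copy()
errors = 0
for t in range(1, T):
    x = recall_step(x)
    errors += int(not np.array_equal(x, sequence[t]))
print(f"imperfectly recalled steps: {errors} / {T - 1}")
```

With these toy parameters the sequence is typically recovered; making T large relative to N should eventually produce recall errors, in line with the Hopfield-like capacity scaling discussed in the abstract.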
Related papers
- Order from chaos: Interplay of development and learning in recurrent
networks of structured neurons [1.6880888629604525]
We introduce a fully local, always-on plasticity rule to learn complex sequences in a recurrent network comprised of two populations.
Our model is resource-efficient, enabling the learning of complex sequences using only a small number of neurons.
We demonstrate these features in a mock-up of birdsong learning, in which our networks first learn a long, non-Markovian sequence.
arXiv Detail & Related papers (2024-02-26T17:30:34Z)
- Learning time-scales in two-layers neural networks [11.878594839685471]
We study the gradient flow dynamics of a wide two-layer neural network in high-dimension.
Based on new rigorous results, we propose a scenario for the learning dynamics in this setting.
arXiv Detail & Related papers (2023-02-28T19:52:26Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- An associative memory model with very high memory rate: Image storage by sequential addition learning [0.0]
This system realizes bidirectional learning between one cue neuron in the cue ball and the neurons in the recall net.
It can memorize many patterns and recall these patterns or those that are similar at any time.
arXiv Detail & Related papers (2022-10-08T02:56:23Z)
- A Meta-Learned Neuron model for Continual Learning [0.0]
Continual learning is the ability to acquire new knowledge without forgetting previously learned knowledge.
In this work, we replace the standard neuron by a meta-learned neuron model.
Our approach can memorize dataset-length sequences of training samples, and its learning capabilities generalize to any domain.
arXiv Detail & Related papers (2021-11-03T23:39:14Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no further stimuli arrive.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.