An associative memory model with very high memory rate: Image storage by sequential addition learning
- URL: http://arxiv.org/abs/2210.03893v1
- Date: Sat, 8 Oct 2022 02:56:23 GMT
- Title: An associative memory model with very high memory rate: Image storage by sequential addition learning
- Authors: Hiroshi Inazawa
- Abstract summary: This system realizes bidirectional learning between one cue neuron in the cue ball and the neurons in the recall net.
It can memorize many patterns and recall these patterns or those that are similar at any time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we present a neural network system for memory and
recall that consists of one neuron group (the "cue ball") and a one-layer
neural net (the "recall net"). This system realizes bidirectional
memorization learning between one cue neuron in the cue ball and the neurons in
the recall net. It can memorize many patterns and recall these patterns, or
similar ones, at any time. Furthermore, the patterns are recalled at almost the
same time. This recall behavior resembles the way humans recall a variety of
similar things almost simultaneously when one thing is recalled. Additional
learning can also occur in the system without affecting the patterns memorized
in advance. Moreover, the memory rate (the number of memorized patterns divided
by the total number of neurons) is close to 100%; this system's rate is 0.987.
Finally, constraints on the pattern data become an important aspect of this
system.
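To make the cue-ball / recall-net structure and the memory-rate definition concrete, here is a minimal sketch in Python (NumPy). It is an illustrative assumption, not the paper's implementation: the class name CueRecallMemory, the single-weight-matrix encoding of the bidirectional binding, and the top-k readout are choices made here for clarity.

```python
# Minimal illustrative sketch (an assumption, not the paper's code) of a
# cue-ball / recall-net associative memory with sequential addition learning.
import numpy as np

class CueRecallMemory:
    def __init__(self, n_cue, n_recall):
        self.n_cue = n_cue          # cue neurons in the "cue ball" (one per stored pattern)
        self.n_recall = n_recall    # neurons in the one-layer "recall net"
        self.W = np.zeros((n_cue, n_recall))  # bidirectional cue <-> recall weights
        self.n_stored = 0

    def memorize(self, pattern):
        """Sequential addition learning: bind the next free cue neuron to `pattern`.
        Earlier memories are left untouched, so later learning does not disturb them."""
        if self.n_stored >= self.n_cue:
            raise RuntimeError("cue ball is full")
        self.W[self.n_stored] = pattern
        self.n_stored += 1

    def recall(self, probe, top_k=3):
        """Drive the cue neurons from recall-net activity, then read back the patterns
        of the best-matching cue neurons (similar patterns are recalled together)."""
        scores = self.W[:self.n_stored] @ probe
        best = np.argsort(scores)[::-1][:top_k]
        return self.W[best]

    def memory_rate(self):
        """Memory rate = number of memorized patterns / total number of neurons."""
        return self.n_stored / (self.n_cue + self.n_recall)
```

For example, a model of this shape with 1,000 cue neurons and 64 recall-net neurons that has stored 1,000 patterns would have a memory rate of 1000 / 1064 ≈ 0.94; the 0.987 reported above depends on the paper's specific architecture and pattern-data constraints.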
Related papers
- What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- A bio-inspired implementation of a sparse-learning spike-based hippocampus memory model [0.0]
We propose a novel bio-inspired memory model based on the hippocampus.
It can learn memories, recall them from a cue and even forget memories when trying to learn others with the same cue.
This work presents the first hardware implementation of a fully functional bio-inspired spike-based hippocampus memory model.
arXiv Detail & Related papers (2022-06-10T07:48:29Z)
- BayesPCN: A Continually Learnable Predictive Coding Associative Memory [15.090562171434815]
BayesPCN is a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning.
Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds of "timesteps" ago without a significant drop in recall ability.
arXiv Detail & Related papers (2022-05-20T02:28:11Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no other stimuli arrive.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Hierarchical Associative Memory [2.66512000865131]
Associative Memories or Modern Hopfield Networks have many appealing properties.
They can do pattern completion, store a large number of memories, and can be described using a recurrent neural network.
This paper tackles a gap and describes a fully recurrent model of associative memory with an arbitrarily large number of layers.
arXiv Detail & Related papers (2021-07-14T01:38:40Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) and the relationships among them (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
- Online Memorization of Random Firing Sequences by a Recurrent Neural Network [12.944868613449218]
Two modes of learning/memorization are considered: The first mode is strictly online, with a single pass through the data, while the second mode uses multiple passes through the data.
In both modes, the learning is strictly local (quasi-Hebbian): at any given time step, only the weights between the neurons firing (or supposed to be firing) at the previous time step and those firing (or supposed to be firing) at the present time step are modified (a minimal sketch of such a local update follows after this list).
arXiv Detail & Related papers (2020-01-09T11:02:53Z)
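As a hedged illustration of the strictly local (quasi-Hebbian) rule summarized in the last entry above, the sketch below changes only the weights from neurons active at the previous step to neurons active (or supposed to be active) at the current step. The function name, learning rate, and binary-activity encoding are assumptions made here; this is one plausible reading, not the cited paper's code.

```python
# Illustrative sketch of a strictly local (quasi-Hebbian) online update;
# assumption for illustration only, NOT the cited paper's implementation.
import numpy as np

def quasi_hebbian_step(W, prev_active, curr_target, lr=0.1):
    """Update recurrent weights W (shape: post x pre) from a single transition.

    prev_active and curr_target are 0/1 vectors of neuron activity at time t-1
    and t (the latter may be the desired "supposed to be firing" pattern).
    Only entries W[i, j] with curr_target[i] == 1 and prev_active[j] == 1 change;
    all other weights are left untouched, which makes the rule strictly local.
    """
    W += lr * np.outer(curr_target, prev_active)
    return W
```

In the strictly online mode, such a step would be applied once per transition in a single pass over the firing sequence; the multi-pass mode would simply repeat the sweep over the data.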
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.