Benchmarking Hebbian learning rules for associative memory
- URL: http://arxiv.org/abs/2401.00335v1
- Date: Sat, 30 Dec 2023 21:49:47 GMT
- Title: Benchmarking Hebbian learning rules for associative memory
- Authors: Anders Lansner, Naresh B Ravichandran, Pawel Herman
- Abstract summary: Associative memory is a key concept in cognitive and computational brain science.
We benchmark six different learning rules on storage capacity and prototype extraction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Associative memory or content addressable memory is an important component
function in computer science and information processing and is a key concept in
cognitive and computational brain science. Many different neural network
architectures and learning rules have been proposed to model associative memory
of the brain while investigating key functions like pattern completion and
rivalry, noise reduction, and storage capacity. A less investigated but
important function is prototype extraction where the training set comprises
pattern instances generated by distorting prototype patterns and the task of
the trained network is to recall the correct prototype pattern given a new
instance. In this paper we characterize these different aspects of associative
memory performance and benchmark six different learning rules on storage
capacity and prototype extraction. We consider only models with Hebbian
plasticity that operate on sparse distributed representations with unit
activities in the interval [0,1]. We evaluate both non-modular and modular
network architectures and compare performance when trained and tested on
different kinds of sparse random binary pattern sets, including correlated
ones. We show that covariance learning has a robust but low storage capacity
under these conditions and that the Bayesian Confidence Propagation learning
rule (BCPNN) is superior by a clear margin in all cases except one, reaching a
composite score three times higher than that of the second-best learning rule tested.
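To make the benchmark concrete, below is a minimal sketch (not the paper's code) of a prototype-extraction experiment on sparse binary patterns, comparing two of the rule families named in the abstract: a covariance Hebbian rule, with weights proportional to <(x_i - a)(x_j - a)> for mean activity a, and a BCPNN-style rule, with weights log(p_ij / (p_i p_j)) and bias log p_i estimated from training co-activations. The network size, sparsity, distortion model and k-winners-take-all recall dynamics are illustrative assumptions, not the paper's protocol.

```python
# Minimal, illustrative sketch (not the paper's code): a toy prototype-extraction
# benchmark on sparse binary patterns, comparing a covariance Hebbian rule with a
# BCPNN-style rule. All sizes, the distortion model and the kWTA recall dynamics
# are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
N, K, Q = 200, 10, 5        # units, active units per pattern, number of prototypes
NOISE, R, STEPS = 3, 20, 5  # flipped units per instance, instances per prototype, recall iterations

def random_sparse(n_patterns):
    """Random binary patterns with exactly K active units out of N."""
    pats = np.zeros((n_patterns, N))
    for p in pats:
        p[rng.choice(N, K, replace=False)] = 1.0
    return pats

def distort(proto):
    """Move NOISE active units to randomly chosen inactive positions (keeps sparsity)."""
    x = proto.copy()
    on, off = np.flatnonzero(x == 1), np.flatnonzero(x == 0)
    x[rng.choice(on, NOISE, replace=False)] = 0.0
    x[rng.choice(off, NOISE, replace=False)] = 1.0
    return x

prototypes = random_sparse(Q)
train = np.array([distort(prototypes[q]) for q in range(Q) for _ in range(R)])

# Covariance rule: w_ij proportional to <(x_i - a)(x_j - a)>, with a = mean activity.
a = train.mean()
W_cov = (train - a).T @ (train - a) / len(train)
np.fill_diagonal(W_cov, 0.0)

# BCPNN-style rule: w_ij = log(p_ij / (p_i * p_j)), bias_i = log(p_i),
# with probabilities estimated from training co-activations (eps avoids log 0).
eps = 1e-4
p_i = train.mean(axis=0) + eps
p_ij = train.T @ train / len(train) + eps**2
W_bcp = np.log(p_ij / np.outer(p_i, p_i))
np.fill_diagonal(W_bcp, 0.0)
bias = np.log(p_i)

def recall(x, W, b=None):
    """Iterated k-winners-take-all dynamics: keep the K units with largest support."""
    for _ in range(STEPS):
        support = W @ x + (b if b is not None else 0.0)
        x = np.zeros(N)
        x[np.argsort(support)[-K:]] = 1.0
    return x

# Test prototype extraction: cue with a fresh distorted instance and check whether
# at least 90% of the true prototype's active units are recovered.
for name, W, b in [("covariance", W_cov, None), ("BCPNN", W_bcp, bias)]:
    hits = sum(
        recall(distort(prototypes[q]), W, b) @ prototypes[q] / K >= 0.9
        for q in range(Q)
    )
    print(f"{name}: {int(hits)}/{Q} prototypes extracted")
```

The k-winners-take-all step is just one simple way to enforce the sparse activity assumed in the abstract; the paper's networks use their own unit models and activation dynamics, and the composite score it reports aggregates performance across all tested conditions.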
Related papers
- AM-MTEEG: Multi-task EEG classification based on impulsive associative memory [6.240145569484483]
We propose a multi-task (MT) classification model, called AM-MTEEG, inspired by the principles of learning and memory in the human hippocampus.
The model treats the EEG classification of each individual as an independent task and facilitates feature sharing across individuals.
Experimental results in two BCI competition datasets show that our model improves average accuracy compared to state-of-the-art models.
arXiv Detail & Related papers (2024-09-27T01:33:45Z)
- Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of a neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented, proving to outperform previously developed methods in learning hidden probability distributions from datasets.
arXiv Detail & Related papers (2024-03-04T23:12:42Z)
- Brain-like combination of feedforward and recurrent network components achieves prototype extraction and robust pattern recognition [0.0]
Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks.
We combine a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule.
We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations.
arXiv Detail & Related papers (2022-06-30T06:03:11Z)
- Memory-Guided Semantic Learning Network for Temporal Sentence Grounding [55.31041933103645]
We propose a memory-augmented network that learns and memorizes the rarely appeared content in TSG tasks.
MGSL-Net consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module.
arXiv Detail & Related papers (2022-01-03T02:32:06Z)
- Hierarchical Variational Memory for Few-shot Learning Across Domains [120.87679627651153]
We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory.
The model is endowed with the ability to flexibly rely on features at different semantic levels if the domain shift circumstances so demand.
We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model.
arXiv Detail & Related papers (2021-12-15T15:01:29Z)
- Fine-grained Classification via Categorical Memory Networks [42.413523046712896]
We present a class-specific memory module for fine-grained feature learning.
The memory module stores the prototypical feature representation for each category as a moving average.
We integrate our class-specific memory module into a standard convolutional neural network, yielding a Categorical Memory Network.
arXiv Detail & Related papers (2020-12-12T11:50:13Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
- Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation [117.29799759864127]
3D point cloud semantic and instance segmentation is crucial and fundamental for 3D scene understanding.
Deep networks can easily forget the non-dominant cases during the learning process, resulting in unsatisfactory performance.
We propose a memory-augmented network to learn and memorize the representative prototypes that cover diverse samples universally.
arXiv Detail & Related papers (2020-01-06T01:07:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.