Associative Memories via Predictive Coding
- URL: http://arxiv.org/abs/2109.08063v1
- Date: Thu, 16 Sep 2021 15:46:26 GMT
- Title: Associative Memories via Predictive Coding
- Authors: Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha,
Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz
- Abstract summary: Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
- Score: 37.59398215921529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Associative memories in the brain receive and store patterns of activity
registered by the sensory neurons, and are able to retrieve them when
necessary. Due to their importance in human intelligence, computational models
of associative memories have been developed for several decades now. They
include autoassociative memories, which allow for storing data points and
retrieving a stored data point $s$ when provided with a noisy or partial
variant of $s$, and heteroassociative memories, able to store and recall
multi-modal data. In this paper, we present a novel neural model for realizing
associative memories, based on a hierarchical generative network that receives
external stimuli via sensory neurons. This model is trained using predictive
coding, an error-based learning algorithm inspired by information processing in
the cortex. To test the capabilities of this model, we perform multiple
retrieval experiments from both corrupted and incomplete data points. In an
extensive comparison, we show that this new model outperforms popular
associative memory models, such as autoencoders trained via backpropagation and
modern Hopfield networks, in both retrieval accuracy and robustness. In particular, in
completing partial data points, our model achieves remarkable results on
natural image datasets, such as ImageNet, with a surprisingly high accuracy,
even when only a tiny fraction of pixels of the original images is presented.
Furthermore, we show that this method is able to handle multi-modal data,
retrieving images from descriptions, and vice versa. We conclude by discussing
the possible impact of this work in the neuroscience community, by showing that
our model provides a plausible framework to study learning and retrieval of
memories in the brain, as it closely mimics the behavior of the hippocampus as
a memory index and generative model.
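To make the retrieval mechanism described in the abstract concrete, below is a minimal sketch of predictive-coding-based pattern completion. It assumes a single linear latent layer, NumPy, and illustrative hyperparameters and function names (train, retrieve); the paper's actual model is a deeper hierarchical generative network with nonlinearities, so this is only a toy illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns, n_latent=8, n_infer=100, lr_x=0.05, lr_w=0.01, epochs=200):
    """Store `patterns` (shape [N, D]) with predictive-coding updates (toy sketch)."""
    D = patterns.shape[1]
    W = rng.normal(0.0, 0.1, size=(D, n_latent))   # top-down generative weights
    for _ in range(epochs):
        for s in patterns:
            x = np.zeros(n_latent)                  # latent value nodes
            # Inference: relax the latents to reduce the sensory prediction error.
            for _ in range(n_infer):
                err = s - W @ x                     # prediction error at the sensory layer
                x += lr_x * (W.T @ err)             # gradient descent on the energy
            # Learning: local, Hebbian-like weight update on the remaining error.
            err = s - W @ x
            W += lr_w * np.outer(err, x)
    return W

def retrieve(W, observed, mask, n_infer=500, lr_x=0.05):
    """Complete a partial pattern; `mask` is True where a pixel is observed."""
    x = np.zeros(W.shape[1])
    for _ in range(n_infer):
        pred = W @ x
        err = np.where(mask, observed - pred, 0.0)  # only observed pixels constrain x
        x += lr_x * (W.T @ err)
    completion = W @ x
    return np.where(mask, observed, completion)     # keep observed pixels, fill in the rest

# Toy usage: store two random "images" and recall one from a quarter of its pixels.
data = rng.uniform(0.0, 1.0, size=(2, 100))
W = train(data)
mask = np.zeros(100, dtype=bool)
mask[:25] = True
recalled = retrieve(W, np.where(mask, data[0], 0.0), mask)
print("mean squared completion error:", np.mean((recalled - data[0]) ** 2))
```

Clamping only the observed pixels during inference is what lets the same energy minimization serve both storage and completion; the abstract reports this kind of completion from a tiny fraction of pixels on natural image datasets such as ImageNet, using the full hierarchical model.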
Related papers
- Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images [2.4861619769660637]
Image memorability refers to the phenomenon where certain images are more likely to be remembered than others.
We modeled the subjective experience of visual memorability using an autoencoder based on VGG16 Convolutional Neural Networks (CNNs).
We investigated the relationship between memorability and reconstruction error, assessed latent space representations distinctiveness, and developed a Gated Recurrent Unit (GRU) model to predict memorability likelihood.
arXiv Detail & Related papers (2024-10-19T22:58:33Z) - Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z) - Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - Classification and Generation of real-world data with an Associative
Memory Model [0.0]
We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
arXiv Detail & Related papers (2022-07-11T12:51:27Z) - Drop, Swap, and Generate: A Self-Supervised Approach for Generating
Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.