Classification and Generation of real-world data with an Associative Memory Model
- URL: http://arxiv.org/abs/2207.04827v4
- Date: Thu, 13 Jul 2023 14:06:40 GMT
- Title: Classification and Generation of real-world data with an Associative Memory Model
- Authors: Rodrigo Simas, Luis Sa-Couto, and Andreas Wichert
- Abstract summary: We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Drawing from memory the face of a friend you have not seen in years is a
difficult task. However, if you happen to cross paths, you would easily
recognize each other. Biological memory is equipped with an impressive
compression algorithm that stores the essential and then infers the details
to match perception. The Willshaw Memory is a simple abstract model of
cortical computation that implements mechanisms of biological memory. Using
our recently proposed sparse coding prescription for visual patterns, this
model can store and retrieve an impressive amount of real-world data in a
fault-tolerant manner. In this paper, we extend the capabilities of the basic
Associative Memory Model by using a Multiple-Modality framework. In this
setting, the memory stores several modalities (e.g., visual or textual) of
each pattern simultaneously. After training, the memory can be used to infer
missing modalities when just a subset is perceived. Using a simple
encoder-memory-decoder architecture, and a newly proposed iterative retrieval
algorithm for the Willshaw Model, we perform experiments on the MNIST dataset.
By storing both the images and labels as modalities, a single Memory can be
used not only to retrieve and complete patterns but also to classify and
generate new ones. We further discuss how this model could be used for other
learning tasks, thus serving as a biologically-inspired framework for learning.
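
The abstract describes a concrete pipeline: binary sparse codes for each modality are concatenated into one pattern, stored in a Willshaw memory, and a missing modality is completed by iterative thresholded retrieval. Below is a minimal Python sketch of that mechanism. The code sizes, the random sparse image code, and the threshold rule (fire when a unit's inputs cover every active cue bit) are illustrative assumptions, not the authors' implementation; the paper encodes MNIST with its own sparse coding prescription and uses a purpose-built iterative retrieval algorithm.

```python
import numpy as np

class WillshawMemory:
    """Minimal binary Willshaw associative memory (illustrative sketch)."""

    def __init__(self, n_units: int):
        # Binary synaptic matrix; a synapse, once set, is never unset.
        self.W = np.zeros((n_units, n_units), dtype=bool)

    def store(self, pattern: np.ndarray) -> None:
        # One-shot Hebbian storage: OR in the outer product of the
        # binary pattern with itself.
        p = pattern.astype(bool)
        self.W |= np.outer(p, p)

    def retrieve(self, cue: np.ndarray, n_iters: int = 5) -> np.ndarray:
        # Iterative retrieval: a unit fires when its inputs cover every
        # active cue bit; the output is fed back as the next cue until
        # the state stabilizes.
        x = cue.astype(bool)
        for _ in range(n_iters):
            sums = (self.W & x).sum(axis=1)   # per-unit overlap with cue
            y = sums >= x.sum()               # threshold = active cue bits
            if np.array_equal(y, x):
                break
            x = y
        return x

# Multiple-modality use: concatenate a sparse image code and a one-hot
# label code into a single stored pattern (sizes fabricated for the demo).
n_img, n_lbl = 64, 10
mem = WillshawMemory(n_img + n_lbl)

rng = np.random.default_rng(0)
img_code = np.zeros(n_img, dtype=bool)
img_code[rng.choice(n_img, size=6, replace=False)] = True  # toy visual code
lbl_code = np.zeros(n_lbl, dtype=bool)
lbl_code[3] = True                                         # label "3"
mem.store(np.concatenate([img_code, lbl_code]))

# Classification: perceive only the image modality and let the memory
# infer the missing label modality.
cue = np.concatenate([img_code, np.zeros(n_lbl, dtype=bool)])
completed = mem.retrieve(cue)
predicted_label = int(np.argmax(completed[n_img:]))        # -> 3
```

Generation runs the same completion in the opposite direction: cue the memory with a label code only, retrieve the visual part, and pass it through the decoder to synthesize an image. The encoder-memory-decoder split keeps the memory itself purely binary.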
Related papers
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Associative Memories in the Feature Space [68.1903319310263]
We propose a class of memory models that stores only low-dimensional semantic embeddings and uses them to retrieve similar, but not identical, memories; a minimal sketch of this idea appears after this list.
We demonstrate a proof of concept of this method on a simple task on the MNIST dataset.
arXiv Detail & Related papers (2024-02-16T16:37:48Z)
- What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z)
- On the Relationship Between Variational Inference and Auto-Associative Memory [68.8204255655161]
We study how different neural network approaches to variational inference can be applied in this framework.
We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models.
arXiv Detail & Related papers (2022-10-14T14:18:47Z)
- A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model, under a limited memory budget, that adapts to new classes without forgetting old ones.
We show that when the model size is counted into the total memory budget and methods are compared at aligned memory cost, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Kanerva++: extending the Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from that of their relationships (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
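
As referenced above, the feature-space associative memory entry describes a mechanism compact enough to sketch: store only low-dimensional embeddings and, given a query embedding, return the closest stored memory. The sketch below is an assumption-laden illustration, not that paper's model; the encoder is left abstract (any fixed feature extractor), and cosine similarity is one plausible choice of retrieval metric.

```python
import numpy as np

class FeatureSpaceMemory:
    """Sketch of an associative memory over semantic embeddings: store
    low-dimensional codes and retrieve the closest stored memory."""

    def __init__(self):
        self.keys: list[np.ndarray] = []    # stored embeddings
        self.values: list[object] = []      # payloads (e.g., images, labels)

    def store(self, embedding: np.ndarray, value: object) -> None:
        # Normalize so retrieval reduces to a dot product.
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.values.append(value)

    def retrieve(self, query: np.ndarray) -> object:
        # Cosine similarity against all stored embeddings; return the
        # payload of the best match (similar, not necessarily identical).
        q = query / np.linalg.norm(query)
        sims = np.stack(self.keys) @ q
        return self.values[int(np.argmax(sims))]
```

Because matching happens in embedding space rather than pixel space, a noisy or partial query can still land near the right stored code, which is what allows retrieval of similar-but-not-identical memories.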