An Entropic Associative Memory
- URL: http://arxiv.org/abs/2009.13058v1
- Date: Mon, 28 Sep 2020 04:24:21 GMT
- Title: An Entropic Associative Memory
- Authors: Luis A. Pineda, Gibrán Fuentes and Rafael Morales
- Abstract summary: We use Relational-Indeterminate Computing to model associative memory registers that hold distributed representations of individual objects.
The system has been used to model a visual memory holding the representations of hand-written digits.
The similarity between the cue and the object recovered in memory operations depends on the entropy of the memory register.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural memories are associative, declarative and distributed. Symbolic
computing memories resemble natural memories in their declarative character,
and information can be stored and recovered explicitly; however, they lack the
associative and distributed properties of natural memories. Sub-symbolic
memories developed within the connectionist or artificial neural networks
paradigm are associative and distributed, but are unable to express symbolic
structure and information cannot be stored and retrieved explicitly; hence,
they lack the declarative property. To address this dilemma, we use
Relational-Indeterminate Computing to model associative memory registers that
hold distributed representations of individual objects. This mode of computing
has an intrinsic computing entropy which measures the indeterminacy of
representations. This parameter determines the operational characteristics of
the memory. Associative registers are embedded in an architecture that maps
concrete images expressed in modality-specific buffers into abstract
representations, and vice versa, and the memory system as a whole fulfills the
three properties of natural memories. The system has been used to model a
visual memory holding the representations of hand-written digits, and
recognition and recall experiments show that there is a range of entropy
values, not too low and not too high, in which associative memory registers
have a satisfactory performance. The similarity between the cue and the object
recovered in memory retrieve operations depends on the entropy of the memory
register holding the representation of the corresponding object. The
experiments were implemented in a simulation using a standard computer, but a
parallel architecture could be built in which the memory operations would take
very few computing steps.
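The memory register described in the abstract can be pictured as a binary relation table: one column per feature of an object, one row per possible value, with storage marking cells and recognition checking them. The following Python sketch illustrates this idea under assumptions drawn only from the abstract; the class name, the uniform-random retrieval, and the entropy formula (the per-column average of log2 of the number of marked cells) are illustrative choices, not the authors' implementation.

```python
import math
import random

class MemoryRegister:
    """A minimal relational memory register: an n-column binary table.
    A representation of an object is one value per column. Details here
    are an illustrative sketch, not the paper's code."""

    def __init__(self, n_columns):
        self.n = n_columns
        # table[i] holds the set of values that have been stored in column i
        self.table = [set() for _ in range(n_columns)]

    def register(self, cue):
        """Store an object by marking its cell in every column."""
        for i, v in enumerate(cue):
            self.table[i].add(v)

    def recognize(self, cue):
        """Accept the cue only if all of its cells are already marked."""
        return all(cue[i] in self.table[i] for i in range(self.n))

    def retrieve(self, cue):
        """Recover an object: per column, pick one stored value at random.
        (The paper conditions this choice on the cue; here it is uniform.)"""
        if not self.recognize(cue):
            return None
        return [random.choice(sorted(self.table[i])) for i in range(self.n)]

    def entropy(self):
        """Computing entropy: average over columns of log2 of the number
        of marked cells, a measure of the register's indeterminacy."""
        return sum(math.log2(len(c)) for c in self.table if c) / self.n
```

Because stored representations share cells, a register with several objects can recognize (and retrieve) combinations of cells that were never stored as a whole; the entropy grows with the number of marked cells per column, which is why performance is satisfactory only in an intermediate entropy range.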
Related papers
- Entropic associative memory for real world images [0.7373617024876725]
We show that EAM appropriately stores, recognizes and retrieves complex and unconventional images of animals and vehicles.
The retrieved objects can be seen as proper memories, associated recollections or products of imagination.
arXiv Detail & Related papers (2024-05-21T05:00:30Z)
- Associative Memories in the Feature Space [68.1903319310263]
We propose a class of memory models that only stores low-dimensional semantic embeddings, and uses them to retrieve similar, but not identical, memories.
We demonstrate a proof of concept of this method on a simple task on the MNIST dataset.
arXiv Detail & Related papers (2024-02-16T16:37:48Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised way rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- On the Relationship Between Variational Inference and Auto-Associative Memory [68.8204255655161]
We study how different neural network approaches to variational inference can be applied in this framework.
We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models.
arXiv Detail & Related papers (2022-10-14T14:18:47Z)
- Entropic Associative Memory for Manuscript Symbols [0.0]
Manuscript symbols can be stored, recognized and retrieved from an entropic digital memory that is associative and distributed but yet declarative.
We discuss the operational characteristics of the entropic associative memory for retrieving objects with both complete and incomplete information.
arXiv Detail & Related papers (2022-02-17T02:29:33Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Memformer: A Memory-Augmented Transformer for Sequence Modeling [55.780849185884996]
We present Memformer, an efficient neural network for sequence modeling.
Our model achieves linear time complexity and constant memory space complexity when processing long sequences.
arXiv Detail & Related papers (2020-10-14T09:03:36Z)
- Robust High-dimensional Memory-augmented Neural Networks [13.82206983716435]
Memory-augmented neural networks enhance neural networks with an explicit memory to overcome the limitations of purely implicit storage.
Access to this explicit memory occurs via soft read and write operations involving every individual memory entry.
We propose a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors.
arXiv Detail & Related papers (2020-10-05T12:01:56Z)
- Distributed Associative Memory Network with Memory Refreshing Loss [5.5792083698526405]
We introduce a novel Distributed Associative Memory architecture (DAM) with Memory Refreshing Loss (MRL).
Inspired by how the human brain works, our framework encodes data with distributed representation across multiple memory blocks.
MRL enables MANN to reinforce an association between input data and task objective by reproducing input data from stored memory contents.
arXiv Detail & Related papers (2020-07-21T07:34:33Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.