Entropic Associative Memory for Manuscript Symbols
- URL: http://arxiv.org/abs/2202.08413v1
- Date: Thu, 17 Feb 2022 02:29:33 GMT
- Title: Entropic Associative Memory for Manuscript Symbols
- Authors: Rafael Morales, Noé Hernández, Ricardo Cruz, Victor D. Cruz and Luis A. Pineda
- Abstract summary: Manuscript symbols can be stored, recognized and retrieved from an entropic digital memory that is associative and distributed yet declarative.
We discuss the operational characteristics of the entropic associative memory for retrieving objects with both complete and incomplete information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manuscript symbols can be stored, recognized and retrieved from an entropic
digital memory that is associative and distributed yet declarative; memory
retrieval is a constructive operation, memory cues to objects not contained in
the memory are rejected directly without search, and memory operations can be
performed through parallel computations. Manuscript symbols, both letters and
numerals, are represented in Associative Memory Registers that have an
associated entropy. The memory recognition operation obeys an entropy trade-off
between precision and recall, and the entropy level impacts the quality of
the objects recovered through the memory retrieval operation. The present
proposal is contrasted in several dimensions with neural network models of
associative memory. We discuss the operational characteristics of the entropic
associative memory for retrieving objects with both complete and incomplete
information, such as severe occlusions. The experiments reported in this paper
add evidence of the potential of this framework for developing practical
applications and computational models of natural memory.
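The abstract's description of an Associative Memory Register (storage by direct marking, recognition without search, constructive retrieval, and a register entropy governing the precision/recall trade-off) can be illustrated with a minimal sketch. The boolean feature-by-value table, the operation names, and the column-count entropy formula below are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

class EAMRegister:
    """Hypothetical sketch of an entropic Associative Memory Register:
    a boolean table with one row per feature and one column per value."""

    def __init__(self, n_features, n_values):
        self.table = np.zeros((n_features, n_values), dtype=bool)

    def register(self, cue):
        # Store: mark cell (i, cue[i]) for each feature i (a logical OR
        # with the current table; nothing is ever overwritten).
        self.table[np.arange(len(cue)), cue] = True

    def recognize(self, cue):
        # Accept the cue only if every (feature, value) pair is already
        # marked; a foreign cue is rejected directly, with no search
        # through previously stored objects.
        return bool(self.table[np.arange(len(cue)), cue].all())

    def retrieve(self, cue, rng=None):
        # Constructive retrieval: for each feature, pick one marked value
        # at random. The more values marked per feature (higher entropy),
        # the less faithful the reconstruction.
        if rng is None:
            rng = np.random.default_rng()
        out = []
        for i, row in enumerate(self.table):
            values = np.flatnonzero(row)
            out.append(int(rng.choice(values)) if values.size else cue[i])
        return out

    def entropy(self):
        # Assumed register entropy: mean log2 of the number of marked
        # values per feature (0 when each feature holds a single value).
        counts = self.table.sum(axis=1)
        return float(np.mean(np.log2(np.maximum(counts, 1))))
```

In this toy form, storing more objects raises the register's entropy, which loosens recognition (better recall, worse precision) and makes retrieval more indeterminate, mirroring the trade-off the abstract describes.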
Related papers
- Entropic associative memory for real world images [0.7373617024876725]
We show that EAM appropriately stores, recognizes and retrieves complex and unconventional images of animals and vehicles.
The retrieved objects can be seen as proper memories, associated recollections or products of imagination.
arXiv Detail & Related papers (2024-05-21T05:00:30Z)
- Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory [66.88278207591294]
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
PANM integrates an external neural memory that uses novel physical addresses and pointer manipulation techniques to mimic human and computer symbol processing abilities.
arXiv Detail & Related papers (2024-04-18T03:03:46Z)
- Associative Memories in the Feature Space [68.1903319310263]
We propose a class of memory models that only stores low-dimensional semantic embeddings, and uses them to retrieve similar, but not identical, memories.
We demonstrate a proof of concept of this method on a simple task on the MNIST dataset.
arXiv Detail & Related papers (2024-02-16T16:37:48Z)
- On the Relationship Between Variational Inference and Auto-Associative Memory [68.8204255655161]
We study how different neural network approaches to variational inference can be applied in this framework.
We evaluate the obtained algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models.
arXiv Detail & Related papers (2022-10-14T14:18:47Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
- An Entropic Associative Memory [0.0]
We use Relational-Indeterminate Computing to model associative memory registers that hold representations of individual objects.
The system has been used to model a visual memory holding the representations of hand-written digits.
The similarity between the cue and the object recovered in memory operations depends on the entropy of the memory register.
arXiv Detail & Related papers (2020-09-28T04:24:21Z)
- Distributed Associative Memory Network with Memory Refreshing Loss [5.5792083698526405]
We introduce a novel Distributed Associative Memory architecture (DAM) with Memory Refreshing Loss (MRL).
Inspired by how the human brain works, our framework encodes data with distributed representation across multiple memory blocks.
MRL enables MANN to reinforce an association between input data and task objective by reproducing input data from stored memory contents.
arXiv Detail & Related papers (2020-07-21T07:34:33Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.