A model of semantic completion in generative episodic memory
- URL: http://arxiv.org/abs/2111.13537v1
- Date: Fri, 26 Nov 2021 15:14:17 GMT
- Title: A model of semantic completion in generative episodic memory
- Authors: Zahra Fayyaz, Aya Altamimi, Sen Cheng, Laurenz Wiskott
- Abstract summary: We propose a computational model for generative episodic memory.
The model is able to complete missing parts of a memory trace in a semantically plausible way.
We also model an episodic memory experiment and reproduce the finding that semantically congruent contexts are always recalled better than incongruent ones.
- Score: 0.6690874707758508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many different studies have suggested that episodic memory is a generative
process, but most computational models adopt a storage view. In this work, we
propose a computational model for generative episodic memory. It is based on
the central hypothesis that the hippocampus stores and retrieves selected
aspects of an episode as a memory trace, which is necessarily incomplete. At
recall, the neocortex reasonably fills in the missing information based on
general semantic information in a process we call semantic completion.
As episodes we use images of digits (MNIST) augmented by different
backgrounds representing context. Our model is based on a VQ-VAE, which
generates a compressed latent representation in the form of an index matrix
that still has some spatial resolution. We assume that attention selects some
part of the index matrix while the rest is discarded; the selected part
represents the gist of the episode and is stored as a memory trace. At recall,
the missing parts are filled in by a PixelCNN, modeling semantic completion, and the
completed index matrix is then decoded into a full image by the VQ-VAE.
The model is able to complete missing parts of a memory trace in a
semantically plausible way, up to the point where it can generate plausible
images from scratch. Due to the combinatorics of the index matrix, the model
generalizes well to images it was not trained on. Both compression and
semantic completion contribute to a strong reduction in memory requirements
and to robustness to noise. Finally, we also model an episodic memory
experiment and can reproduce that semantically congruent contexts are always
recalled better than incongruent ones, that high attention levels improve
memory accuracy in both cases, and that contexts that are not remembered
correctly are more often recalled in a semantically congruent way than
completely incorrectly.
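The recall pipeline described in the abstract (encode to an index matrix, keep only the attended part as the memory trace, complete the rest with a PixelCNN, decode with the VQ-VAE) can be summarised in a short sketch. The snippet below is a minimal illustration, assuming pre-trained `vqvae` and `pixelcnn` models; the method names `encode_to_indices`, `sample_missing`, and `decode_from_indices` are hypothetical placeholders, not the authors' code.

```python
import numpy as np

# Minimal sketch of the recall pipeline described above. `vqvae` and
# `pixelcnn` stand for pre-trained models; their method names
# (encode_to_indices, sample_missing, decode_from_indices) are hypothetical.

def encode_episode(vqvae, image):
    """VQ-VAE encoder: image -> compressed matrix of codebook indices."""
    return vqvae.encode_to_indices(image)            # e.g. an 8x8 integer grid

def store_memory_trace(index_matrix, attention_level=0.5, rng=None):
    """Attention keeps a fraction of the indices; the rest is discarded.
    The kept entries are the gist that is stored as the memory trace."""
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(index_matrix.shape) < attention_level
    return np.where(keep, index_matrix, -1)          # -1 marks a missing entry

def recall_episode(vqvae, pixelcnn, trace):
    """Semantic completion at recall: the PixelCNN fills in the missing
    indices conditioned on the stored ones, and the VQ-VAE decoder turns
    the completed index matrix back into a full image."""
    completed = pixelcnn.sample_missing(trace, missing_value=-1)
    return vqvae.decode_from_indices(completed)
```

Storing a partial matrix of small integer codes instead of the full pixel image is what drives the reduction in memory requirements mentioned in the abstract; with an attention level of zero the trace is empty and recall reduces to generating a plausible image from scratch.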
Related papers
- Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences [51.965994405124455]
Humans excel at learning abstract patterns across different sequences, filtering out irrelevant details.
Many sequence learning models lack the ability to abstract, which leads to memory inefficiency and poor transfer.
We introduce a non-parametric hierarchical variable learning model (HVM) that learns chunks from sequences and abstracts contextually similar chunks as variables.
arXiv Detail & Related papers (2024-10-27T18:13:07Z)
- Associative Memories in the Feature Space [68.1903319310263]
We propose a class of memory models that store only low-dimensional semantic embeddings and use them to retrieve similar, but not identical, memories.
We demonstrate a proof of concept of this method on a simple task on the MNIST dataset (see the toy retrieval sketch after this entry).
arXiv Detail & Related papers (2024-02-16T16:37:48Z)
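As a loose illustration of the retrieval idea in the entry above, the following sketch stores only low-dimensional embeddings of MNIST-sized vectors and returns the most similar stored memory for a noisy query. The PCA encoder, the 16-dimensional code, and the random stand-in data are assumptions for illustration only, not the paper's model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Toy stand-in for MNIST: 1000 random 28x28 "images" flattened to 784 dims.
# In practice these would be real images and a learned semantic encoder.
rng = np.random.default_rng(0)
images = rng.random((1000, 784))

# Store only low-dimensional embeddings instead of the raw images.
encoder = PCA(n_components=16).fit(images)
memory = encoder.transform(images)                 # the stored "memories"
index = NearestNeighbors(n_neighbors=1).fit(memory)

def retrieve(query_image):
    """Map the query into the embedding space and return the index of the
    most similar stored memory (similar, but generally not identical)."""
    z = encoder.transform(query_image.reshape(1, -1))
    _, idx = index.kneighbors(z)
    return int(idx[0, 0])

# A noisy version of a stored image should recover that image's index.
print(retrieve(images[3] + 0.05 * rng.standard_normal(784)))
```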
- What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z)
- From seeing to remembering: Images with harder-to-reconstruct representations leave stronger memory traces [4.012995481864761]
We present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory.
In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models (see the sketch after this entry).
arXiv Detail & Related papers (2023-02-21T01:40:32Z)
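The entry above links memorability to how hard an image's feature embedding is to reconstruct under a sparse code. Below is a generic sketch of computing such a reconstruction residual with scikit-learn's `DictionaryLearning`; the random stand-in features, dictionary size, and sparsity level are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy feature embeddings standing in for image features (e.g. CNN activations).
rng = np.random.default_rng(0)
features = rng.standard_normal((500, 64))

# Learn a sparse dictionary on the features and sparsely code each item.
dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit_transform(features)

# Reconstruction residual per item: a larger residual means the embedding is
# "harder to reconstruct", which the entry reports as predicting stronger
# memory traces.
reconstruction = codes @ dico.components_
residual = np.linalg.norm(features - reconstruction, axis=1)
print(residual[:5])
```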
- Memories are One-to-Many Mapping Alleviators in Talking Face Generation [31.55290250247604]
Talking face generation aims at generating photo-realistic video portraits of a target person driven by input audio.
In this paper, we propose MemFace to complement the missing information with an implicit memory and an explicit memory.
Our experimental results show that the proposed MemFace consistently and significantly surpasses state-of-the-art results across multiple scenarios.
arXiv Detail & Related papers (2022-12-09T17:45:36Z)
- Classification and Generation of real-world data with an Associative Memory Model [0.0]
We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
arXiv Detail & Related papers (2022-07-11T12:51:27Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the storage of the relationships among them (relational memory).
Our proposed two-memory model achieves competitive results on a diverse range of machine learning tasks (see the sketch below).
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
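To make the item/relational split of the last entry concrete, here is a minimal, generic sketch in which an item memory stores individual patterns and a Hebbian-style co-occurrence matrix stores which items appeared together. It illustrates only the general separation, not the paper's self-attentive architecture.

```python
import numpy as np

class TwoMemoryStore:
    """Minimal illustration: item memory = list of stored patterns,
    relational memory = Hebbian co-occurrence matrix over item indices."""

    def __init__(self, n_slots):
        self.items = []                                # item memory
        self.relations = np.zeros((n_slots, n_slots))  # relational memory

    def store_episode(self, patterns):
        """Store each pattern individually and record that they co-occurred."""
        ids = []
        for p in patterns:
            self.items.append(np.asarray(p))
            ids.append(len(self.items) - 1)
        for i in ids:
            for j in ids:
                if i != j:
                    self.relations[i, j] += 1.0        # Hebbian-style update

    def related_items(self, item_id):
        """Retrieve the items that co-occurred with the given one."""
        strengths = self.relations[item_id, :len(self.items)]
        return [self.items[j] for j in np.nonzero(strengths)[0]]

mem = TwoMemoryStore(n_slots=10)
mem.store_episode([np.array([1, 0, 1]), np.array([0, 1, 1])])
print(mem.related_items(0))   # -> the pattern stored alongside item 0
```

Keeping the two stores separate mirrors the motivation stated in the entry: items can be recalled individually, while the relational matrix answers which experiences occurred together.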