From seeing to remembering: Images with harder-to-reconstruct
representations leave stronger memory traces
- URL: http://arxiv.org/abs/2302.10392v1
- Date: Tue, 21 Feb 2023 01:40:32 GMT
- Title: From seeing to remembering: Images with harder-to-reconstruct
representations leave stronger memory traces
- Authors: Qi Lin, Zifan Li, John Lafferty, Ilker Yildirim
- Abstract summary: We present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory.
In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models.
- Score: 4.012995481864761
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Much of what we remember is not due to intentional selection, but simply a
by-product of perceiving. This raises a foundational question about the
architecture of the mind: How does perception interface with and influence
memory? Here, inspired by a classic proposal relating perceptual processing to
memory durability, the level-of-processing theory, we present a sparse coding
model for compressing feature embeddings of images, and show that the
reconstruction residuals from this model predict how well images are encoded
into memory. In an open memorability dataset of scene images, we show that
reconstruction error not only explains memory accuracy but also response
latencies during retrieval, subsuming, in the latter case, all of the variance
explained by powerful vision-only models. We also confirm a prediction of this
account with 'model-driven psychophysics'. This work establishes reconstruction
error as a novel signal interfacing perception and memory, possibly through
adaptive modulation of perceptual processing.
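The paper's core signal, how much of an embedding a sparse code fails to reconstruct, can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not the authors' model: it uses a random dictionary and greedy orthogonal matching pursuit in place of their sparse coding of deep feature embeddings, and every name and parameter below is a placeholder.

```python
import numpy as np

def omp_residual(x, D, k=8):
    """Greedy orthogonal matching pursuit: approximate x with at most
    k dictionary atoms and return the norm of what remains unexplained."""
    residual = x.copy()
    support = []
    for _ in range(k):
        i = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if i not in support:
            support.append(i)
        # Re-fit x on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return float(np.linalg.norm(residual))

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms

# A signal that is genuinely sparse in the dictionary is easy to reconstruct...
a0 = np.zeros(256)
a0[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
easy = D @ a0
# ...while a dense, arbitrary embedding leaves a larger residual.
hard = rng.standard_normal(64)

print("easy residual:", omp_residual(easy, D))
print("hard residual:", omp_residual(hard, D))
```

Under the paper's account, the embedding with the larger residual corresponds to the image more likely to be encoded into memory; the actual model applies this compression to deep visual features rather than random vectors.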
Related papers
- Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images [2.4861619769660637]
Image memorability refers to the phenomenon where certain images are more likely to be remembered than others.
We modeled the subjective experience of visual memorability using an autoencoder based on VGG16 Convolutional Neural Networks (CNNs)
We investigated the relationship between memorability and reconstruction error, assessed latent space representations distinctiveness, and developed a Gated Recurrent Unit (GRU) model to predict memorability likelihood.
arXiv Detail & Related papers (2024-10-19T22:58:33Z) - What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z) - Not All Image Regions Matter: Masked Vector Quantization for
Autoregressive Image Generation [78.13793505707952]
Existing autoregressive models follow the two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook.
We propose a novel two-stage framework built around a Masked Quantization VAE (MQ-VAE), which masks redundant region features before quantization to avoid modeling redundancy.
arXiv Detail & Related papers (2023-05-23T02:15:53Z) - Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracies in ImageNet-LT, Places-LT and Webvision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z) - Classification and Generation of real-world data with an Associative
Memory Model [0.0]
We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
arXiv Detail & Related papers (2022-07-11T12:51:27Z) - A model of semantic completion in generative episodic memory [0.6690874707758508]
We propose a computational model for generative episodic memory.
The model is able to complete missing parts of a memory trace in a semantically plausible way.
We also model an episodic memory experiment and can reproduce the finding that semantically congruent contexts are recalled better than incongruent ones.
arXiv Detail & Related papers (2021-11-26T15:14:17Z) - Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z) - Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z) - CNN with large memory layers [2.368995563245609]
This work is centred around the recently proposed product key memory structure, implemented for a number of computer vision applications.
The memory structure can be regarded as a simple computation primitive suitable to be augmented to nearly all neural network architectures.
arXiv Detail & Related papers (2021-01-27T20:58:20Z) - Pyramid Attention Networks for Image Restoration [124.34970277136061]
Self-similarity is an image prior widely used in image restoration algorithms.
Recent advanced deep convolutional neural network based methods for image restoration do not take full advantage of self-similarities.
We present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid.
arXiv Detail & Related papers (2020-04-28T21:12:36Z) - Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory).
We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.