Memory Encoding Model
- URL: http://arxiv.org/abs/2308.01175v1
- Date: Wed, 2 Aug 2023 14:29:10 GMT
- Title: Memory Encoding Model
- Authors: Huzheng Yang, James Gee, Jianbo Shi
- Abstract summary: We explore a new class of brain encoding model that adds memory-related information as input.
During a vision-memory cognitive task, we found that activity in the non-visual brain is largely predictable from previously seen images.
- Score: 14.943061215875655
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We explore a new class of brain encoding model by adding memory-related
information as input. Memory is an essential brain mechanism that works
alongside visual stimuli. During a vision-memory cognitive task, we found that
the non-visual brain is largely predictable from previously seen images. Our
Memory Encoding Model (Mem) won the Algonauts 2023 visual brain competition
even without model ensembling (single-model score 66.8, ensemble score 70.8). Our
ensemble model without memory input (61.4) would still place 3rd.
Furthermore, we observe a periodic delayed brain response correlated with the 6th-7th
prior image, and the hippocampus also showed correlated activity timed with this
periodicity. We conjecture that this periodic replay could be related to a memory
mechanism that enhances working memory.
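The abstract's core idea, predicting brain responses from the current stimulus plus memory of prior stimuli, can be illustrated with a standard linear encoding-model setup. The sketch below is not the authors' actual Mem architecture; it is a minimal, hypothetical version in which each trial's image features are concatenated with the features of the previous seven images and a closed-form ridge regression maps the stacked features to voxel responses. All names, dimensions, and the choice of ridge regression are illustrative assumptions.

```python
import numpy as np

def build_memory_features(image_feats, n_prev=7):
    """Concatenate each trial's image features with the features of the
    previous n_prev images (zero-padded at the start of the run)."""
    n_trials, d = image_feats.shape
    padded = np.vstack([np.zeros((n_prev, d)), image_feats])
    # For trial t, stack features of images t, t-1, ..., t-n_prev.
    return np.hstack([padded[n_prev - k : n_prev - k + n_trials]
                      for k in range(n_prev + 1)])

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Toy data as stand-ins for image embeddings and fMRI voxel responses.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16))   # 100 trials, 16-dim image features
voxels = rng.standard_normal((100, 8))   # responses of 8 voxels

X = build_memory_features(feats, n_prev=7)
W = fit_ridge(X, voxels, alpha=10.0)
pred = X @ W
print(X.shape)  # (100, 128): current image + 7 previous, 16 dims each
```

A memory-only variant of this sketch (dropping the k=0 block) would correspond to predicting non-visual regions purely from previously seen images, which is the effect the abstract reports.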
Related papers
- What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z)
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information about others and hierarchical attention to selectively retrieve it.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- An associative memory model with very high memory rate: Image storage by sequential addition learning [0.0]
This system realizes bidirectional learning between one cue neuron in the cue ball and the neurons in the recall net.
It can memorize many patterns and recall these patterns or those that are similar at any time.
arXiv Detail & Related papers (2022-10-08T02:56:23Z)
- A bio-inspired implementation of a sparse-learning spike-based hippocampus memory model [0.0]
We propose a novel bio-inspired memory model based on the hippocampus.
It can learn memories, recall them from a cue and even forget memories when trying to learn others with the same cue.
This work presents the first hardware implementation of a fully functional bio-inspired spike-based hippocampus memory model.
arXiv Detail & Related papers (2022-06-10T07:48:29Z)
- BayesPCN: A Continually Learnable Predictive Coding Associative Memory [15.090562171434815]
BayesPCN is a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning.
Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds of "timesteps" ago without a significant drop in recall ability.
arXiv Detail & Related papers (2022-05-20T02:28:11Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the relationships among them (relational memory).
We achieve competitive results with our proposed two-memory model across a diverse set of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.