Semantic and episodic memories in a predictive coding model of the neocortex
- URL: http://arxiv.org/abs/2509.01987v1
- Date: Tue, 02 Sep 2025 06:13:16 GMT
- Title: Semantic and episodic memories in a predictive coding model of the neocortex
- Authors: Lucie Fontaine, Frédéric Alexandre
- Abstract summary: Complementary Learning Systems theory holds that intelligent agents need two learning systems. Semantic memory is encoded in the neocortex with dense, overlapping representations and acquires structured knowledge. Episodic memory is encoded in the hippocampus with sparse, pattern-separated representations and quickly learns the specifics of individual experiences.
- Score: 1.70266830658388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complementary Learning Systems theory holds that intelligent agents need two learning systems. Semantic memory is encoded in the neocortex with dense, overlapping representations and acquires structured knowledge. Episodic memory is encoded in the hippocampus with sparse, pattern-separated representations and quickly learns the specifics of individual experiences. Recently, this duality between semantic and episodic memories has been challenged by predictive coding, a biologically plausible neural network model of the neocortex which was shown to have hippocampus-like abilities on auto-associative memory tasks. These results raise the question of the episodic capabilities of the neocortex and their relation to semantic memory. In this paper, we present such a predictive coding model of the neocortex and explore its episodic capabilities. We show that this kind of model can indeed recall the specifics of individual examples, but only if it is trained on a small number of examples. The model is overfitted to these examples and does not generalize well, suggesting that episodic memory can arise from semantic learning. Indeed, a model trained with many more examples loses its recall capabilities. This work suggests that individual examples can be encoded gradually in the neocortex using dense, overlapping representations, but only in limited numbers, motivating the need for sparse, pattern-separated representations as found in the hippocampus.
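The recall behaviour described in the abstract is easy to picture with a small sketch. Below is a minimal generative predictive coding network in the spirit of this line of work (see also "Associative Memories via Predictive Coding" in the related papers): latent activities relax to minimize prediction errors, weights learn from the residual errors, and recall clamps the known half of a pattern while inference fills in the rest. The architecture, sizes, learning rates, and update rules here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class PCNet:
    """Two-layer predictive coding net: a latent vector z generates the
    observation via pred = W @ tanh(z); inference and learning both act
    to reduce the prediction error y - pred."""

    def __init__(self, n_latent, n_obs, lr_w=0.01, lr_z=0.1):
        self.W = rng.normal(0.0, 0.1, (n_obs, n_latent))
        self.lr_w, self.lr_z = lr_w, lr_z

    def infer(self, y, mask=None, n_steps=100):
        """Relax z by gradient descent on the prediction error. Entries of y
        where mask is False are unclamped and track the model's own
        prediction, which is what performs pattern completion at recall."""
        z = np.zeros(self.W.shape[1])
        y = y.copy()
        for _ in range(n_steps):
            fz = np.tanh(z)
            pred = self.W @ fz
            if mask is not None:
                y[~mask] = pred[~mask]
            err = y - pred
            z += self.lr_z * (1.0 - fz**2) * (self.W.T @ err)
        return z, self.W @ np.tanh(z)

    def train(self, patterns, epochs=200):
        """Alternate inference with a Hebbian-style weight update on the
        residual error, as in standard predictive coding training."""
        for _ in range(epochs):
            for y in patterns:
                z, _ = self.infer(y, n_steps=30)
                err = y - self.W @ np.tanh(z)
                self.W += self.lr_w * np.outer(err, np.tanh(z))

# Store a few random binary patterns, then recall one from a half-erased cue.
n_obs, n_latent = 64, 128
patterns = rng.choice([-1.0, 1.0], size=(4, n_obs))
net = PCNet(n_latent, n_obs)
net.train(patterns)

mask = np.zeros(n_obs, dtype=bool)
mask[: n_obs // 2] = True          # clamp the first half of the pattern
cue = patterns[0].copy()
cue[~mask] = 0.0                   # erase the second half
_, recalled = net.infer(cue, mask=mask, n_steps=200)
print("recall overlap:", np.mean(np.sign(recalled) == patterns[0]))
```

Storing many more patterns relative to the latent capacity should degrade the recall overlap, qualitatively matching the overfitting-based account of episodic recall given in the abstract.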
Related papers
- Concept-Guided Interpretability via Neural Chunking [64.6429903327095]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract recurring chunks on a neural population level. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z)
- Synaptic Theory of Chunking in Working Memory [0.5735035463793009]
We introduce a synaptic theory of chunking, in which short-term synaptic plasticity enables the formation of chunk representations in working memory. We show that a specialized population of "chunking neurons" selectively controls groups of stimulus-responsive neurons, akin to gating. Our work provides a novel conceptual and analytical framework for understanding how the brain organizes information in real time.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
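A minimal sketch of the difference-in-differences idea, assuming per-instance log-likelihoods measured before and after a training step on models that did (treated) or did not (control) see the instances; the paper's actual estimator is more careful about timing and aggregation:

```python
import numpy as np

def did_memorisation(treated_before, treated_after, control_before, control_after):
    """Per-instance difference-in-differences memorisation estimate.

    Arguments are arrays of per-instance log-likelihoods. Subtracting the
    control models' gain removes the general improvement every instance
    enjoys from training, isolating the causal effect of having seen it."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical log-likelihoods for three instances:
print(did_memorisation(np.array([-5.0, -4.0, -6.0]),
                       np.array([-1.0, -3.5, -2.0]),
                       np.array([-5.0, -4.0, -6.0]),
                       np.array([-4.0, -3.6, -5.0])))
# -> [3.  0.1 3. ] : large values flag memorised instances.
```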
arXiv Detail & Related papers (2024-06-06T17:59:09Z)
- What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z)
- Can Neural Network Memorization Be Localized? [102.68044087952913]
We show that memorization is a phenomenon confined to a small set of neurons in various layers of the model.
We propose a new form of dropout, example-tied dropout, that enables us to direct the memorization of examples to an a priori determined set of neurons.
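A loose sketch of one way to realize the example-tied idea, assuming a fixed partition into always-on shared neurons and a small private slice per example (the names, sizes, and partition rule are assumptions; the paper's scheme may differ in detail):

```python
import numpy as np

def example_tied_mask(n_neurons, n_shared, mem_per_example, example_id):
    """Binary keep-mask: shared 'generalization' neurons are always on;
    among the rest, only the slice tied to this example is on, steering
    example-specific (memorised) signal into a known set of neurons."""
    mask = np.zeros(n_neurons)
    mask[:n_shared] = 1.0
    start = n_shared + example_id * mem_per_example
    mask[start : start + mem_per_example] = 1.0
    return mask

h = np.random.default_rng(0).normal(size=512)   # a hidden activation
keep = example_tied_mask(512, n_shared=384, mem_per_example=8, example_id=3)
h_train = h * keep          # training pass for example 3
h_test = h.copy()
h_test[384:] = 0.0          # test time: drop all memorization neurons
```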
arXiv Detail & Related papers (2023-07-18T18:36:29Z)
- Competitive learning to generate sparse representations for associative memory [0.0]
We propose a biologically plausible network that encodes images into codes that are suitable for associative memory.
It is organized into groups of neurons that specialize on local receptive fields, and learn through a competitive scheme.
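A minimal sketch of such a competitive scheme, assuming groups of units with local receptive fields and a hard winner-take-all rule (the group sizes, patch dimensions, and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(patches, W):
    """patches: (n_groups, patch_dim); W: (n_groups, units, patch_dim).
    Winner-take-all within each group -> one active unit per group,
    giving a sparse binary code suitable for associative memory."""
    code = np.zeros(W.shape[:2])
    for g, patch in enumerate(patches):
        code[g, np.argmax(W[g] @ patch)] = 1.0
    return code.ravel()

def learn(patches, W, lr=0.05):
    """Classic competitive rule: pull only the winning unit's weights
    toward its patch, so units specialize on their receptive fields."""
    for g, patch in enumerate(patches):
        w = np.argmax(W[g] @ patch)
        W[g, w] += lr * (patch - W[g, w])

# Toy run: 16 groups of 8 units, each seeing a 16-dimensional local patch.
W = rng.normal(0.0, 0.1, (16, 8, 16))
patches = rng.random((16, 16))
for _ in range(50):
    learn(patches, W)
print("active units:", int(encode(patches, W).sum()), "of", 16 * 8)
```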
arXiv Detail & Related papers (2023-01-05T17:57:52Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Classification and Generation of real-world data with an Associative Memory Model [0.0]
We extend the capabilities of the basic Associative Memory Model by using a Multiple-Modality framework.
By storing both the images and labels as modalities, a single Memory can be used to retrieve and complete patterns.
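To illustrate only the multiple-modality idea, the sketch below uses a classical Hopfield network as a stand-in for the paper's associative memory model: image and label bits are stored as one concatenated pattern, so clamping the image half retrieves the label half.

```python
import numpy as np

rng = np.random.default_rng(1)

# Concatenate image and label bits into one pattern per stored memory.
n_img, n_lab = 100, 10
pats = rng.choice([-1.0, 1.0], size=(3, n_img + n_lab))

W = sum(np.outer(p, p) for p in pats) / len(pats)   # Hebbian storage
np.fill_diagonal(W, 0.0)

cue = pats[0].copy()
cue[n_img:] = 0.0                  # label half unknown
for _ in range(20):                # recall: update while clamping the image half
    cue = np.sign(W @ cue)
    cue[:n_img] = pats[0][:n_img]
print("label recovered:", bool((cue[n_img:] == pats[0][n_img:]).all()))
```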
arXiv Detail & Related papers (2022-07-11T12:51:27Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.