A consequence of failed sequential learning: A computational account of developmental amnesia
- URL: http://arxiv.org/abs/2602.12547v1
- Date: Fri, 13 Feb 2026 02:55:06 GMT
- Title: A consequence of failed sequential learning: A computational account of developmental amnesia
- Authors: Qi Zhang
- Abstract summary: Developmental amnesia has been discovered to occur in children with hippocampal atrophy. This unique combination of characteristics seems to challenge the understanding that early loss of episodic memory may impede cognitive development. No computational model has been reported that is able to mimic this unique combination of characteristics.
- Score: 8.156069657157342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developmental amnesia, characterized by severely impaired episodic memory and almost normal semantic memory, has been discovered to occur in children with hippocampal atrophy. This unique combination of characteristics seems to challenge the understanding that early loss of episodic memory may impede cognitive development and result in severe mental retardation. Although a few underlying mechanisms have been suggested, no computational model has been reported that is able to mimic this unique combination of characteristics. In this study, a cognitive system is presented, and developmental amnesia is demonstrated computationally in terms of impaired episodic recall, spared recognition and spared semantic learning. Impaired sequential/spatial learning ability of the hippocampus is suggested to be the cause of such amnesia. Simulation shows that impaired sequential learning may result only in severe impairment of episodic recall, affecting neither recognition ability nor semantic learning. The spared semantic learning is in line with the view that semantic learning is largely associated with the consolidation of episodic memory, a process in which episodic memory may be activated mostly randomly rather than sequentially. Furthermore, retrograde amnesia is also simulated, and the result and its mechanism are in agreement with most computational models of amnesia reported previously.
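The dissociation the abstract describes can be illustrated with a toy sketch (not the paper's actual model; the class, the link-failure parameter, and the sample episode are all hypothetical): an episode is stored both as a set of items, which supports recognition, and as a chain of item-to-item links, which supports sequential recall. If sequential learning is impaired so that links fail to form, recall breaks down while recognition is untouched.

```python
import random

random.seed(0)

class ToyEpisodicMemory:
    """Hypothetical sketch: an episode is encoded as (a) a set of items,
    supporting recognition, and (b) item-to-item links, supporting
    sequential recall."""

    def __init__(self, link_failure_rate=0.0):
        self.items = set()   # content store: "is this item familiar?"
        self.links = {}      # sequence store: item -> next item
        self.link_failure_rate = link_failure_rate

    def encode(self, episode):
        for a, b in zip(episode, episode[1:]):
            self.items.update((a, b))  # item learning always succeeds
            # Impaired sequential learning: some links fail to form.
            if random.random() >= self.link_failure_rate:
                self.links[a] = b

    def recognize(self, item):
        return item in self.items

    def recall(self, cue):
        out = [cue]
        while out[-1] in self.links:
            out.append(self.links[out[-1]])
        return out

episode = ["park", "dog", "ball", "rain"]

healthy = ToyEpisodicMemory(link_failure_rate=0.0)
healthy.encode(episode)

impaired = ToyEpisodicMemory(link_failure_rate=0.9)
impaired.encode(episode)

# Recognition is spared: every item is still judged familiar.
print(all(impaired.recognize(x) for x in episode))
# Sequential recall is impaired: the chain breaks early.
print(healthy.recall("park"))
print(len(impaired.recall("park")) < len(episode))
```

The point of the sketch is only that recognition and recall can rely on separable stores, so damaging the sequential store alone reproduces the reported dissociation.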
Related papers
- Cognitive algorithms and systems of episodic memory, semantic memory and their learnings [8.156069657157342]
Declarative memory is made up of two dissociated parts: episodic memory and semantic memory. Lesions in the hippocampus often result in various impairments of explicit memory. This chapter reviews several cognitive systems designed to mimic explicit memory.
arXiv Detail & Related papers (2026-02-06T23:22:52Z) - AI Meets Brain: Memory Systems from Cognitive Neuroscience to Autonomous Agents [69.39123054975218]
Memory serves as the pivotal nexus bridging past and future. Recent research on autonomous agents has increasingly focused on designing efficient memory by drawing on cognitive neuroscience.
arXiv Detail & Related papers (2025-12-29T10:01:32Z) - GENESIS: A Generative Model of Episodic-Semantic Interaction [0.40286876168661084]
We introduce the Generative Episodic-Semantic Integration System (GENESIS). GENESIS formalizes memory as the interaction between two limited-capacity generative systems. It provides a principled account of memory as an active, constructive, and resource-bounded process.
arXiv Detail & Related papers (2025-10-17T17:11:13Z) - Semantic and episodic memories in a predictive coding model of the neocortex [1.70266830658388]
Complementary Learning Systems theory holds that intelligent agents need two learning systems. Semantic memory is encoded in the neocortex with dense, overlapping representations and acquires structured knowledge. Episodic memory is encoded in the hippocampus with sparse, pattern-separated representations and quickly learns the specifics of individual experiences.
arXiv Detail & Related papers (2025-09-02T06:13:16Z) - Weight Factorization and Centralization for Continual Learning in Speech Recognition [55.63455095283984]
Continually training models in a rehearsal-free, multilingual, and language-agnostic condition likely leads to catastrophic forgetting. Inspired by the ability of human brains to learn and consolidate knowledge through the waking-sleeping cycle, we propose a continual learning approach.
arXiv Detail & Related papers (2025-06-19T19:59:24Z) - Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from the catastrophic forgetting problem.
We find that 1) Vanilla language models are forgetful; 2) Pre-training leads to retentive language models; 3) Knowledge relevance and diversification significantly influence the memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z) - Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z) - Learning Human Cognitive Appraisal Through Reinforcement Memory Unit [63.83306892013521]
We propose a memory-enhancing mechanism for recurrent neural networks that exploits the effect of human cognitive appraisal in sequential assessment tasks.
We conceptualize the memory-enhancing mechanism as Reinforcement Memory Unit (RMU) that contains an appraisal state together with two positive and negative reinforcement memories.
arXiv Detail & Related papers (2022-08-06T08:56:55Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Brain-inspired feature exaggeration in generative replay for continual learning [4.682734815593623]
When learning new classes, the internal representation of previously learnt ones can often be overwritten.
Recent developments in neuroscience have uncovered a method through which the brain avoids its own form of memory interference.
This paper achieves new state-of-the-art performance on the classification of early classes in the class-incremental learning dataset CIFAR100.
arXiv Detail & Related papers (2021-10-26T10:49:02Z) - Association: Remind Your GAN not to Forget [11.653696510515807]
We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
arXiv Detail & Related papers (2020-11-27T04:43:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.