Self-recovery of memory via generative replay
- URL: http://arxiv.org/abs/2301.06030v1
- Date: Sun, 15 Jan 2023 07:28:14 GMT
- Title: Self-recovery of memory via generative replay
- Authors: Zhenglong Zhou, Geshi Yeung, Anna C. Schapiro
- Abstract summary: We propose a novel architecture that augments generative replay with an adaptive, brain-like capacity to autonomously recover memories.
We demonstrate this capacity of the architecture across several continual learning tasks and environments.
- Score: 0.8594140167290099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A remarkable capacity of the brain is its ability to autonomously reorganize
memories during offline periods. Memory replay, a mechanism hypothesized to
underlie biological offline learning, has inspired offline methods for reducing
forgetting in artificial neural networks in continual learning settings. A
memory-efficient and neurally plausible method is generative replay, which
achieves state-of-the-art performance on continual learning benchmarks.
However, unlike the brain, standard generative replay does not self-reorganize
memories when trained offline on its own replay samples. We propose a novel
architecture that augments generative replay with an adaptive, brain-like
capacity to autonomously recover memories. We demonstrate this capacity of the
architecture across several continual learning tasks and environments.
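For context, the sketch below illustrates the standard generative replay setup that the abstract takes as its starting point: a generator and a task solver learn each new task on a mix of real data and pseudo-samples produced and labelled by frozen copies of themselves from the previous task. It is a minimal, hedged PyTorch sketch under simplifying assumptions (flattened inputs scaled to [0, 1], a tiny VAE as the generator, a generic classifier passed in as `solver`); it is not the paper's self-recovery architecture, which additionally reorganizes memories when trained offline on its own replay samples.

```python
# Minimal generative replay sketch (illustrative only, not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Tiny VAE over flattened inputs in [0, 1]; replay samples come from its decoder."""
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)
        self.dec = nn.Linear(latent, dim)
        self.latent = latent

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return torch.sigmoid(self.dec(z)), mu, logvar

    def sample(self, n):
        with torch.no_grad():
            z = torch.randn(n, self.latent)
            return torch.sigmoid(self.dec(z))

def vae_loss(recon, x, mu, logvar):
    """Reconstruction term plus KL term of a standard VAE."""
    return F.binary_cross_entropy(recon, x) \
        - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def train_task(generator, solver, loader, prev_generator=None, prev_solver=None,
               replay_weight=0.5):
    """Train generator and solver on a new task while rehearsing replayed pseudo-data."""
    opt = torch.optim.Adam(list(generator.parameters()) + list(solver.parameters()),
                           lr=1e-3)
    for x, y in loader:
        x = x.view(x.size(0), -1)
        # Losses on real data from the current task.
        recon, mu, logvar = generator(x)
        gen_loss = vae_loss(recon, x, mu, logvar)
        sol_loss = F.cross_entropy(solver(x), y)
        # Generative replay: the previous generator dreams up pseudo-inputs,
        # the previous solver labels them, and both current models rehearse on them.
        if prev_generator is not None:
            x_re = prev_generator.sample(x.size(0))
            with torch.no_grad():
                soft_targets = F.softmax(prev_solver(x_re), dim=-1)
            recon_re, mu_re, logvar_re = generator(x_re)
            gen_loss = gen_loss + replay_weight * vae_loss(recon_re, x_re, mu_re, logvar_re)
            sol_loss = sol_loss + replay_weight * F.kl_div(
                F.log_softmax(solver(x_re), dim=-1), soft_targets,
                reduction="batchmean")
        opt.zero_grad()
        (gen_loss + sol_loss).backward()
        opt.step()
```

After a task finishes, frozen copies of the trained generator and solver would be passed in as `prev_generator` and `prev_solver` for the next task, so no raw data from earlier tasks needs to be stored.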
Related papers
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and require long-term memory.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC).
This novel framework combines associative memory with replay-based strategies: SHARC primarily archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z)
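As a generic illustration of the "salient segments stored via sparse encoding" idea mentioned in the entry above, the snippet below scores hidden features with a gradient-times-activation saliency measure, keeps the top-k entries per example, and stores them as sparse index/value records for later rehearsal. The model split, the saliency rule, and the record layout are assumptions made for this sketch; they are not the SHARC mechanism itself.

```python
# Illustrative saliency-guided sparse storage (not the SHARC algorithm itself).
import torch
import torch.nn.functional as F

def store_salient_features(model_body, model_head, x, y, k=32):
    """Keep only the k most salient hidden features per example as a sparse record."""
    h = model_body(x)                        # hidden features, shape (batch, dim)
    h.retain_grad()                          # non-leaf tensor: keep its gradient
    loss = F.cross_entropy(model_head(h), y)
    loss.backward()                          # NOTE: also populates parameter grads;
                                             # zero them before any real optimizer step
    saliency = (h.grad * h).abs()            # gradient-times-activation saliency
    topk = saliency.topk(k, dim=1)
    records = []
    for i in range(x.size(0)):
        records.append({
            "indices": topk.indices[i].detach().clone(),       # which features were salient
            "values": h[i, topk.indices[i]].detach().clone(),  # their stored values
            "label": int(y[i]),
        })
    return records

def reconstruct(record, dim):
    """Rebuild a dense (mostly zero) feature vector from a sparse record for rehearsal."""
    dense = torch.zeros(dim)
    dense[record["indices"]] = record["values"]
    return dense
```

A stored record could later be rehearsed by feeding `reconstruct(record, dim)` through the classifier head with a cross-entropy loss against `record["label"]`.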
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode, and hierarchical attention to selectively retrieve, information about others.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- Learning Human Cognitive Appraisal Through Reinforcement Memory Unit [63.83306892013521]
We propose a memory-enhancing mechanism for recurrent neural networks that exploits the effect of human cognitive appraisal in sequential assessment tasks.
We conceptualize the memory-enhancing mechanism as a Reinforcement Memory Unit (RMU) that contains an appraisal state together with two reinforcement memories, one positive and one negative.
arXiv Detail & Related papers (2022-08-06T08:56:55Z)
- Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning [113.58691755215663]
We develop RetroPrompt to help a model strike a balance between generalization and memorization.
In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances.
Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings.
arXiv Detail & Related papers (2022-05-29T16:07:30Z)
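The RetroPrompt entry above rests on a general retrieval-augmented pattern: index the training instances as an open-book store and retrieve the nearest ones to enrich the prompt at prediction time. The sketch below shows only that generic pattern, with a toy bag-of-words embedding and cosine similarity standing in for a learned encoder; none of it reflects RetroPrompt's actual components.

```python
# Generic retrieval-augmented prompting sketch (not RetroPrompt's actual pipeline).
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding; a real system would use a trained encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_knowledge_store(train_examples):
    """Index every training instance by its embedding (the 'open book')."""
    return [(embed(text), text, label) for text, label in train_examples]

def retrieve(store, query, k=2):
    """Return the k training instances closest to the query."""
    q = embed(query)
    scored = sorted(store, key=lambda rec: cosine(q, rec[0]), reverse=True)
    return [(text, label) for _, text, label in scored[:k]]

def build_prompt(store, query, k=2):
    """Prepend retrieved demonstrations to the query as in-context evidence."""
    demos = "\n".join(f"Input: {t}\nLabel: {l}" for t, l in retrieve(store, query, k))
    return f"{demos}\nInput: {query}\nLabel:"

# Example usage with a tiny hypothetical training set.
store = build_knowledge_store([
    ("the movie was wonderful", "positive"),
    ("a dull and boring film", "negative"),
    ("great acting and a clever plot", "positive"),
])
print(build_prompt(store, "boring plot and dull acting"))
```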
- Latent Space based Memory Replay for Continual Learning in Artificial Neural Networks [0.0]
We explore the application of latent-space-based memory replay to classification with artificial neural networks.
We are able to preserve good performance on previous tasks by storing only a small percentage of the original data as compressed latent-space representations.
arXiv Detail & Related papers (2021-11-26T02:47:51Z)
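The latent-space replay entry above amounts to: encode data with a feature extractor, keep a small buffer of compressed latent codes instead of raw inputs, and rehearse the classifier on those codes alongside new data. The PyTorch sketch below illustrates that pattern under simplifying assumptions (a frozen encoder, a classifier head trained alone, and a plain Python list as the buffer); it is not the authors' exact training recipe.

```python
# Illustrative latent-space replay (assumes a frozen encoder, linear head, list buffer).
import random
import torch
import torch.nn.functional as F

def add_to_buffer(buffer, encoder, loader, keep_fraction=0.05):
    """Encode a small fraction of the task's data and keep only the latent codes."""
    encoder.eval()
    with torch.no_grad():
        for x, y in loader:
            z = encoder(x)
            for i in range(x.size(0)):
                if random.random() < keep_fraction:
                    buffer.append((z[i].cpu(), int(y[i])))

def train_head_with_replay(head, encoder, loader, buffer, replay_batch=32, lr=1e-3):
    """Train the classifier head on new data mixed with replayed latent codes."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    encoder.eval()
    for x, y in loader:
        with torch.no_grad():
            z_new = encoder(x)               # encoder is frozen; only the head learns
        loss = F.cross_entropy(head(z_new), y)
        if buffer:
            sample = random.sample(buffer, min(replay_batch, len(buffer)))
            z_old = torch.stack([z for z, _ in sample])
            y_old = torch.tensor([lab for _, lab in sample])
            loss = loss + F.cross_entropy(head(z_old), y_old)   # rehearse old tasks
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because only low-dimensional codes are kept, the buffer can cover earlier tasks at a small fraction of the raw data's storage cost, which is the point the entry above makes.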
- Brain-inspired feature exaggeration in generative replay for continual learning [4.682734815593623]
When learning new classes, the internal representation of previously learnt ones can often be overwritten.
Recent developments in neuroscience have uncovered a method through which the brain avoids its own form of memory interference.
This paper achieves a new state-of-the-art performance on the classification of early classes in the class-incremental CIFAR100 benchmark.
arXiv Detail & Related papers (2021-10-26T10:49:02Z)
- Learning offline: memory replay in biological and artificial reinforcement learning [1.0136215038345011]
We review the functional roles of replay in the fields of neuroscience and AI.
Replay is important for memory consolidation in biological neural networks.
It is also key to stabilising learning in deep neural networks.
arXiv Detail & Related papers (2021-09-21T08:57:19Z)
- Replay in Deep Learning: Current Approaches and Missing Biological Elements [33.20770284464084]
Replay is the reactivation of one or more neural patterns.
It is thought to play a critical role in memory formation, retrieval, and consolidation.
We provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks.
arXiv Detail & Related papers (2021-04-01T15:19:08Z)
- Self-Attentive Associative Memory [69.40038844695917]
We propose to separate the storage of individual experiences (item memory) from the storage of the relationships between them (relational memory).
Our proposed two-memory model achieves competitive results across a diverse set of machine learning tasks.
arXiv Detail & Related papers (2020-02-10T03:27:48Z)
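The Self-Attentive Associative Memory entry above turns on separating item memory from relational memory. The sketch below illustrates that separation with a classical linear associative memory, in which items are stored directly and relations are accumulated as outer products of co-occurring item vectors; it is a textbook-style illustration of the two-memory idea, not the SAM architecture.

```python
# Classical item/relational memory illustration (not the SAM architecture itself).
import torch
import torch.nn.functional as F

class TwoMemory:
    def __init__(self, dim):
        self.items = []                         # item memory: the stored experiences
        self.relations = torch.zeros(dim, dim)  # relational memory: who co-occurred with whom

    def store(self, item, context=None):
        """Store an item; if a context item is given, also store their relationship."""
        self.items.append(item)
        if context is not None:
            self.relations += torch.outer(context, item)   # hetero-associative link

    def recall_item(self, cue):
        """Item recall: return the stored item most similar to the cue."""
        stacked = torch.stack(self.items)
        sims = F.cosine_similarity(stacked, cue.unsqueeze(0))
        return self.items[int(sims.argmax())]

    def recall_relation(self, cue):
        """Relational recall: what tended to co-occur with the cue?"""
        return self.relations.T @ cue

# Tiny usage example with random item vectors.
torch.manual_seed(0)
dim = 16
a, b = torch.randn(dim), torch.randn(dim)
mem = TwoMemory(dim)
mem.store(a)
mem.store(b, context=a)              # remember that b occurred in the context of a
retrieved = mem.recall_relation(a)   # proportional to b, recovered from the relation matrix
```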
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
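The last entry describes a memorization component built from a linear autoencoder for sequences. In that spirit, the sketch below uses a purely linear recurrent update to compress an input history into a fixed-size state and trains a linear decoder to reconstruct the most recent inputs from that state. The dimensions, reconstruction horizon, and training loop are illustrative assumptions rather than the Linear Memory Network's exact formulation.

```python
# Minimal linear sequence autoencoder sketch (illustrative, not the exact LMN formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearSeqAutoencoder(nn.Module):
    """Encode a sequence with a purely linear recurrence, decode the last few inputs back."""
    def __init__(self, input_dim, state_dim, horizon):
        super().__init__()
        self.A = nn.Linear(state_dim, state_dim, bias=False)   # state-to-state map
        self.B = nn.Linear(input_dim, state_dim, bias=False)   # input-to-state map
        self.decoder = nn.Linear(state_dim, horizon * input_dim, bias=False)
        self.input_dim, self.state_dim, self.horizon = input_dim, state_dim, horizon

    def encode(self, seq):                       # seq: (time, batch, input_dim)
        state = torch.zeros(seq.size(1), self.state_dim)
        for x_t in seq:                          # linear update, no nonlinearity
            state = self.A(state) + self.B(x_t)
        return state

    def forward(self, seq):
        state = self.encode(seq)
        return self.decoder(state).view(seq.size(1), self.horizon, self.input_dim)

# Train it to reconstruct the last `horizon` inputs from the final state.
torch.manual_seed(0)
model = LinearSeqAutoencoder(input_dim=8, state_dim=64, horizon=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    seq = torch.randn(10, 32, 8)                 # (time, batch, features)
    target = seq[-4:].permute(1, 0, 2)           # last 4 steps, batch-first
    loss = F.mse_loss(model(seq), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```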
This list is automatically generated from the titles and abstracts of the papers on this site.