Latent Space based Memory Replay for Continual Learning in Artificial
Neural Networks
- URL: http://arxiv.org/abs/2111.13297v1
- Date: Fri, 26 Nov 2021 02:47:51 GMT
- Title: Latent Space based Memory Replay for Continual Learning in Artificial
Neural Networks
- Authors: Haitz Sáez de Ocáriz Borde
- Abstract summary: We explore the application of latent space based memory replay for classification using artificial neural networks.
We are able to preserve good performance on previous tasks by storing only a small percentage of the original data as compressed latent-space representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memory replay may be key to learning in biological brains, which manage to
learn new tasks continually without catastrophically interfering with previous
knowledge. On the other hand, artificial neural networks suffer from
catastrophic forgetting and tend to only perform well on tasks that they were
recently trained on. In this work we explore the application of latent space
based memory replay for classification using artificial neural networks. We are
able to preserve good performance on previous tasks by storing only a small
percentage of the original data as a compressed latent-space representation.
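As a rough illustration of the idea described in the abstract, the following minimal sketch (PyTorch; not the authors' code) keeps a frozen encoder that compresses inputs into latent vectors, stores a small fraction of each finished task in that compressed form, and replays the stored latents while a classifier head is trained on a new task. All names (LatentReplayBuffer, encoder, head, train_task), the buffer fraction, and the choice of a frozen encoder are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of latent-space memory replay, assuming a frozen encoder
# and a trainable classifier head (illustrative; not the paper's implementation).
import random
import torch
import torch.nn.functional as F


class LatentReplayBuffer:
    """Keeps a small percentage of past data as compressed latent vectors."""

    def __init__(self, fraction=0.05):
        self.fraction = fraction
        self.latents, self.labels = [], []

    def add_task(self, encoder, dataset):
        # Encode and store only a small random subset of the finished task.
        keep = random.sample(range(len(dataset)),
                             max(1, int(self.fraction * len(dataset))))
        with torch.no_grad():
            for i in keep:
                x, y = dataset[i]
                self.latents.append(encoder(x.unsqueeze(0)).squeeze(0))
                self.labels.append(int(y))

    def sample(self, batch_size):
        idx = random.sample(range(len(self.latents)),
                            min(batch_size, len(self.latents)))
        return (torch.stack([self.latents[i] for i in idx]),
                torch.tensor([self.labels[i] for i in idx]))


def train_task(encoder, head, loader, buffer, optimizer, device="cpu"):
    """Train the head on the current task while replaying stored latents."""
    encoder.eval()  # the feature extractor stays frozen in this sketch
    head.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            z_new = encoder(x)                    # compress the current batch
        loss = F.cross_entropy(head(z_new), y)    # loss on the new task
        if buffer.latents:                        # replay compressed memories
            z_old, y_old = buffer.sample(x.size(0))
            loss = loss + F.cross_entropy(head(z_old.to(device)),
                                          y_old.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because only low-dimensional latent vectors are stored rather than raw inputs, the buffer's memory footprint stays small even when a few percent of every previous task is retained.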
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC).
This novel framework synergizes associative memory with replay-based strategies. SHARC primarily archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z) - Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture [5.9360953869782325]
We present Memoria, a memory system for artificial neural networks.
Results prove the effectiveness of Memoria in the diverse tasks of sorting, language modeling, and classification.
Engram analysis reveals that Memoria exhibits the primacy, recency, and temporal contiguity effects which are characteristics of human memory.
arXiv Detail & Related papers (2023-10-04T09:40:46Z) - Self-recovery of memory via generative replay [0.8594140167290099]
We propose a novel architecture that augments generative replay with an adaptive, brain-like capacity to autonomously recover memories.
We demonstrate this capacity of the architecture across several continual learning tasks and environments.
arXiv Detail & Related papers (2023-01-15T07:28:14Z) - Saliency-Augmented Memory Completion for Continual Learning [8.243137410556495]
How to forget is a problem that continual learning must address.
Our paper proposes a new saliency-augmented memory completion framework for continual learning.
arXiv Detail & Related papers (2022-12-26T18:06:39Z) - Continual learning benefits from multiple sleep mechanisms: NREM, REM,
and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z) - Learning Bayesian Sparse Networks with Full Experience Replay for
Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z) - Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
arXiv Detail & Related papers (2021-05-04T16:50:40Z) - Replay in Deep Learning: Current Approaches and Missing Biological
Elements [33.20770284464084]
Replay is the reactivation of one or more neural patterns.
It is thought to play a critical role in memory formation, retrieval, and consolidation.
We provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks.
arXiv Detail & Related papers (2021-04-01T15:19:08Z) - Improving Computational Efficiency in Visual Reinforcement Learning via
Stored Embeddings [89.63764845984076]
We present Stored Embeddings for Efficient Reinforcement Learning (SEER)
SEER is a simple modification of existing off-policy deep reinforcement learning methods.
We show that SEER does not degrade the performance of RL agents while significantly saving computation and memory.
arXiv Detail & Related papers (2021-03-04T08:14:10Z) - Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.