Wake-Sleep Consolidated Learning
- URL: http://arxiv.org/abs/2401.08623v1
- Date: Wed, 6 Dec 2023 18:15:08 GMT
- Title: Wake-Sleep Consolidated Learning
- Authors: Amelia Sorrenti, Giovanni Bellitto, Federica Proietto Salanitri,
Matteo Pennisi, Simone Palazzo, Concetto Spampinato
- Abstract summary: We propose Wake-Sleep Consolidated Learning to improve the performance of deep neural networks for visual classification tasks.
Our method learns continually via the synchronization between distinct wake and sleep phases.
We evaluate the effectiveness of our approach on three benchmark datasets.
- Score: 9.596781985154927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose Wake-Sleep Consolidated Learning (WSCL), a learning strategy
leveraging Complementary Learning System theory and the wake-sleep phases of
the human brain to improve the performance of deep neural networks for visual
classification tasks in continual learning settings. Our method learns
continually via the synchronization between distinct wake and sleep phases.
During the wake phase, the model is exposed to sensory input and adapts its
representations, ensuring stability through a dynamic parameter freezing
mechanism and storing episodic memories in a short-term temporary memory
(similarly to what happens in the hippocampus). During the sleep phase, the
training process is split into NREM and REM stages. In the NREM stage, the
model's synaptic weights are consolidated using replayed samples from the
short-term and long-term memory and the synaptic plasticity mechanism is
activated, strengthening important connections and weakening unimportant ones.
In the REM stage, the model is exposed to previously-unseen realistic visual
sensory experience, and the dreaming process is activated, which enables the
model to explore the potential feature space, thus preparing synapses for future
knowledge. We evaluate the effectiveness of our approach on three benchmark
datasets: CIFAR-10, Tiny-ImageNet and FG-ImageNet. In all cases, our method
outperforms the baselines and prior work, yielding a significant performance
gain on continual visual classification tasks. Furthermore, we demonstrate the
usefulness of all processing stages and the importance of dreaming to enable
positive forward transfer.
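Below is a minimal sketch, in PyTorch on synthetic data, of how the wake / NREM / REM cycle described in the abstract might be organised. The buffer handling, the gradient-damping stand-in for synaptic plasticity, and the entropy-based dreaming objective are illustrative assumptions, not the authors' implementation; the dynamic parameter-freezing mechanism of the wake phase is omitted for brevity.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for the continual learner.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

short_term, long_term = [], []  # episodic ("hippocampal") buffer and consolidated long-term memory


def wake_phase(x, y):
    """Adapt to incoming sensory input and store episodic memories short-term."""
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    optimizer.step()
    short_term.extend(zip(x, y))  # hippocampus-like temporary storage


def nrem_stage(replay_size=8):
    """Consolidate weights by replaying samples from short- and long-term memory."""
    pool = short_term + long_term
    if not pool:
        return
    batch = random.sample(pool, min(replay_size, len(pool)))
    x = torch.stack([s for s, _ in batch])
    y = torch.stack([t for _, t in batch])
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    # Toy stand-in for synaptic plasticity: damp updates to low-gradient weights,
    # keep updates to high-gradient ("important") connections.
    for p in model.parameters():
        if p.grad is not None:
            important = (p.grad.abs() > p.grad.abs().mean()).float()
            p.grad.mul_(0.5 + 0.5 * important)
    optimizer.step()


def rem_stage(dreams):
    """Expose the model to unseen inputs so the feature space is pre-shaped for future tasks."""
    optimizer.zero_grad()
    probs = model(dreams).softmax(dim=1)
    # Entropy minimisation is only an illustrative dreaming objective.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    optimizer.step()


# One wake/sleep cycle on synthetic data.
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
wake_phase(x, y)
nrem_stage()
rem_stage(torch.randn(16, 3, 32, 32))
long_term.extend(short_term)  # sleep consolidates episodic memories into long-term storage
short_term.clear()
```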
Related papers
- MuDreamer: Learning Predictive World Models without Reconstruction [58.0159270859475]
We present MuDreamer, a robust reinforcement learning agent that builds upon the DreamerV3 algorithm by learning a predictive world model without the need for reconstructing input signals.
Our method achieves comparable performance on the Atari100k benchmark while benefiting from faster training.
arXiv Detail & Related papers (2024-05-23T22:09:01Z)
- Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs [13.250455334302288]
Evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning.
We equip Recurrent Neural Networks with plasticity rules to enable them to adapt their parameters according to ongoing experiences.
Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories.
arXiv Detail & Related papers (2023-02-07T03:42:42Z)
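As a rough illustration of the plasticity rules mentioned in the entry above, here is a hedged PyTorch sketch of an RNN cell whose recurrent weights combine a slow, gradient-trained component with a fast Hebbian trace. The specific update (an exponentially decaying outer-product trace with learned plasticity gains) is one common formulation and an assumption here, not necessarily the rule used in that paper.

```python
import torch
import torch.nn as nn


class HebbianRNNCell(nn.Module):
    """Vanilla RNN cell whose recurrent weights are modulated by a Hebbian trace."""

    def __init__(self, input_size, hidden_size, eta=0.1):
        super().__init__()
        self.w_in = nn.Linear(input_size, hidden_size)
        self.w_rec = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.01)
        self.alpha = nn.Parameter(torch.zeros(hidden_size, hidden_size))  # learned plasticity gains
        self.eta = eta  # plasticity learning rate

    def forward(self, x, h, hebb):
        # Effective recurrent weights = slow (gradient-trained) + fast (Hebbian) components.
        w_eff = self.w_rec + self.alpha * hebb
        h_new = torch.tanh(self.w_in(x) + h @ w_eff.t())
        # Hebbian update: strengthen connections between co-active pre- and post-synaptic units.
        hebb = (1 - self.eta) * hebb + self.eta * torch.einsum("bi,bj->ji", h, h_new) / h.size(0)
        return h_new, hebb


# Tiny usage example on random sequences.
cell = HebbianRNNCell(input_size=8, hidden_size=16)
h, hebb = torch.zeros(4, 16), torch.zeros(16, 16)
for t in range(5):
    h, hebb = cell(torch.randn(4, 8), h, hebb)
print(h.shape, hebb.shape)  # torch.Size([4, 16]) torch.Size([16, 16])
```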
- Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z)
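Of the three sleep components named in the title above, synaptic downscaling is the easiest to illustrate: after a sleep phase, all weights are scaled down proportionally and the weakest connections are pruned. The sketch below is a hedged stand-in; the scaling factor and pruning threshold are arbitrary placeholders, not values from the paper.

```python
import torch
import torch.nn as nn


def synaptic_downscaling(model: nn.Module, factor: float = 0.9, floor: float = 1e-3):
    """Globally scale down weights after sleep, pruning those that fall below a floor."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name.endswith("weight"):
                p.mul_(factor)            # proportional downscaling preserves relative strengths
                p[p.abs() < floor] = 0.0  # very weak synapses are effectively removed


net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
synaptic_downscaling(net)
```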
- Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks [4.736525128377909]
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware.
We propose a robust and effective learning mechanism by modeling the associative plasticity between synapses.
Our approaches achieve superior performance on static and state-of-the-art neuromorphic datasets.
arXiv Detail & Related papers (2022-07-24T06:12:23Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representation consecutively to produce a representation that emphasizes the novel information in the frame at the current time stamp.
SRL sharply outperforms existing state-of-the-art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z)
- Memory semantization through perturbed and adversarial dreaming [0.7874708385247353]
We propose that rapid-eye-movement (REM) dreaming is essential for efficient memory semantization.
We implement a cortical architecture with hierarchically organized feedforward and feedback pathways, inspired by generative adversarial networks (GANs).
Our results suggest that adversarial dreaming during REM sleep is essential for extracting memory contents, while dreaming during NREM sleep improves the robustness of the latent representation to noisy sensory inputs.
arXiv Detail & Related papers (2021-09-09T13:31:13Z)
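The adversarial-dreaming idea referenced above can be sketched, under heavy assumptions, as a plain GAN pair in PyTorch: a generator produces "dreams" from latent seeds and a discriminator is trained to tell them apart from stored episodes, while the generator learns to fool it. This generic setup is only a stand-in; the paper's cortical model with hierarchical feedforward and feedback pathways is considerably richer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generator turns a latent "dream seed" into a sensory-like pattern; the
# discriminator judges whether a pattern is a stored episode or a dream.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)


def rem_dream_step(real_episodes):
    """One adversarial 'dreaming' step on a batch of stored episodes."""
    z = torch.randn(real_episodes.size(0), 16)
    dreams = G(z)
    real_target = torch.ones(real_episodes.size(0), 1)
    fake_target = torch.zeros(real_episodes.size(0), 1)

    # Discriminator: separate stored episodes (label 1) from dreams (label 0).
    d_loss = (F.binary_cross_entropy_with_logits(D(real_episodes), real_target)
              + F.binary_cross_entropy_with_logits(D(dreams.detach()), fake_target))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make dreams indistinguishable from stored episodes.
    g_loss = F.binary_cross_entropy_with_logits(D(dreams), real_target)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()


rem_dream_step(torch.randn(32, 28 * 28))  # random vectors standing in for replayed memories
```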
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Association: Remind Your GAN not to Forget [11.653696510515807]
We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
arXiv Detail & Related papers (2020-11-27T04:43:15Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead of storing past samples, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
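The buffer-free internal replay described above can be illustrated, under heavy assumptions, as a simple model-inversion loop in PyTorch: random inputs are optimised until the current network itself "recalls" a past class, and the synthesised samples are then replayed with the network's own predictions as soft targets. This is an illustrative reconstruction of replay without stored data, not the paper's actual procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))


def recall_samples(model, target_class, n=8, steps=20, lr=0.1):
    """Synthesise inputs the current model strongly associates with a past class."""
    x = torch.randn(n, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Maximise the logit of the recalled class (gradient ascent via a negated loss).
        loss = -model(x)[:, target_class].mean()
        loss.backward()
        opt.step()
    return x.detach()


# Generate "recalled" samples for class 3 and replay them with the model's
# own (pre-update) predictions as soft targets.
replay_x = recall_samples(model, target_class=3)
with torch.no_grad():
    soft_targets = model(replay_x).softmax(dim=1)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
optimizer.zero_grad()  # also clears gradients accumulated during the inversion loop
replay_loss = F.kl_div(model(replay_x).log_softmax(dim=1), soft_targets, reduction="batchmean")
replay_loss.backward()
optimizer.step()
```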
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.