Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
- URL: http://arxiv.org/abs/2209.05245v1
- Date: Fri, 9 Sep 2022 13:45:27 GMT
- Title: Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
- Authors: Brian S. Robinson, Clare W. Lau, Alexander New, Shane M. Nichols, Erik C. Johnson, Michael Wolmetz, and William G. Coon
- Abstract summary: Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
- Score: 51.316408685035526
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning new tasks and skills in succession without losing prior learning
(i.e., catastrophic forgetting) is a computational challenge for both
artificial and biological neural networks, yet artificial systems struggle to
achieve parity with their biological analogues. Mammalian brains employ
numerous neural operations in support of continual learning during sleep. These
are ripe for artificial adaptation. Here, we investigate how modeling three
distinct components of mammalian sleep together affects continual learning in
artificial neural networks: (1) a veridical memory replay process observed
during non-rapid eye movement (NREM) sleep; (2) a generative memory replay
process linked to REM sleep; and (3) a synaptic downscaling process which has
been proposed to tune signal-to-noise ratios and support neural upkeep. We find
benefits from the inclusion of all three sleep components when evaluating
performance on a continual learning CIFAR-100 image classification benchmark.
Maximum accuracy improved during training and catastrophic forgetting was
reduced during later tasks. While some catastrophic forgetting persisted over
the course of network training, higher levels of synaptic downscaling led to
better retention of early tasks and further facilitated the recovery of early
task accuracy during subsequent training. One key takeaway is that choosing the
level of synaptic downscaling involves a trade-off: more aggressive downscaling
better protects early tasks, while less aggressive downscaling enhances the
ability to learn new tasks. Intermediate levels strike a balance and yield the
highest overall accuracies during training. Overall, our
results both provide insight into how to adapt sleep components to enhance
artificial continual learning systems and highlight areas for future
neuroscientific sleep research to further such systems.
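To make the three modeled sleep components concrete, the sketch below shows one way they could be interleaved with wake training. It is a minimal illustration under assumed names (the toy network, the `generator` used for REM-like generative replay, the frozen `teacher` that labels generated samples, `replay_buffer`, and `downscale_factor` are all placeholders), not the authors' implementation.

```python
# Minimal sketch (assumed names, not the paper's code) of interleaving the three
# modeled sleep mechanisms with wake training on a CIFAR-100-sized classifier.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                    nn.Linear(256, 100))              # toy CIFAR-100 classifier
opt = torch.optim.SGD(net.parameters(), lr=0.01)
replay_buffer = []                                    # stored (image, label) pairs

def nrem_replay(batch_size=32):
    """NREM-like phase: veridical replay of exact examples stored from earlier tasks."""
    if len(replay_buffer) < batch_size:
        return
    xs, ys = zip(*random.sample(replay_buffer, batch_size))
    x, y = torch.stack(xs), torch.tensor(ys)
    opt.zero_grad()
    F.cross_entropy(net(x), y).backward()
    opt.step()

def rem_replay(generator, teacher, batch_size=32):
    """REM-like phase: generative replay. `generator` (latent noise -> images) and
    the frozen `teacher` snapshot that labels its samples are assumptions here."""
    with torch.no_grad():
        x = generator(torch.randn(batch_size, 64))
        y = teacher(x).argmax(dim=1)
    opt.zero_grad()
    F.cross_entropy(net(x), y).backward()
    opt.step()

def synaptic_downscaling(factor=0.99):
    """Multiplicatively shrink all parameters. Smaller factors protect earlier
    tasks more strongly but make new tasks harder to learn."""
    with torch.no_grad():
        for p in net.parameters():
            p.mul_(factor)

def sleep(generator, teacher, downscale_factor=0.99, steps=100):
    """One simplified sleep phase, run after wake training on each new task."""
    for _ in range(steps):
        nrem_replay()
        rem_replay(generator, teacher)
    synaptic_downscaling(downscale_factor)
```

In this framing, the downscaling factor is the knob behind the trade-off noted above: factors near 1 favor learning new tasks, while smaller factors more strongly protect earlier ones.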
Related papers
- TACOS: Task Agnostic Continual Learning in Spiking Neural Networks [1.703671463296347]
Catastrophic interference, the loss of previously learned information when learning new information, remains a major challenge in machine learning.
We show that neuro-inspired mechanisms such as synaptic consolidation and metaplasticity can mitigate catastrophic interference in a spiking neural network.
Our model, TACOS, combines neuromodulation with complex synaptic dynamics to enable new learning while protecting previous information.
arXiv Detail & Related papers (2024-08-16T15:42:16Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the limitations of purely neural learning is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities (a generic toy illustration of such a rule appears after this list).
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Wake-Sleep Consolidated Learning [9.596781985154927]
We propose Wake-Sleep Consolidated Learning to improve the performance of deep neural networks for visual classification tasks.
Our method learns continually via the synchronization between distinct wake and sleep phases.
We evaluate the effectiveness of our approach on three benchmark datasets.
arXiv Detail & Related papers (2023-12-06T18:15:08Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- SI-SD: Sleep Interpreter through awake-guided cross-subject Semantic Decoding [5.283755248013948]
We design a novel cognitive neuroscience experiment and collect a comprehensive, well-annotated electroencephalography (EEG) dataset from 134 subjects during both wakefulness and sleep.
We develop SI-SD that enhances sleep semantic decoding through the position-wise alignment of neural latent sequence between wakefulness and sleep.
arXiv Detail & Related papers (2023-09-28T14:06:34Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Learning offline: memory replay in biological and artificial reinforcement learning [1.0136215038345011]
We review the functional roles of replay in the fields of neuroscience and AI.
Replay is important for memory consolidation in biological neural networks.
It is also key to stabilising learning in deep neural networks.
arXiv Detail & Related papers (2021-09-21T08:57:19Z)
- Association: Remind Your GAN not to Forget [11.653696510515807]
We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
arXiv Detail & Related papers (2020-11-27T04:43:15Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
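As an aside to the Hebbian orthogonal-projection entry above, the toy snippet below illustrates the generic, well-known principle that a Hebbian update combined with anti-Hebbian decorrelation can extract the principal subspace of its inputs. It uses a Sanger/Oja-style rule purely for illustration; it is not the cited paper's method, and all names and constants are assumptions.

```python
# Toy illustration (not the cited paper's method): a Hebbian update with a
# lower-triangular anti-Hebbian term (Sanger's rule) extracts a principal subspace.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10)) @ rng.normal(size=(10, 10)) * 0.3  # correlated inputs
W = rng.normal(scale=0.1, size=(3, 10))                            # 3 output units
lr = 5e-3

for x in X:
    y = W @ x
    # Hebbian term y x^T minus an anti-Hebbian term that decorrelates the
    # outputs and keeps the weights bounded.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Rows of W now approximately align with the top-3 principal directions of X.
```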