Replay in Deep Learning: Current Approaches and Missing Biological
Elements
- URL: http://arxiv.org/abs/2104.04132v2
- Date: Fri, 28 May 2021 21:01:25 GMT
- Title: Replay in Deep Learning: Current Approaches and Missing Biological
Elements
- Authors: Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann,
Terrence J. Sejnowski, Christopher Kanan
- Abstract summary: Replay is the reactivation of one or more neural patterns.
It is thought to play a critical role in memory formation, retrieval, and consolidation.
We provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks.
- Score: 33.20770284464084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Replay is the reactivation of one or more neural patterns, which are similar
to the activation patterns experienced during past waking experiences. Replay
was first observed in biological neural networks during sleep, and it is now
thought to play a critical role in memory formation, retrieval, and
consolidation. Replay-like mechanisms have been incorporated into deep
artificial neural networks that learn over time to avoid catastrophic
forgetting of previous knowledge. Replay algorithms have been successfully used
in a wide range of deep learning methods within supervised, unsupervised, and
reinforcement learning paradigms. In this paper, we provide the first
comprehensive comparison between replay in the mammalian brain and replay in
artificial neural networks. We identify multiple aspects of biological replay
that are missing in deep learning systems and hypothesize how they could be
utilized to improve artificial neural networks.
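The replay mechanisms referred to in the abstract typically take the form of rehearsal: a small buffer of past examples is interleaved with new data during training so the network keeps revisiting old patterns. A minimal sketch of that idea in PyTorch (all class and function names are illustrative, not the paper's code):

```python
import random
import torch

class ReplayBuffer:
    """Tiny buffer of past (input, label) pairs for rehearsal."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []

    def add(self, x, y):
        if len(self.data) >= self.capacity:
            self.data.pop(random.randrange(len(self.data)))  # random eviction keeps the buffer bounded
        self.data.append((x, y))

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, loss_fn, x_new, y_new, buffer, replay_k=32):
    """One update on new data mixed with replayed old data."""
    if buffer.data:
        x_old, y_old = buffer.sample(replay_k)
        x = torch.cat([x_new, x_old])
        y = torch.cat([y_new, y_old])
    else:
        x, y = x_new, y_new
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    for xi, yi in zip(x_new, y_new):   # remember some of the new examples for later replay
        buffer.add(xi, yi)
    return loss.item()
```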
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
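The summary above alludes to Hebbian and anti-Hebbian updates extracting the principal subspace of neural activity. A classic, much simpler illustration of that general idea is Oja's subspace rule; the sketch below is not the paper's lateral-connection method, only a toy example of Hebbian subspace extraction:

```python
import numpy as np

def oja_subspace(X, k=2, lr=1e-3, epochs=50, seed=0):
    """Purely local Hebbian-style updates that converge to the top-k principal subspace."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(k, d))       # k output units, d inputs
    for _ in range(epochs):
        for x in X:
            y = W @ x                             # Hebbian feedforward activation
            # Hebbian term (y x^T) plus a decorrelating anti-Hebbian-like term (-y y^T W)
            W += lr * (np.outer(y, x) - np.outer(y, y) @ W)
    return W

# Usage sketch: the rows of W should roughly span the top-2 principal subspace of X.
X = np.random.default_rng(1).normal(size=(500, 10)) @ np.diag(np.linspace(2.0, 0.1, 10))
W = oja_subspace(X, k=2)
```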
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
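As one concrete illustration of a brain-inspired alternative to exact backpropagation (a family such surveys typically cover), the sketch below shows feedback alignment, where errors are propagated through a fixed random matrix rather than the transposed forward weights. This is a generic toy example, not the survey's own taxonomy or code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 8, 16, 1
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))
B = rng.normal(scale=0.1, size=(d_hid, d_out))   # fixed random feedback weights

def step(x, y, lr=0.05):
    """One training step of a two-layer net using feedback alignment."""
    global W1, W2
    h = np.tanh(W1 @ x)                  # forward pass
    y_hat = W2 @ h
    e = y_hat - y                        # output error
    delta_h = (B @ e) * (1 - h**2)       # error routed via B, not W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(0.5 * e @ e)
```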
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
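A very reduced reading of one of those components, synaptic downscaling, is to shrink all weights multiplicatively during an offline "sleep" phase interleaved with task training. The sketch below shows only that simplification, not the paper's NREM/REM modelling:

```python
import torch

def synaptic_downscaling(model, factor=0.99):
    """Multiplicatively shrink all weights, a crude stand-in for sleep-related downscaling."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(factor)

def train_with_sleep(model, optimizer, loss_fn, task_loaders, sleep_every=100):
    """Tasks arrive sequentially; a 'sleep' step is applied periodically."""
    step = 0
    for loader in task_loaders:
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            step += 1
            if step % sleep_every == 0:      # periodic offline phase
                synaptic_downscaling(model)
```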
arXiv Detail & Related papers (2022-09-09T13:45:27Z)
- Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity [9.453554184019108]
Hebbian plasticity is believed to play a pivotal role in biological memory.
We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity.
We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities.
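As a much simpler stand-in for Hebbian plasticity serving as memory, the sketch below uses the textbook Hebbian outer-product rule to store and recall binary patterns; it is illustrative only and not the spiking architecture described above:

```python
import numpy as np

def hebbian_store(patterns):
    """Local Hebbian update w_ij += p_i p_j over a set of +/-1 patterns."""
    d = patterns.shape[1]
    W = np.zeros((d, d))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, steps=5):
    """Iterate the recurrent dynamics until the pattern settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

# Usage: store two patterns, then recall the first from a partially corrupted cue.
rng = np.random.default_rng(0)
patterns = np.sign(rng.normal(size=(2, 64)))
W = hebbian_store(patterns)
cue = patterns[0].copy()
cue[:10] *= -1                               # corrupt part of the pattern
print(np.mean(recall(W, cue) == patterns[0]))
```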
arXiv Detail & Related papers (2022-05-23T12:48:37Z)
- Latent Space based Memory Replay for Continual Learning in Artificial Neural Networks [0.0]
We explore the application of latent space based memory replay for classification using artificial neural networks.
We are able to preserve good performance in previous tasks by storing only a small percentage of the original data in a compressed latent space version.
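A minimal sketch of latent replay, with all names illustrative: past examples are stored as compressed encoder activations rather than raw inputs, and replay happens at that layer, so the memory cost per example is much smaller:

```python
import torch

class LatentReplay:
    def __init__(self, encoder, head, capacity=2000):
        self.encoder, self.head = encoder, head
        self.latents, self.labels = [], []
        self.capacity = capacity

    def store(self, x, y):
        """Compress a batch to latent space and keep it while capacity remains."""
        with torch.no_grad():
            z = self.encoder(x)
        for zi, yi in zip(z, y):
            if len(self.latents) < self.capacity:
                self.latents.append(zi)
                self.labels.append(yi)

    def replay_batch(self, k=32):
        """Sample stored latents; only the head is trained on replayed data."""
        idx = torch.randint(len(self.latents), (min(k, len(self.latents)),))
        z = torch.stack([self.latents[int(i)] for i in idx])
        y = torch.stack([self.labels[int(i)] for i in idx])
        return self.head(z), y
```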
arXiv Detail & Related papers (2021-11-26T02:47:51Z)
- Training Spiking Neural Networks Using Lessons From Deep Learning [28.827506468167652]
The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like.
Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.
A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available.
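A minimal usage sketch of the accompanying snnTorch package, following its introductory tutorials (parameter values here are arbitrary): a single leaky integrate-and-fire neuron driven by a constant input current:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)          # beta = membrane decay rate per time step
mem = lif.init_leaky()             # initialise the membrane potential
cur_in = torch.ones(1) * 0.3       # constant input current

spikes = []
for _ in range(25):
    spk, mem = lif(cur_in, mem)    # integrate input; emit a spike when threshold is crossed
    spikes.append(spk)
print(torch.stack(spikes).sum())   # total number of spikes emitted
```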
arXiv Detail & Related papers (2021-09-27T09:28:04Z)
- Learning offline: memory replay in biological and artificial reinforcement learning [1.0136215038345011]
We review the functional roles of replay in the fields of neuroscience and AI.
Replay is important for memory consolidation in biological neural networks.
It is also key to stabilising learning in deep neural networks.
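In deep reinforcement learning, the standard form of this is an experience-replay buffer of transitions sampled out of order, which decorrelates consecutive updates; a minimal sketch (names illustrative):

```python
import random
from collections import deque

class TransitionBuffer:
    """Stores (state, action, reward, next_state, done) transitions for off-policy replay."""
    def __init__(self, capacity=50_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        k = min(batch_size, len(self.buffer))
        batch = random.sample(list(self.buffer), k)   # sampled out of temporal order
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones
```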
arXiv Detail & Related papers (2021-09-21T08:57:19Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
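One simple way to read "artificial neural variability" is as transient Gaussian perturbations of the weights on each training step; the sketch below shows only that interpretation and is not necessarily the paper's exact formulation:

```python
import torch

def noisy_step(model, optimizer, loss_fn, x, y, sigma=0.01):
    """Compute gradients at temporarily noise-perturbed weights, then update the clean weights."""
    noises = []
    with torch.no_grad():
        for p in model.parameters():         # perturb weights before the forward pass
            n = torch.randn_like(p) * sigma
            p.add_(n)
            noises.append(n)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                        # restore the clean weights
    optimizer.step()                         # apply gradients computed at the noisy point
    return loss.item()
```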
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [0.0]
Continual learning algorithms are designed to accumulate and improve knowledge over a curriculum of learning experiences without forgetting.
Generative Replay consists of regenerating past learning experiences with a generative model to remember them.
We show that they are very promising methods for continual learning.
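A minimal sketch of generative replay (all names, including generator.sample, are illustrative): a frozen generator and the previous model supply pseudo-labelled samples of old tasks, which are mixed with current-task data so old knowledge is rehearsed without storing any raw past examples:

```python
import torch

def generative_replay_step(model, prev_model, generator, optimizer, loss_fn,
                           x_new, y_new, n_replay=32):
    """One update mixing real current-task data with generated pseudo-samples of past tasks."""
    with torch.no_grad():
        x_gen = generator.sample(n_replay)            # pseudo-samples of earlier tasks (hypothetical API)
        y_gen = prev_model(x_gen).argmax(dim=1)       # labelled by the frozen previous model
    x = torch.cat([x_new, x_gen])
    y = torch.cat([y_new, y_gen])
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```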
arXiv Detail & Related papers (2020-07-01T13:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.