Continual Learning and Catastrophic Forgetting
- URL: http://arxiv.org/abs/2403.05175v1
- Date: Fri, 8 Mar 2024 09:32:43 GMT
- Title: Continual Learning and Catastrophic Forgetting
- Authors: Gido M. van de Ven, Nicholas Soures, Dhireesha Kudithipudi
- Abstract summary: This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data.
An important reason is that, when learning something new, these networks tend to quickly and drastically forget what they had learned before.
- Score: 4.6159105670682194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data. Although continual learning is a natural skill for the human brain, it is very challenging for artificial neural networks. An important reason is that, when learning something new, these networks tend to quickly and drastically forget what they had learned before, a phenomenon known as catastrophic forgetting. Especially in the last decade, continual learning has become an extensively studied topic in deep learning. This book chapter reviews the insights that this field has generated.
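As an illustration of the forgetting dynamics the abstract describes, the minimal PyTorch sketch below trains a small network on one task and then on a second task, and measures how accuracy on the first task degrades. The two-task split, the model size, and the training settings are illustrative assumptions, not the chapter's experimental setup.

```python
# Minimal sketch (assumptions: two synthetic tasks, tiny MLP, plain SGD)
# showing catastrophic forgetting when tasks are learned in sequence.
import torch
import torch.nn as nn


def make_task(n=2000, dim=20, seed=0):
    # Synthetic binary classification task; each task uses a different
    # random decision boundary (a stand-in for a non-stationary data stream).
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    y = (x @ w > 0).long()
    return x, y


def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

task_a, task_b = make_task(seed=0), make_task(seed=1)

for name, (x, y) in [("task A", task_a), ("task B", task_b)]:
    for _ in range(200):                      # train on one task at a time
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    print(f"after {name}: acc A={accuracy(model, *task_a):.2f} "
          f"acc B={accuracy(model, *task_b):.2f}")
# Typically, accuracy on task A drops sharply after training on task B:
# the network overwrites the weights that encoded the first task.
```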
Related papers
- Continual Learning: Applications and the Road Forward [119.03464063873407]
Continual learning aims to allow machine learning models to continuously learn on new data, by accumulating knowledge without forgetting what was learned in the past.
This work is the result of the many discussions the authors had at the Dagstuhl seminar on Deep Continual Learning, in March 2023.
arXiv Detail & Related papers (2023-11-20T16:40:29Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from the complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
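The summary does not spell out the proposed regression framework, so the snippet below is only a hedged sketch of one common recipe for regression with spiking neurons using the snnTorch Python package (mentioned in a later entry): spikes pass through a leaky integrate-and-fire layer and a non-spiking leaky integrator is read out as the continuous prediction. The network shape and readout choice are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: regression with spiking neurons via snnTorch.
# Assumption: a leaky integrator with reset disabled serves as the
# continuous-valued readout; this is one common recipe, not necessarily
# the framework proposed in the paper.
import torch
import torch.nn as nn
import snntorch as snn


class SpikingRegressor(nn.Module):
    def __init__(self, in_dim=1, hidden=64, steps=25):
        super().__init__()
        self.steps = steps
        self.fc1 = nn.Linear(in_dim, hidden)
        self.lif1 = snn.Leaky(beta=0.9)                              # spiking hidden layer
        self.fc2 = nn.Linear(hidden, 1)
        self.readout = snn.Leaky(beta=0.9, reset_mechanism="none")   # non-spiking readout

    def forward(self, x):
        mem1 = self.lif1.init_leaky()
        mem2 = self.readout.init_leaky()
        for _ in range(self.steps):          # present the input for several time steps
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            _, mem2 = self.readout(self.fc2(spk1), mem2)
        return mem2                          # membrane potential = regression output


model = SpikingRegressor()
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = torch.sin(3 * x)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    opt.step()
```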
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Deep Learning and Artificial General Intelligence: Still a Long Way to Go [0.15229257192293197]
Deep learning using neural network architecture has been on the frontier of computer science research.
This article critically presents five major reasons why deep neural networks, in their current state, are not ready to be the technique of choice for reaching Artificial General Intelligence.
arXiv Detail & Related papers (2022-03-25T23:36:17Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
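As a concrete illustration of transferability in this sense (acquiring knowledge on one dataset and reusing it on another), the sketch below fine-tunes an ImageNet-pretrained backbone on a new task with torchvision; the specific model and the layer-freezing choice are illustrative assumptions, not the survey's benchmark or library.

```python
# Hedged sketch of knowledge reuse (transfer learning): take a pretrained
# backbone, keep its learned features, and retrain only a new task head.
# The choice of ResNet-18 and of freezing the whole backbone is an
# illustrative assumption.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False          # reuse the pretrained features as-is
backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # new 10-class head

# Only the new head is optimized; the transferred representation stays fixed.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```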
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience [0.0]
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge is a hallmark of natural intelligence.
The ability of artificial neural networks to learn across a range of tasks and domains is a clear goal of artificial intelligence.
arXiv Detail & Related papers (2021-12-28T13:50:51Z)
- Training Spiking Neural Networks Using Lessons From Deep Learning [28.827506468167652]
The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like.
Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.
A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available.
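Since the entry points to the snnTorch Python package, a minimal sketch of the kind of building block its tutorials cover is shown below: a leaky integrate-and-fire neuron simulated over discrete time steps. The constant input current, the decay rate, and the number of steps are arbitrary illustrative values.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron in snnTorch,
# the Python package referenced above. Input current, decay rate beta,
# and the number of time steps are illustrative values.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.8)        # membrane potential decays by beta each step
mem = lif.init_leaky()           # start from a reset membrane potential
current = torch.ones(1) * 0.3    # constant input current

for t in range(20):
    spk, mem = lif(current, mem) # integrate input; fire and reset at threshold
    print(f"t={t:2d}  membrane={mem.item():.3f}  spike={int(spk.item())}")
```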
arXiv Detail & Related papers (2021-09-27T09:28:04Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
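The summary states the effect of ANV but not its mechanism, so the sketch below is only a loose stand-in: it injects zero-mean Gaussian noise into hidden activations during training to mimic neural response variability, which acts as a simple implicit regularizer. The noise placement and scale are assumptions, not the paper's formulation.

```python
# Loose stand-in for artificial neural variability (assumption: variability is
# modeled as zero-mean Gaussian noise on hidden activations during training;
# this is not necessarily the paper's exact formulation).
import torch
import torch.nn as nn


class NoisyMLP(nn.Module):
    def __init__(self, in_dim=20, hidden=64, out_dim=2, noise_std=0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.noise_std = noise_std

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if self.training and self.noise_std > 0:
            h = h + self.noise_std * torch.randn_like(h)  # variable "neural" response
        return self.fc2(h)
```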
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [0.0]
Continual learning algorithms are designed to accumulate and improve knowledge over a curriculum of learning experiences without forgetting.
Generative Replay consists of regenerating past learning experiences with a generative model to remember them.
We show that they are very promising methods for continual learning.
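To make the replay idea concrete, the sketch below shows the generic structure of generative replay: while learning a new task, samples regenerated by a generative model trained on earlier tasks are mixed into each batch so the classifier keeps rehearsing old experiences. The interfaces (a `generator.sample` method and labeling by the previous model) are hypothetical placeholders, not the paper's implementation.

```python
# Generic structure of generative replay (hedged sketch). The generator,
# its sample() method, and the frozen copy of the previous classifier used
# to label replayed data are hypothetical placeholders.
import copy
import torch
import torch.nn as nn


def train_task_with_generative_replay(model, generator, loader, *,
                                      prev_model=None, replay_ratio=0.5,
                                      lr=1e-3, loss_fn=nn.CrossEntropyLoss()):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x_new, y_new in loader:
        if prev_model is not None:
            # Regenerate "past" inputs and label them with the previous model,
            # so old knowledge is rehearsed alongside the new task's data.
            n_replay = int(replay_ratio * len(x_new))
            x_old = generator.sample(n_replay)              # hypothetical API
            with torch.no_grad():
                y_old = prev_model(x_old).argmax(dim=1)
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
        else:
            x, y = x_new, y_new                             # first task: no replay
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Keep a frozen copy to label replayed samples during the next task.
    return copy.deepcopy(model).eval()
```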
arXiv Detail & Related papers (2020-07-01T13:44:33Z)
- Learning to Continually Learn [14.988129334830003]
Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML).
ANML produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
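The summary mentions the neuromodulatory inspiration but not the mechanism; as a rough, hedged sketch, ANML-style neuromodulation can be pictured as a second network that produces a per-feature sigmoid gate, multiplied element-wise into the prediction network's activations so parts of the forward pass are selectively switched on or off. The layer sizes below, and the omission of the meta-learning outer loop, are simplifications for illustration.

```python
# Rough sketch of ANML-style neuromodulatory gating (simplified: the
# meta-learning outer loop from the paper is omitted; layer sizes are
# arbitrary). A neuromodulatory network computes a sigmoid gate that is
# multiplied element-wise into the prediction network's activations.
import torch
import torch.nn as nn


class GatedPredictionNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=600):
        super().__init__()
        self.neuromodulator = nn.Sequential(         # decides where activation flows
            nn.Linear(in_dim, hidden), nn.Sigmoid())
        self.features = nn.Sequential(               # ordinary prediction pathway
            nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        gate = self.neuromodulator(x)                # per-feature gate in [0, 1]
        h = self.features(x) * gate                  # selective, input-dependent activation
        return self.classifier(h)


model = GatedPredictionNet()
logits = model(torch.randn(8, 784))                  # batch of 8 flattened inputs
```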
arXiv Detail & Related papers (2020-02-21T22:52:00Z)