A Theory for Knowledge Transfer in Continual Learning
- URL: http://arxiv.org/abs/2208.06931v1
- Date: Sun, 14 Aug 2022 22:28:26 GMT
- Title: A Theory for Knowledge Transfer in Continual Learning
- Authors: Diana Benavides-Prado and Patricia Riddle
- Abstract summary: Continual learning of tasks is an active area in deep neural networks.
Recent work has investigated forward knowledge transfer to new tasks.
We present a theory for knowledge transfer in continual supervised learning.
- Score: 7.056222499095849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning of a stream of tasks is an active area in deep neural
networks. The main challenge investigated has been the phenomenon of
catastrophic forgetting or interference of newly acquired knowledge with
knowledge from previous tasks. Recent work has investigated forward knowledge
transfer to new tasks. Backward transfer for improving knowledge gained during
previous tasks has received much less attention. There is in general limited
understanding of how knowledge transfer could aid tasks learned continually. We
present a theory for knowledge transfer in continual supervised learning, which
considers both forward and backward transfer. We aim to understand their
impact on increasingly knowledgeable learners. We derive error bounds for each
of these transfer mechanisms. These bounds are agnostic to specific
implementations (e.g., deep neural networks). We demonstrate that, for a
continual learner that observes related tasks, both forward and backward
transfer can contribute to increasing performance as more tasks are
observed.
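The error bounds themselves are derived in the paper; as a hedged point of reference for what "forward" and "backward" transfer quantify, the standard empirical transfer measures from the continual learning literature (Lopez-Paz and Ranzato, 2017) can be written as below. Here $R_{i,j}$ denotes test accuracy on task $j$ after training sequentially on the first $i$ of $T$ tasks, and $\bar{b}_i$ is the accuracy of a model trained from a random initialization on task $i$ alone; these are evaluation metrics, not the bounds of this paper.

```latex
% Accuracy-matrix convention: R_{i,j} = test accuracy on task j after
% sequentially training on tasks 1..i (T tasks in total).
% Standard empirical transfer measures (Lopez-Paz & Ranzato, 2017);
% shown for orientation only, not the error bounds derived in the paper.
\begin{align}
  \mathrm{ACC} &= \frac{1}{T} \sum_{i=1}^{T} R_{T,i} \\
  \mathrm{BWT} &= \frac{1}{T-1} \sum_{i=1}^{T-1} \left( R_{T,i} - R_{i,i} \right) \\
  \mathrm{FWT} &= \frac{1}{T-1} \sum_{i=2}^{T} \left( R_{i-1,i} - \bar{b}_i \right)
\end{align}
% \bar{b}_i: accuracy of a randomly initialized model trained only on task i
% (no-transfer baseline). BWT > 0 indicates backward transfer;
% FWT > 0 indicates forward transfer.
```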
Related papers
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z) - Is forgetting less a good inductive bias for forward transfer? [7.704064306361941]
We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner.
Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks.
Our results indicate that less forgetful representations lead to better forward transfer, suggesting a strong correlation between retaining past information and learning efficiency on new tasks.
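A minimal sketch of that kind of evaluation protocol (not necessarily the authors' exact setup): freeze the representation produced by the continual learner and fit only a lightweight probe on the new task, so the score reflects the quality of the transferred representations. The `backbone`, loader, and dimensions below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def forward_transfer_probe(backbone, new_task_loader, num_classes,
                           feat_dim, epochs=5, lr=1e-2, device="cpu"):
    """Measure how easily a new task is learned from frozen features.

    `backbone` stands in for any feature extractor produced by continual
    learning on previous tasks. Only the linear probe is trained.
    """
    backbone.eval().to(device)
    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in new_task_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():          # representations stay frozen
                feats = backbone(x)
            opt.zero_grad()
            loss = loss_fn(probe(feats), y)
            loss.backward()
            opt.step()

    # Probe accuracy (in practice, evaluate on a held-out split).
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in new_task_loader:
            x, y = x.to(device), y.to(device)
            pred = probe(backbone(x)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```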
arXiv Detail & Related papers (2023-03-14T19:52:09Z) - Online Continual Learning via the Knowledge Invariant and Spread-out
Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR-100, Split SVHN, Split CUB200 and Split Tiny-ImageNet.
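For context, the "Split" benchmarks partition a dataset's classes into a sequence of disjoint tasks presented one after another. A minimal sketch of that construction for Split CIFAR-100, using torchvision (the 10-task split is a common choice, not necessarily the one used in the paper):

```python
from torchvision import datasets, transforms
from torch.utils.data import Subset

def split_cifar100(root="./data", num_tasks=10, train=True):
    """Partition CIFAR-100 into `num_tasks` class-disjoint tasks.

    Task t contains classes [t*k, (t+1)*k) with k = 100 // num_tasks.
    Returns a list of Subsets, one per task, to be shown sequentially
    to a continual learner.
    """
    ds = datasets.CIFAR100(root=root, train=train, download=True,
                           transform=transforms.ToTensor())
    k = 100 // num_tasks
    tasks = []
    for t in range(num_tasks):
        classes = set(range(t * k, (t + 1) * k))
        idx = [i for i, y in enumerate(ds.targets) if y in classes]
        tasks.append(Subset(ds, idx))
    return tasks
```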
arXiv Detail & Related papers (2023-02-02T04:03:38Z) - Beyond Not-Forgetting: Continual Learning with Backward Knowledge
Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance of both a new task and 'old' tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed-capacity neural network without data replay.
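The summary does not spell out CUBER's mechanism; one plausible ingredient of replay-free backward transfer, shown purely as an illustrative assumption and not as the paper's actual rule, is to check whether the new task's gradient is positively aligned with a stored reference gradient of an old task before letting shared parameters move in that direction.

```python
import torch

def positively_aligned(new_grad, old_task_grad, threshold=0.0):
    """Cosine-similarity test between flattened gradient vectors.

    Illustrative assumption only: if the new task's gradient also points
    in a direction that reduces an old task's loss (cosine > threshold),
    updating shared parameters may yield backward transfer without
    replaying old data. Not necessarily CUBER's actual criterion.
    """
    cos = torch.nn.functional.cosine_similarity(
        new_grad.flatten(), old_task_grad.flatten(), dim=0)
    return cos.item() > threshold
```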
arXiv Detail & Related papers (2022-11-01T23:55:51Z) - Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with strong generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z) - Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.
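As a hedged sketch of the general idea behind prompt tuning in a continual setting (not the paper's exact architecture): the backbone model stays frozen and only a small set of per-task prompt embeddings is learned, so adding a task adds parameters rather than overwriting old ones. All names below are illustrative.

```python
import torch
import torch.nn as nn

class ContinualSoftPrompts(nn.Module):
    """Per-task soft prompts prepended to a frozen encoder's input embeddings.

    Illustrative sketch: each task owns its prompt matrix, so learning task t
    never modifies the prompts of earlier tasks or the backbone. Warm-starting
    a new prompt from an earlier one is one simple transfer option, not
    necessarily the paper's mechanism.
    """
    def __init__(self, num_tasks, prompt_len, embed_dim):
        super().__init__()
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
             for _ in range(num_tasks)])

    def forward(self, input_embeds, task_id):
        # input_embeds: (batch, seq_len, embed_dim) from the frozen backbone's
        # embedding layer; prepend this task's prompt along the sequence axis.
        batch = input_embeds.size(0)
        prompt = self.prompts[task_id].unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

    def init_from_previous(self, new_task_id, prev_task_id):
        # Simple forward-transfer heuristic: warm-start from an earlier prompt.
        with torch.no_grad():
            self.prompts[new_task_id].copy_(self.prompts[prev_task_id])
```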
arXiv Detail & Related papers (2022-03-13T13:22:41Z) - Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z) - AFEC: Active Forgetting of Negative Transfer in Continual Learning [37.03139674884091]
We show that biological neural networks can actively forget the old knowledge that conflicts with the learning of a new experience.
Inspired by this biological active forgetting, we propose to actively forget old knowledge that limits the learning of new tasks, to the benefit of continual learning.
arXiv Detail & Related papers (2021-10-23T10:03:19Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models learned without supervision toward another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant ones.
arXiv Detail & Related papers (2020-09-24T15:40:55Z) - Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning upon the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z) - Learning Transferable Concepts in Deep Reinforcement Learning [0.7161783472741748]
We show that learning discrete representations of sensory inputs can provide a high-level abstraction that is common across multiple tasks.
In particular, we show that it is possible to learn such representations by self-supervision, following an information theoretic approach.
Our method is able to learn concepts in locomotive and optimal control tasks that increase the sample efficiency in both known and unknown tasks.
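The summary does not detail how the discrete representations are obtained; a common way to discretize continuous features, shown here as an assumption for illustration rather than the authors' method, is a nearest-neighbour vector-quantization bottleneck over a learned codebook.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Map continuous features to the nearest entry of a learned codebook.

    Illustrative assumption: discretizing features through a codebook is one
    standard route to discrete, shareable abstractions; the authors'
    information-theoretic objective is not reproduced here.
    """
    def __init__(self, num_codes=64, code_dim=32):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                              # z: (batch, code_dim)
        dists = torch.cdist(z, self.codebook.weight)   # (batch, num_codes)
        codes = dists.argmin(dim=1)                    # discrete concept ids
        quantized = self.codebook(codes)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = z + (quantized - z).detach()
        return quantized, codes
```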
arXiv Detail & Related papers (2020-05-16T04:45:51Z)