Integrating Curricula with Replays: Its Effects on Continual Learning
- URL: http://arxiv.org/abs/2307.05747v2
- Date: Tue, 25 Jul 2023 15:16:33 GMT
- Title: Integrating Curricula with Replays: Its Effects on Continual Learning
- Authors: Ren Jie Tee and Mengmi Zhang
- Abstract summary: Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks.
- Score: 3.2489082010225494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans engage in learning and reviewing processes with curricula when
acquiring new skills or knowledge. This human learning behavior has inspired
the integration of curricula with replay methods in continual learning agents.
The goal is to emulate the human learning process, thereby improving knowledge
retention and facilitating learning transfer. Existing replay methods in
continual learning agents involve the random selection and ordering of data
from previous tasks, which has been shown to be effective. However, limited research
has explored the integration of different curricula with replay methods to
enhance continual learning. Our study takes initial steps in examining the
impact of integrating curricula with replay methods on continual learning in
three specific aspects: the interleaved frequency of replayed exemplars with
training data, the sequence in which exemplars are replayed, and the strategy
for selecting exemplars into the replay buffer. These aspects of curricula
design align with cognitive psychology principles and leverage the benefits of
interleaved practice during replays, easy-to-hard rehearsal, and exemplar
selection strategy involving exemplars from a uniform distribution of
difficulties. Based on our results, these three curricula effectively mitigated
catastrophic forgetting and enhanced positive knowledge transfer, demonstrating
the potential of curricula in advancing continual learning methodologies. Our
code and data are available at:
https://github.com/ZhangLab-DeepNeuroCogLab/Integrating-Curricula-with-Replays
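To make the three design aspects concrete, here is a minimal Python sketch of how such curricula could be wired into a replay loop. The difficulty proxy (e.g., a per-sample loss), the function names, and the interleaving schedule are illustrative assumptions rather than the authors' implementation; see the linked repository for the actual code.
```python
# Illustrative sketch (not the paper's code) of the three curriculum aspects:
# (1) interleaving frequency of replayed exemplars with new-task batches,
# (2) easy-to-hard ordering of replayed exemplars,
# (3) buffer selection covering a uniform spread of difficulties.
import random

def select_uniform_difficulty(scored_exemplars, buffer_size):
    """Aspect 3: keep exemplars whose difficulty scores span the whole range.
    `scored_exemplars` is a list of (difficulty, sample) pairs; difficulty is
    an assumed proxy such as per-sample training loss."""
    ranked = sorted(scored_exemplars, key=lambda p: p[0])
    step = max(1, len(ranked) // buffer_size)
    return ranked[::step][:buffer_size]  # evenly spaced ranks: easy to hard

def easy_to_hard(buffer):
    """Aspect 2: rehearse exemplars in increasing order of difficulty."""
    return sorted(buffer, key=lambda p: p[0])

def interleaved_stream(new_batches, replay_buffer, replay_every=2):
    """Aspect 1: emit one replay item after every `replay_every` new batches."""
    replay, r = easy_to_hard(replay_buffer), 0
    for step, batch in enumerate(new_batches, start=1):
        yield ("new", batch)
        if step % replay_every == 0 and replay:
            yield ("replay", replay[r % len(replay)][1])  # cycle the buffer
            r += 1

# Toy usage with synthetic difficulty scores.
past = [(random.random(), f"old_{i}") for i in range(100)]
buffer = select_uniform_difficulty(past, buffer_size=10)
for kind, item in interleaved_stream([f"batch_{i}" for i in range(6)], buffer):
    print(kind, item)
```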
Related papers
- Practical Recommendations for Replay-based Continual Learning Methods [18.559132470835937]
Continual Learning requires the model to learn from a stream of dynamic, non-stationary data without forgetting previous knowledge.
Replay approaches have empirically proven to be the most effective.
arXiv Detail & Related papers (2022-03-19T12:44:44Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER consistently improves the performance of all baselines and surpasses current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
- Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophic forgetting of previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named Learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps to continually learn a sequence of tasks, while being more robust to forgetting and having better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z)
- Augmenting Policy Learning with Routines Discovered from a Demonstration [86.9307760606403]
We propose routine-augmented policy learning (RAPL).
RAPL discovers routines composed of primitive actions from a single demonstration.
We show that RAPL improves the state-of-the-art imitation learning method SQIL and reinforcement learning method A2C.
arXiv Detail & Related papers (2020-12-23T03:15:21Z)
- Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning, driven by the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
- Meta-Learning with Sparse Experience Replay for Lifelong Language Learning [26.296412053816233]
We propose a novel approach to lifelong learning of language tasks based on meta-learning with sparse experience replay.
We show that under the realistic setting of performing a single pass on a stream of tasks, our method obtains state-of-the-art results on lifelong text classification and relation extraction.
arXiv Detail & Related papers (2020-09-10T14:36:38Z)
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach which is conceptually simple, general, modular and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z)
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [0.0]
Continual learning algorithms are designed to accumulate and improve knowledge across a curriculum of learning experiences without forgetting.
Generative Replay consists of regenerating past learning experiences with a generative model in order to remember them.
We show that replay processes are very promising methods for continual learning.
arXiv Detail & Related papers (2020-07-01T13:44:33Z)
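As a rough illustration of the generative-replay idea summarized in this last entry, the PyTorch-style sketch below regenerates pseudo-samples of past tasks with a previously trained generator and pseudo-labels them with the previous solver. The `old_generator.sample` interface and the loss mixing are assumptions made for exposition, not the surveyed work's exact method.
```python
import torch
import torch.nn.functional as F

def train_with_generative_replay(solver, old_solver, old_generator,
                                 new_loader, optimizer, replay_ratio=0.5):
    """One epoch of solver training with generative replay (sketch).
    `old_generator.sample(n)` is an assumed interface returning n pseudo-inputs
    for past tasks; `old_solver` pseudo-labels them, so no real past data is stored."""
    solver.train()
    for x_new, y_new in new_loader:
        n_replay = max(1, int(replay_ratio * x_new.size(0)))
        with torch.no_grad():
            x_old = old_generator.sample(n_replay)       # regenerate past "experiences"
            y_old = old_solver(x_old).argmax(dim=1)      # pseudo-labels from old solver
        optimizer.zero_grad()
        loss = (F.cross_entropy(solver(x_new), y_new)    # learn the new task
                + F.cross_entropy(solver(x_old), y_old)) # rehearse regenerated past
        loss.backward()
        optimizer.step()
```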