Online Fast Adaptation and Knowledge Accumulation: a New Approach to
Continual Learning
- URL: http://arxiv.org/abs/2003.05856v3
- Date: Wed, 20 Jan 2021 23:58:29 GMT
- Title: Online Fast Adaptation and Knowledge Accumulation: a New Approach to
Continual Learning
- Authors: Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin,
Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David
Vazquez, Laurent Charlin
- Abstract summary: Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones.
We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario.
We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario.
- Score: 74.07455280246212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning studies agents that learn from streams of tasks without
forgetting previous ones while adapting to new ones. Two recent
continual-learning scenarios have opened new avenues of research. In
meta-continual learning, the model is pre-trained to minimize catastrophic
forgetting of previous tasks. In continual-meta learning, the aim is to train
agents for faster remembering of previous tasks through adaptation. In their
original formulations, both methods have limitations. We stand on their
shoulders to propose a more general scenario, OSAKA, where an agent must
quickly solve new (out-of-distribution) tasks, while also requiring fast
remembering. We show that current continual learning, meta-learning,
meta-continual learning, and continual-meta learning techniques fail in this
new scenario. We propose Continual-MAML, an online extension of the popular
MAML algorithm, as a strong baseline for this scenario. We empirically show that
Continual-MAML is better suited to the new scenario than the aforementioned
methodologies, as well as standard continual learning and meta-learning
approaches.
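As a rough illustration of the kind of method the abstract describes, the Python sketch below implements an online loop in the spirit of Continual-MAML: a copy of the slow weights is fast-adapted to each incoming batch with a few gradient steps, and when a loss spike suggests the data distribution has shifted, the slow weights are consolidated with a Reptile-style first-order update. This is a minimal sketch under stated assumptions, not the authors' algorithm: the loss-spike shift heuristic, the Reptile-style outer step, and all names (inner_adapt, consolidate, online_loop, shift_threshold) are illustrative choices rather than details taken from the paper.

# Hypothetical sketch of an online MAML-style loop, not the paper's exact Continual-MAML.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def inner_adapt(model, x, y, steps=3, inner_lr=0.1):
    """Return a fast-adapted copy of `model` after a few SGD steps on one batch."""
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(fast(x), y).backward()
        opt.step()
    return fast

def consolidate(slow, fast, outer_lr=0.1):
    """Reptile-style first-order outer update: nudge slow weights toward fast ones."""
    with torch.no_grad():
        for p_slow, p_fast in zip(slow.parameters(), fast.parameters()):
            p_slow.add_(outer_lr * (p_fast - p_slow))

def online_loop(model, stream, shift_threshold=2.0):
    """Adapt to each batch; consolidate knowledge when the loss suggests a task shift."""
    for x, y in stream:  # stream yields (inputs, labels) batches; task boundaries are unannounced
        with torch.no_grad():
            pre_loss = F.cross_entropy(model(x), y).item()
        fast = inner_adapt(model, x, y)    # fast adaptation (inner loop)
        if pre_loss > shift_threshold:     # crude shift heuristic (assumption)
            consolidate(model, fast)       # knowledge accumulation (outer step)
        yield fast                         # fast weights serve predictions

if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
    toy_stream = [(torch.randn(16, 8), torch.randint(0, 4, (16,))) for _ in range(5)]
    for _adapted in online_loop(net, toy_stream):
        pass  # each adapted copy would be used to predict on the next incoming batch

The structure mirrors the split the abstract emphasises: predictions come from quickly adapted fast weights, while the slow weights accumulate knowledge only when the stream appears to have moved on to a new task.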
Related papers
- Continual Learning for Large Language Models: A Survey [95.79977915131145]
Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale.
This paper surveys recent works on continual learning for LLMs.
arXiv Detail & Related papers (2024-02-02T12:34:09Z)
- On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z)
- CoMPS: Continual Meta Policy Search [113.33157585319906]
We develop a new continual meta-learning method to address challenges in sequential multi-task learning.
We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous control tasks.
arXiv Detail & Related papers (2021-12-08T18:53:08Z)
- Generalising via Meta-Examples for Continual Learning in the Wild [24.09600678738403]
We develop a novel strategy to deal with neural networks that "learn in the wild".
We equip it with MEML - Meta-Example Meta-Learning - a new module that alleviates catastrophic forgetting.
We extend it by adopting a technique that creates various augmented tasks and optimises over the hardest.
arXiv Detail & Related papers (2021-01-28T15:51:54Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- La-MAML: Look-ahead Meta Learning for Continual Learning [14.405620521842621]
We propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory (a minimal sketch of the episodic-replay idea appears after this list).
La-MAML achieves performance superior to other replay-based, prior-based, and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
arXiv Detail & Related papers (2020-07-27T23:07:01Z)
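The La-MAML entry above mentions a small episodic memory used for online continual learning. The sketch below shows only that replay component under simple assumptions: a reservoir-sampled buffer whose contents are mixed into each training batch. It omits La-MAML's look-ahead meta-update and learned per-parameter learning rates, and the class, method names, and buffer size are hypothetical.

# Hypothetical sketch of episodic-memory replay, not La-MAML itself.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpisodicMemory:
    """Tiny reservoir-sampling buffer of past (x, y) examples (assumed design)."""
    def __init__(self, capacity=200):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, k=16):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, opt, memory, x, y):
    """One online step: train on the current batch mixed with replayed memories."""
    memory.add(x, y)
    mx, my = memory.sample()
    opt.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x, mx])), torch.cat([y, my]))
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    mem = EpisodicMemory()
    for _ in range(10):  # toy stream of batches
        x, y = torch.randn(16, 8), torch.randint(0, 4, (16,))
        replay_step(net, opt, mem, x, y)

Reservoir sampling keeps the buffer an approximately uniform sample of everything seen so far without storing the full stream, which is why small episodic memories of this kind recur as a building block in the replay-based methods listed above.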
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.