Towards Robust and Efficient Continual Language Learning
- URL: http://arxiv.org/abs/2307.05741v1
- Date: Tue, 11 Jul 2023 19:08:31 GMT
- Title: Towards Robust and Efficient Continual Language Learning
- Authors: Adam Fisch, Amal Rannen-Triki, Razvan Pascanu, Jörg Bornschein,
Angeliki Lazaridou, Elena Gribovskaya, Marc'Aurelio Ranzato
- Abstract summary: We construct a new benchmark of task sequences that target different possible transfer scenarios one might face.
We propose a simple, yet effective, learner that satisfies many of our desiderata simply by leveraging a selective strategy for initializing new models from past task checkpoints.
- Score: 36.541749819691546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the application space of language models continues to evolve, a natural
question to ask is how we can quickly adapt models to new tasks. We approach
this classic question from a continual learning perspective, in which we aim to
continue fine-tuning models trained on past tasks on new tasks, with the goal
of "transferring" relevant knowledge. However, this strategy also runs the risk
of doing more harm than good, i.e., negative transfer. In this paper, we
construct a new benchmark of task sequences that target different possible
transfer scenarios one might face, such as a sequence of tasks with high
potential for positive transfer, high potential for negative transfer, no
expected effect, or a mixture of each. An ideal learner should be able to
maximally exploit information from all tasks that have any potential for
positive transfer, while also avoiding the negative effects of any distracting
tasks that may confuse it. We then propose a simple, yet effective, learner
that satisfies many of our desiderata simply by leveraging a selective strategy
for initializing new models from past task checkpoints. Still, limitations
remain, and we hope this benchmark can help the community to further build and
analyze such learners.
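To make the selective-initialization idea concrete, below is a minimal, hypothetical sketch; the helper names, the scoring metric, and the fallback rule are assumptions of this illustration, not the paper's implementation. Before fine-tuning on a new task, every stored past-task checkpoint, plus the original pre-trained model as a fallback, is scored on a small validation split of the new task, and the best-scoring candidate is used as the initialization.

```python
# Minimal sketch of selective initialization from past task checkpoints.
# `score_on_new_task` is any cheap proxy metric (e.g. few-shot accuracy on
# a validation split of the new task); the actual selection criterion in
# the paper may differ, so treat this as an assumption-laden illustration.
from typing import Callable, Dict


def select_initialization(
    checkpoints: Dict[str, object],
    score_on_new_task: Callable[[object], float],
    base_key: str = "pretrained",
) -> str:
    """Return the key of the checkpoint to fine-tune from.

    `checkpoints` maps an identifier (a past task name, or `base_key` for
    the original pre-trained model) to a loaded model or checkpoint handle.
    """
    if base_key not in checkpoints:
        raise ValueError("keep the base pre-trained model as a fallback")
    scores = {name: score_on_new_task(model) for name, model in checkpoints.items()}
    return max(scores, key=scores.get)
```

Under this sketch's assumptions, keeping the base model among the candidates is one way to limit negative transfer: if no past checkpoint scores better than the pre-trained model on the new task's validation data, transfer is simply skipped.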
Related papers
- Is forgetting less a good inductive bias for forward transfer? [7.704064306361941]
We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner.
Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks.
Our results indicate that less forgetful representations lead to better forward transfer, suggesting a strong correlation between retaining past information and learning efficiency on new tasks.
arXiv Detail & Related papers (2023-03-14T19:52:09Z) - Robust Knowledge Transfer in Tiered Reinforcement Learning [22.303882476904295]
We study the Tiered Reinforcement Learning setting, where the goal is to transfer knowledge from the low-tier (source) task to the high-tier (target) task.
Unlike previous work, we do not assume the low-tier and high-tier tasks share the same dynamics or reward functions.
We propose novel online learning algorithms such that the high-tier task can achieve constant regret on partial states, depending on task similarity.
arXiv Detail & Related papers (2023-02-10T22:25:42Z) - ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning [59.08197876733052]
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
Learning multiple tasks simultaneously can sometimes result in lower accuracy than learning only the target task, a phenomenon known as negative transfer.
ForkMerge is a novel approach that periodically forks the model into multiple branches and automatically searches over varying task weights.
arXiv Detail & Related papers (2023-01-30T02:27:02Z) - Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance of both a new task and 'old' tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed-capacity neural network without data replay.
arXiv Detail & Related papers (2022-11-01T23:55:51Z) - An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target task training data on sequential transfer learning in the dialog domain.
Counterintuitively, our data show that the size of the target task's training data often has minimal effect on how sequential transfer learning performs compared to the same model without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z) - Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.
arXiv Detail & Related papers (2022-03-13T13:22:41Z) - Lifelong Learning of Few-shot Learners across NLP Tasks [45.273018249235705]
We study the challenge of lifelong learning to few-shot learn, i.e., learning a sequence of diverse NLP tasks from only a few examples each.
We propose a continual meta-learning approach which learns to generate adapter weights from a few examples.
We demonstrate that our approach preserves model performance on previously trained tasks and leads to positive knowledge transfer when future tasks are learned.
arXiv Detail & Related papers (2021-04-18T10:41:56Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models pretrained without supervision to another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target network even from less relevant pretrained models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z) - Transforming task representations to perform novel tasks [12.008469282323492]
An important aspect of intelligence is the ability to adapt to a novel task without any direct experience (zero-shot).
We propose a general computational framework for adapting to novel tasks based on their relationship to prior tasks.
arXiv Detail & Related papers (2020-05-08T23:41:57Z) - Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task; a toy similarity-ranking sketch follows after this list.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
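As a rough illustration of that last idea, candidate source tasks can be ranked by cosine similarity between task embeddings. How the embeddings themselves are computed follows the cited paper and is not reproduced here; in this sketch they are simply assumed to be given vectors.

```python
# Toy sketch: rank candidate source tasks by cosine similarity of their
# task embeddings to the target task's embedding. The embeddings are
# assumed inputs; their construction is left to the cited paper.
import numpy as np


def rank_source_tasks(task_embeddings, target_task):
    """Return (source_task, similarity) pairs sorted by decreasing similarity."""
    target = task_embeddings[target_task]
    target = target / np.linalg.norm(target)
    ranked = [
        (name, float(np.dot(target, vec / np.linalg.norm(vec))))
        for name, vec in task_embeddings.items()
        if name != target_task
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)


# Example with made-up 3-dimensional embeddings:
embeddings = {
    "target": np.array([0.9, 0.1, 0.0]),
    "source_a": np.array([0.7, 0.6, 0.1]),
    "source_b": np.array([0.1, 0.2, 0.9]),
}
print(rank_source_tasks(embeddings, target_task="target"))
```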