The Effect of Task Ordering in Continual Learning
- URL: http://arxiv.org/abs/2205.13323v1
- Date: Thu, 26 May 2022 12:56:15 GMT
- Title: The Effect of Task Ordering in Continual Learning
- Authors: Samuel J. Bell and Neil D. Lawrence
- Abstract summary: We show that reordering tasks significantly affects the amount of catastrophic forgetting.
We show that the effect of task ordering can be exploited to modify continual learning performance.
- Score: 12.571389210876315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the effect of task ordering on continual learning performance.
We conduct an extensive series of empirical experiments on synthetic and
naturalistic datasets and show that reordering tasks significantly affects the
amount of catastrophic forgetting. Connecting to the field of curriculum
learning, we show that the effect of task ordering can be exploited to modify
continual learning performance, and present a simple approach for doing so. Our
method computes the distance between all pairs of tasks, where distance is
defined as the source task curvature of a gradient step toward the target task.
Using statistically rigorous methods and sound experimental design, we show
that task ordering is an important aspect of continual learning that can be
modified for improved performance.
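The pairwise distance described in the abstract (the source task's curvature along a gradient step toward the target task) can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch-style illustration, not the authors' code: it assumes the distance is the curvature of the source-task loss along the target-task gradient direction, computed via a Hessian-vector product; the helper name and the normalization are assumptions.

```python
import torch

def task_distance(model, source_loss, target_loss):
    """Hypothetical helper: curvature of the source-task loss along the
    direction of a gradient step toward the target task."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Direction of a gradient step toward the target task.
    g_target = torch.autograd.grad(target_loss, params, retain_graph=True)

    # Source-task gradient, with the graph kept so it can be differentiated again.
    g_source = torch.autograd.grad(source_loss, params, create_graph=True)

    # Hessian-vector product H_source @ g_target via double backpropagation.
    dot = sum((gs * gt.detach()).sum() for gs, gt in zip(g_source, g_target))
    hvp = torch.autograd.grad(dot, params)

    # Curvature along the length-normalized target direction:
    # d(source -> target) = g_t^T H_s g_t / ||g_t||^2  (normalization is an assumption).
    num = sum((h * gt).sum() for h, gt in zip(hvp, g_target))
    den = sum((gt * gt).sum() for gt in g_target)
    return (num / den).item()
```

Computed for all ordered pairs of tasks, this yields a (generally asymmetric) distance matrix that can then inform the choice of task ordering.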
Related papers
- Does the Order of Fine-tuning Matter and Why? [11.975836356680855]
We study the effect of fine-tuning on multiple intermediate tasks, and of their ordering, on target task performance.
Experimental results show that task ordering affects target task performance, yielding up to a 6% performance gain and up to a 4% performance loss.
arXiv Detail & Related papers (2024-10-03T19:07:14Z)
- Task Weighting through Gradient Projection for Multitask Learning [5.5967570276373655]
In multitask learning, conflicts between task gradients are a frequent issue degrading a model's training performance.
In this work, we present a method to adapt the Gradient Projection algorithm PCGrad to simultaneously perform task prioritization.
Unlike traditional task weighting, which scales task losses, our weighting scheme applies only when tasks are in conflict and otherwise lets training proceed unhindered (see the sketch following this entry).
arXiv Detail & Related papers (2024-09-03T11:17:44Z)
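As an illustration of the conflict-only weighting idea summarized above, here is a minimal sketch in the spirit of PCGrad: gradients are modified only when a pair of task gradients conflicts (negative dot product), and the projection term is scaled by a per-task priority. The function name, the weighting rule, and the combination step are assumptions, not the paper's exact algorithm.

```python
import torch

def weighted_pcgrad(grads, weights):
    """grads: list of flattened per-task gradients; weights: per-task priorities."""
    adjusted = [g.clone() for g in grads]
    for i, g_i in enumerate(adjusted):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:
                # Tasks conflict: remove the conflicting component, scaled by the
                # priority of the interfering task (the scaling is an assumption).
                g_i -= weights[j] * dot / g_j.norm().pow(2) * g_j
        # Non-conflicting pairs are left untouched, so training proceeds unhindered.
    return torch.stack(adjusted).sum(dim=0)  # combined update direction
```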
- Instruction Matters: A Simple yet Effective Task Selection for Optimized Instruction Tuning of Specific Tasks [51.15473776489712]
We introduce a simple yet effective task selection method that leverages instruction information alone to identify relevant tasks.
Our method is significantly more efficient than traditional approaches, which require complex measurements of pairwise transferability between tasks or the creation of data samples for the target task.
Experimental results demonstrate that training on a small set of tasks, chosen solely based on their instructions, results in substantial improvements in performance on benchmarks such as P3, Big-Bench, NIV2, and Big-Bench Hard.
arXiv Detail & Related papers (2024-04-25T08:49:47Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances within each task, then divide them into easy-to-difficult mini-batches for training (see the sketch following this entry).
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
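A minimal sketch of the instance-level step summarized above: score each instance with some difficulty measure, sort within a task, and emit easy-to-difficult mini-batches. The difficulty function, helper name, and toy usage are assumptions; Data-CUBE's actual difficulty measure is not specified here.

```python
def easy_to_difficult_batches(instances, difficulty_fn, batch_size):
    """instances: training examples for one task; difficulty_fn: any scalar
    difficulty measure (an assumed stand-in for the paper's measure)."""
    ranked = sorted(instances, key=difficulty_fn)  # easiest first
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]

# Toy usage: treat token count as a stand-in difficulty proxy.
batches = easy_to_difficult_batches(["a b", "a", "a b c"], lambda s: len(s.split()), 2)
```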
- An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target task training data on sequential transfer learning in the dialog domain.
Counterintuitively, our data shows that the size of the target task training set often has minimal effect on how sequential transfer learning performs relative to the same model without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve performance generalization.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update would affect another task's loss (see the sketch following this entry).
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
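To make the quantity summarized above concrete, here is a hypothetical sketch of a lookahead affinity: take one gradient step on task i using a throwaway copy of the model, then measure how task j's loss changes. Function names, the learning rate, and the exact affinity formula are assumptions rather than the paper's implementation.

```python
import copy
import torch

def lookahead_affinity(model, loss_fn_i, loss_fn_j, batch_i, batch_j, lr=1e-2):
    """How much does one SGD step on task i change task j's loss?"""
    # Task j's loss before the update.
    loss_j_before = loss_fn_j(model, batch_j).item()

    # One SGD step on task i, applied to a throwaway copy of the model.
    probe = copy.deepcopy(model)
    loss_fn_i(probe, batch_i).backward()
    with torch.no_grad():
        for p in probe.parameters():
            if p.grad is not None:
                p -= lr * p.grad

    # Positive affinity: the step on task i also reduced task j's loss.
    loss_j_after = loss_fn_j(probe, batch_j).item()
    return 1.0 - loss_j_after / loss_j_before
```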
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.