ImpressLearn: Continual Learning via Combined Task Impressions
- URL: http://arxiv.org/abs/2210.01987v1
- Date: Wed, 5 Oct 2022 02:28:25 GMT
- Title: ImpressLearn: Continual Learning via Combined Task Impressions
- Authors: Dhrupad Bhardwaj, Julia Kempe, Artem Vysogorets, Angela M. Teng, and
Evaristus C. Ezekwem
- Abstract summary: This work proposes a new method to sequentially train a deep neural network on multiple tasks without suffering catastrophic forgetting.
We show that simply learning a linear combination of a small number of task-specific masks on a randomly initialized backbone network is sufficient both to retain accuracy on previously learned tasks and to achieve high accuracy on new tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work proposes a new method to sequentially train a deep neural network
on multiple tasks without suffering catastrophic forgetting, while endowing it
with the capability to quickly adapt to unseen tasks. Starting from existing
work on network masking (Wortsman et al., 2020), we show that simply learning a
linear combination of a small number of task-specific masks (impressions) on a
randomly initialized backbone network is sufficient both to retain accuracy on
previously learned tasks and to achieve high accuracy on new tasks. In
contrast to previous methods, we do not need to generate dedicated masks or
contexts for each new task; instead, we leverage transfer learning to keep
per-task parameter overhead small. Our work illustrates the power of linearly
combining individual impressions, each of which fares poorly in isolation, to
achieve performance comparable to a dedicated mask. Moreover, even repeated
impressions from the same task (homogeneous masks), when combined, can approach
the performance of heterogeneous combinations if sufficiently many impressions
are used. Our approach scales more efficiently than existing methods, often
requiring orders of magnitude fewer parameters, and can function without
modification even when task identity is missing. In addition, in the setting
where task labels are not given at inference, our algorithm offers an often
favorable alternative to the entropy-based task-inference methods proposed in
Wortsman et al. (2020). We evaluate our method on a number of well-known image
classification datasets and architectures.
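As a rough illustration of the mechanism described in the abstract, the sketch below shows a frozen, randomly initialized PyTorch layer whose weights are gated by a learned linear combination of fixed binary masks, so that only the mixing coefficients are trained for a new task. The class and variable names (ImpressionLinear, alphas) and the random placeholder masks are illustrative assumptions, not the authors' code.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImpressionLinear(nn.Module):
    """Frozen random linear layer gated by a learned mix of fixed binary masks."""
    def __init__(self, in_features, out_features, num_impressions):
        super().__init__()
        # Randomly initialized backbone weights, never updated.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # Fixed binary "impressions"; in the paper these come from mask training
        # on earlier tasks (Wortsman et al., 2020) -- random placeholders here.
        self.register_buffer(
            "masks",
            (torch.rand(num_impressions, out_features, in_features) > 0.5).float())
        # The only per-task trainable parameters: one coefficient per impression.
        self.alphas = nn.Parameter(torch.full((num_impressions,),
                                              1.0 / num_impressions))

    def forward(self, x):
        # Linearly combine the impressions, then gate the frozen backbone weights.
        combined = torch.einsum("k,koi->oi", self.alphas, self.masks)
        return F.linear(x, self.weight * combined)

# Adapting to a new task trains only the handful of mixing coefficients,
# keeping the per-task parameter overhead small.
model = nn.Sequential(ImpressionLinear(784, 256, num_impressions=4), nn.ReLU(),
                      ImpressionLinear(256, 10, num_impressions=4))
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-2)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```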
Related papers
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - TaskMix: Data Augmentation for Meta-Learning of Spoken Intent
Understanding [0.0]
We show that a state-of-the-art data augmentation method worsens the problem of overfitting when task diversity is low.
We propose a simple method, TaskMix, which synthesizes new tasks by linearly interpolating existing tasks.
We show that TaskMix outperforms baselines, alleviates overfitting when task diversity is low, and does not degrade performance even when task diversity is high.
arXiv Detail & Related papers (2022-09-26T00:37:40Z) - Interval Bound Interpolation for Few-shot Learning with Few Tasks [15.85259386116784]
Few-shot learning aims to transfer the knowledge acquired from training on a diverse set of tasks to unseen tasks with a limited amount of labeled data.
We introduce the notion of interval bounds from the provably robust training literature to few-shot learning.
We then use a novel strategy to artificially form new tasks for training by interpolating between the available tasks and their respective interval bounds.
arXiv Detail & Related papers (2022-04-07T15:29:27Z) - Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim at learning new tasks provided only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains helps improve the learning performance on the other tasks.
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which has been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z) - Ternary Feature Masks: zero-forgetting for task-incremental learning [68.34518408920661]
We propose an approach to continual learning in the task-aware regime that is free of any forgetting.
By using ternary masks we can upgrade a model to new tasks, reusing knowledge from previous tasks while not forgetting anything about them.
Our method outperforms current state-of-the-art while reducing memory overhead in comparison to weight-based approaches.
arXiv Detail & Related papers (2020-01-23T18:08:37Z)