Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks
- URL: http://arxiv.org/abs/2112.10017v1
- Date: Sat, 18 Dec 2021 22:37:30 GMT
- Title: Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks
- Authors: Zixuan Ke, Bing Liu, Xingchang Huang
- Abstract summary: No technique has been proposed that can learn a sequence of mixed similar and dissimilar tasks while also dealing with forgetting.
This paper proposes such a technique to learn both types of tasks in the same network.
For dissimilar tasks, the algorithm focuses on dealing with forgetting, and for similar tasks, the algorithm focuses on selectively transferring the knowledge learned from some similar previous tasks to improve the new task learning.
- Score: 18.679936596282847
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Existing research on continual learning of a sequence of tasks has focused on
dealing with catastrophic forgetting, where the tasks are assumed to be
dissimilar and have little shared knowledge. Some work has also been done to
transfer previously learned knowledge to the new task when the tasks are
similar and have shared knowledge. To the best of our knowledge, no technique
has been proposed to learn a sequence of mixed similar and dissimilar tasks
that can deal with forgetting and also transfer knowledge forward and backward.
This paper proposes such a technique to learn both types of tasks in the same
network. For dissimilar tasks, the algorithm focuses on dealing with
forgetting, and for similar tasks, the algorithm focuses on selectively
transferring the knowledge learned from some similar previous tasks to improve
the new task learning. Additionally, the algorithm automatically detects
whether a new task is similar to any previous tasks. Empirical evaluation using
sequences of mixed tasks demonstrates the effectiveness of the proposed model.
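To make the described control flow concrete, the following is a minimal, illustrative sketch of a mixed-task continual learner: it tests whether a new task is similar to any previously learned task and then either transfers knowledge (warm-starting from the most similar model) or learns separately. This is not the authors' algorithm; the synthetic data, the zero-shot-accuracy similarity test, the threshold, and the per-task models are simplifying assumptions made purely for illustration.

```python
# Illustrative sketch only: it mirrors the control flow described in the
# abstract (detect task similarity, then transfer or protect), not the
# paper's actual method. Data, model, and similarity test are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """Synthetic binary task: two 2-D Gaussian blobs shifted by `shift`."""
    X0 = rng.normal(loc=-1 + shift, scale=1.0, size=(100, 2))
    X1 = rng.normal(loc=+1 + shift, scale=1.0, size=(100, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 100 + [1] * 100)
    return X, y

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    """Plain logistic regression trained with gradient descent."""
    if w is None:
        w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0) == y)

knowledge_base = []   # (task_id, weights) learned so far
SIM_THRESHOLD = 0.8   # hypothetical cut-off for "similar enough to transfer"

# Two similar tasks followed by one dissimilar task.
for task_id, shift in enumerate([0.0, 0.2, 5.0]):
    X, y = make_task(shift)
    # Similarity detection: reuse each stored model zero-shot, keep the best.
    best_acc, best_w, best_id = 0.0, None, None
    for prev_id, prev_w in knowledge_base:
        acc = accuracy(prev_w, X, y)
        if acc > best_acc:
            best_acc, best_w, best_id = acc, prev_w, prev_id
    if best_w is not None and best_acc >= SIM_THRESHOLD:
        # Similar task: transfer by warm-starting from the most similar model.
        w = train_logreg(X, y, w=best_w.copy())
        print(f"task {task_id}: similar to task {best_id} "
              f"(zero-shot acc {best_acc:.2f}), knowledge transferred")
    else:
        # Dissimilar (or first) task: train separately so earlier models are
        # untouched, a crude stand-in for forgetting protection.
        w = train_logreg(X, y)
        print(f"task {task_id}: no sufficiently similar previous task, "
              f"trained separately")
    knowledge_base.append((task_id, w))
    print(f"  final acc on task {task_id}: {accuracy(w, X, y):.2f}")
```

In the paper's setting, the dissimilar branch would instead protect previously learned parameters inside a single shared network rather than keep a separate model per task; the sketch only shows the detect-then-branch decision flow.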
Related papers
- Minimax Forward and Backward Learning of Evolving Tasks with Performance
Guarantees [6.008132390640294]
The incremental learning of a growing sequence of tasks holds promise to enable accurate classification.
This paper presents incremental minimax risk classifiers (IMRCs) that effectively exploit forward and backward learning.
IMRCs can result in a significant performance improvement, especially for reduced sample sizes.
arXiv Detail & Related papers (2023-10-24T16:21:41Z) - Multitask Learning with No Regret: from Improved Confidence Bounds to
Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z) - Online Continual Learning via the Knowledge Invariant and Spread-out
Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate the proposed method on four popular continual learning benchmarks: Split CIFAR-100, Split SVHN, Split CUB-200 and Split Tiny-ImageNet.
arXiv Detail & Related papers (2023-02-02T04:03:38Z) - Toward Sustainable Continual Learning: Detection and Knowledge
Repurposing of Similar Tasks [31.095642850920385]
We introduce a paradigm where the continual learner gets a sequence of mixed similar and dissimilar tasks.
We propose a new continual learning framework that uses a task similarity detection function that does not require additional learning.
Our experiments show that the proposed framework performs competitively on widely used computer vision benchmarks.
arXiv Detail & Related papers (2022-10-11T19:35:30Z) - Transferring Knowledge for Reinforcement Learning in Contact-Rich
Manipulation [10.219833196479142]
We address the challenge of transferring knowledge within a family of similar tasks by leveraging multiple skill priors.
Our method learns a latent action space representing the skill embedding from demonstrated trajectories for each prior task.
We evaluate our method on a set of peg-in-hole insertion tasks and demonstrate better generalization to new tasks that were never encountered during training.
arXiv Detail & Related papers (2022-09-19T10:31:13Z) - ConTinTin: Continual Learning from Task Instructions [101.36836925135091]
This work defines a new learning paradigm, ConTinTin, in which a system must learn a sequence of new tasks one by one, with each task explained by a piece of textual instruction.
To our knowledge, this is the first study of ConTinTin in NLP.
arXiv Detail & Related papers (2022-03-16T10:27:18Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER consistently improves the performance of all baselines and surpasses current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which a gradient update on one task would affect another task's loss (a minimal sketch of this look-ahead idea appears after this list).
arXiv Detail & Related papers (2021-09-10T02:01:43Z) - Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.