Multitask Learning with No Regret: from Improved Confidence Bounds to
Active Learning
- URL: http://arxiv.org/abs/2308.01744v1
- Date: Thu, 3 Aug 2023 13:08:09 GMT
- Title: Multitask Learning with No Regret: from Improved Confidence Bounds to
Active Learning
- Authors: Pier Giuseppe Sessa, Pierre Laforgue, Nicolò Cesa-Bianchi, Andreas Krause
- Abstract summary: Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
Through a refined analysis of the multitask information gain, we obtain regret guarantees that can improve over treating tasks independently, and we propose a novel online learning algorithm that achieves this improved regret without knowing the task similarity parameter in advance.
- Score: 79.07658065326592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multitask learning is a powerful framework that enables one to simultaneously
learn multiple related tasks by sharing information between them. Quantifying
uncertainty in the estimated tasks is of pivotal importance for many downstream
applications, such as online or active learning. In this work, we provide novel
multitask confidence intervals in the challenging agnostic setting, i.e., when
neither the similarity between tasks nor the tasks' features are available to
the learner. The obtained intervals do not require i.i.d. data and can be
directly applied to bound the regret in online learning. Through a refined
analysis of the multitask information gain, we obtain new regret guarantees
that, depending on a task similarity parameter, can significantly improve over
treating tasks independently. We further propose a novel online learning
algorithm that achieves such improved regret without knowing this parameter in
advance, i.e., automatically adapting to task similarity. As a second key
application of our results, we introduce a novel multitask active learning
setup where several tasks must be simultaneously optimized, but only one of
them can be queried for feedback by the learner at each round. For this
problem, we design a no-regret algorithm that uses our confidence intervals to
decide which task should be queried. Finally, we empirically validate our
bounds and algorithms on synthetic and real-world (drug discovery) data.
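The active-learning setup described in the abstract, where several tasks are optimized but only one may be queried for feedback per round, lends itself to a confidence-interval-driven loop. The sketch below is a minimal illustration of that idea, not the paper's algorithm: it assumes independent Gaussian-process models per task, an RBF kernel, and a "query the most uncertain task, then play its UCB action" rule, all of which are illustrative choices.
```python
# A minimal sketch, NOT the paper's algorithm: independent GP models per task,
# an RBF kernel, and a "query the most uncertain task, then play its UCB action"
# rule are all illustrative assumptions.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=0.1):
    """Posterior mean and standard deviation of a zero-mean GP at X_star."""
    if len(X) == 0:                                    # no observations yet: prior
        return np.zeros(len(X_star)), np.ones(len(X_star))
    K = rbf_kernel(X, X) + noise ** 2 * np.eye(len(X))
    K_star = rbf_kernel(X_star, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)
    return mean, np.sqrt(var)

rng = np.random.default_rng(0)
n_tasks, horizon, beta = 3, 30, 2.0
candidates = rng.uniform(-1, 1, size=(50, 2))          # shared candidate actions
tasks = [lambda x, w=rng.normal(size=2): np.sin(3 * x @ w) for _ in range(n_tasks)]
history = [([], []) for _ in range(n_tasks)]           # (inputs, rewards) per task

for t in range(horizon):
    stats = []
    for k in range(n_tasks):
        X = np.array(history[k][0]).reshape(-1, 2)
        y = np.array(history[k][1])
        stats.append(gp_posterior(X, y, candidates))
    # Only one task may be queried per round: pick the most uncertain one
    # (illustrative rule), then play the UCB-maximising action within it.
    k_star = int(np.argmax([sd.max() for _, sd in stats]))
    mean, sd = stats[k_star]
    a_star = int(np.argmax(mean + beta * sd))
    reward = tasks[k_star](candidates[a_star]) + 0.1 * rng.normal()
    history[k_star][0].append(candidates[a_star])
    history[k_star][1].append(reward)

print("queries per task:", [len(h[0]) for h in history])
```
In the paper, by contrast, the confidence intervals are constructed jointly across tasks without assuming i.i.d. data or known task similarity; the sketch only shows how interval widths can drive the decision of which task to query.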
Related papers
- Continual Learning of Numerous Tasks from Long-tail Distributions [17.706669222987273]
Continual learning focuses on developing models that learn and adapt to new tasks while retaining previously acquired knowledge.
Existing continual learning algorithms usually involve a small number of tasks with uniform sizes and may not accurately represent real-world learning scenarios.
We propose a method that reuses Adam's optimizer state by maintaining a weighted average of the second moments from previous tasks (a minimal sketch of this idea appears after this list).
We demonstrate that our method, compatible with most existing continual learning algorithms, effectively reduces forgetting with only a small amount of additional computational or memory costs.
arXiv Detail & Related papers (2024-04-03T13:56:33Z)
- Is Multi-Task Learning an Upper Bound for Continual Learning? [26.729088618251282]
This paper proposes a novel continual self-supervised learning setting, where each task corresponds to learning an invariant representation for a specific class of data augmentations.
We show that continual learning often beats multi-task learning on various benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-10-26T15:45:11Z)
- Saliency-Regularized Deep Multi-Task Learning [7.3810864598379755]
Multitask learning encourages multiple learning tasks to share knowledge in order to improve their generalization ability.
Modern deep multitask learning methods can jointly learn latent features and task sharing, but the task relations they capture remain obscure.
This paper proposes a new multitask learning framework that jointly learns latent features and explicit task relations.
arXiv Detail & Related papers (2022-07-03T20:26:44Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve performance on individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Active Multitask Learning with Committees [15.862634213775697]
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.
We propose an active multitask learning algorithm that achieves knowledge transfer between tasks.
Our approach reduces the number of queries needed during training while maintaining high accuracy on test data.
arXiv Detail & Related papers (2021-03-24T18:07:23Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider multi-task learning, the problem of fitting a set of regression functions intended to solve different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates a cross-fertilization in which data collected across different domains help improve learning performance on each of the other tasks.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Linear Mode Connectivity in Multitask and Continual Learning [46.98656798573886]
We investigate whether multitask and continual solutions are connected by linear paths of low error.
We propose an effective algorithm that constrains the sequentially learned minima to behave as the multitask solution.
arXiv Detail & Related papers (2020-10-09T10:53:25Z)
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
- Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient (a minimal sketch of this projection appears after this list).
arXiv Detail & Related papers (2020-01-19T06:33:47Z)
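As referenced above, the entry on continual learning from long-tail distributions describes reusing Adam's optimizer state by averaging second moments across tasks. Below is a minimal sketch of that general idea on toy quadratic objectives; the plain-NumPy Adam, the mixing weight `rho`, and the quadratic tasks are illustrative assumptions rather than details taken from that paper.
```python
# A minimal sketch of the general idea only: a plain-NumPy Adam, toy quadratic
# tasks, and the mixing weight `rho` are illustrative assumptions, not details
# taken from the cited paper.
import numpy as np

def adam_on_task(grad_fn, w, v_init, steps=300, lr=0.05,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """Run Adam on one task, warm-starting the second moment from v_init."""
    m, v = np.zeros_like(w), v_init.copy()
    for _ in range(steps):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        w = w - lr * m / (np.sqrt(v) + eps)    # bias correction omitted for brevity
    return w, v

rng = np.random.default_rng(0)
dim, rho = 10, 0.5                             # rho weights the running average
targets = [rng.normal(size=dim) for _ in range(4)]   # one quadratic task each
w = np.zeros(dim)
v_avg = np.zeros(dim)                          # weighted average of past second moments

for k, c in enumerate(targets):
    grad_fn = lambda w, c=c: w - c             # gradient of 0.5 * ||w - c||^2
    w, v_final = adam_on_task(grad_fn, w, v_avg)
    v_avg = rho * v_avg + (1 - rho) * v_final  # fold in this task's second moments
    print(f"task {k}: final loss = {0.5 * np.sum((w - c) ** 2):.4f}")
```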
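The gradient-surgery entry above describes projecting a task's gradient onto the normal plane of any other task's conflicting gradient. The following is a minimal sketch of that projection step under the assumption of flat per-task gradient vectors, with illustrative random data standing in for real per-task gradients of shared model parameters.
```python
# A minimal sketch of the projection step on flat gradient vectors; the random
# toy gradients are illustrative, and a real use would operate on per-task
# gradients of shared model parameters.
import numpy as np

def gradient_surgery(task_grads, rng):
    """Project each task gradient away from conflicting gradients of other tasks."""
    adjusted = []
    for i, g in enumerate(task_grads):
        g = g.astype(float).copy()
        others = [j for j in range(len(task_grads)) if j != i]
        for j in rng.permutation(others):      # random order over the other tasks
            g_j = task_grads[j]
            dot = g @ g_j
            if dot < 0:                        # conflict: negative inner product
                g -= dot / (g_j @ g_j) * g_j   # project onto normal plane of g_j
        adjusted.append(g)
    return adjusted

rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(3)]     # e.g. three per-task gradients
combined = sum(gradient_surgery(grads, rng))       # combined update direction
print(combined)
```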
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.