Active Multi-Task Representation Learning
- URL: http://arxiv.org/abs/2202.00911v1
- Date: Wed, 2 Feb 2022 08:23:24 GMT
- Title: Active Multi-Task Representation Learning
- Authors: Yifang Chen, Simon S. Du, Kevin Jamieson
- Abstract summary: We give the first formal study on source task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
- Score: 50.13453053304159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To leverage the power of big data from source tasks and overcome the scarcity
of the target task samples, representation learning based on multi-task
pretraining has become a standard approach in many applications. However, up
until now, choosing which source tasks to include in the multi-task learning
has been more art than science. In this paper, we give the first formal study
on source task sampling by leveraging the techniques from active learning. We
propose an algorithm that iteratively estimates the relevance of each source
task to the target task and samples from each source task based on the
estimated relevance. Theoretically, we show that for the linear representation
class, to achieve the same error rate, our algorithm can save up to a factor
equal to the number of source tasks in the source-task sample complexity,
compared with naive uniform sampling from all source tasks. We also provide
experiments on real-world computer vision datasets to illustrate the
effectiveness of our proposed method on both linear and convolutional neural
network representation classes. We believe our paper serves as an important
initial step to bring techniques from active learning to representation
learning.
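The iterative scheme described in the abstract, alternating between estimating each source task's relevance to the target and sampling from source tasks in proportion to that estimate, can be sketched as follows. This is a minimal illustration of the sampling loop only; the `estimate_relevance` callback, the round/budget structure, and all names are assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def active_task_sampling(estimate_relevance, num_tasks, budget_per_round,
                         num_rounds, seed=0):
    """Alternate between relevance estimation and relevance-proportional
    sampling of source tasks (illustrative interface)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_tasks, dtype=int)          # samples drawn per task
    relevance = np.ones(num_tasks) / num_tasks       # start from a uniform prior
    for _ in range(num_rounds):
        # Draw this round's source-task samples in proportion to
        # the current relevance estimate.
        draws = rng.multinomial(budget_per_round, relevance)
        counts += draws
        # Re-estimate each source task's relevance to the target task
        # (in the paper this uses the data gathered so far; here it is
        # a user-supplied callback) and renormalize.
        relevance = estimate_relevance(counts)
        relevance = relevance / relevance.sum()
    return counts

# Toy estimator: pretends the true relevance is known and fixed, so the
# sampler concentrates its budget on tasks 0 and 1.
true_relevance = np.array([0.6, 0.3, 0.05, 0.05])
counts = active_task_sampling(lambda c: true_relevance,
                              num_tasks=4, budget_per_round=100, num_rounds=10)
```

Under uniform sampling each task would receive an equal share of the budget; here the counts concentrate on the tasks with high estimated relevance, which is the mechanism behind the claimed sample-complexity savings.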
Related papers
- The Power of Active Multi-Task Learning in Reinforcement Learning from Human Feedback [12.388205905012423]
Reinforcement learning from human feedback has contributed to performance improvements in large language models.
We formulate RLHF as the contextual dueling bandit problem and assume a common linear representation.
We prove that to achieve $\varepsilon$-optimality, the sample complexity of the source tasks can be significantly reduced.
arXiv Detail & Related papers (2024-05-18T08:29:15Z) - Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even for classification tasks with little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Identification of Negative Transfers in Multitask Learning Using Surrogate Models [29.882265735630046]
Multitask learning is widely used to train a low-resource target task by augmenting it with multiple related source tasks.
A critical problem in multitask learning is identifying subsets of source tasks that would benefit the target task.
We introduce an efficient procedure to address this problem via surrogate modeling.
arXiv Detail & Related papers (2023-03-25T23:16:11Z) - Provable Benefits of Representational Transfer in Reinforcement Learning [59.712501044999875]
We study the problem of representational transfer in RL, where an agent first pretrains in a number of source tasks to discover a shared representation.
We show that given generative access to source tasks, we can discover a representation, using which subsequent linear RL techniques quickly converge to a near-optimal policy.
arXiv Detail & Related papers (2022-05-29T04:31:29Z) - X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities and computational costs.
arXiv Detail & Related papers (2022-03-16T17:23:26Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Pretext Tasks selection for multitask self-supervised speech representation learning [23.39079406674442]
This paper introduces a method to select a group of pretext tasks among a set of candidates.
Experiments conducted on speaker recognition and automatic speech recognition validate our approach.
arXiv Detail & Related papers (2021-07-01T16:36:29Z) - Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z) - Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning [6.247939901619901]
We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning.
We provide a theoretical performance bound for the gain in sample efficiency under this setting.
This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy.
arXiv Detail & Related papers (2021-02-18T14:13:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.