Multi-task Bias-Variance Trade-off Through Functional Constraints
- URL: http://arxiv.org/abs/2210.15573v1
- Date: Thu, 27 Oct 2022 16:06:47 GMT
- Title: Multi-task Bias-Variance Trade-off Through Functional Constraints
- Authors: Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana, Alejandro
Ribeiro
- Abstract summary: Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain-specific solutions to be close to a central function.
- Score: 102.64082402388192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-task learning aims to acquire a set of functions, either regressors or
classifiers, that perform well for diverse tasks. At its core, the idea behind
multi-task learning is to exploit the intrinsic similarity across data sources
to aid in the learning process for each individual domain. In this paper we
draw intuition from the two extreme learning scenarios -- a single function for
all tasks, and a task-specific function that ignores the other tasks'
dependencies -- to propose a bias-variance trade-off. To control the
relationship between the variance (given by the number of i.i.d. samples) and
the bias (coming from data from the other tasks), we introduce a constrained
learning formulation that enforces domain-specific solutions to be close to a
central function. This problem is solved in the dual domain, for which we
propose a stochastic primal-dual algorithm. Experimental results for a
multi-domain classification problem with real data show that the proposed
procedure outperforms both the task-specific classifiers and the single shared
classifier.
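As a rough illustration of the constrained formulation and the stochastic primal-dual scheme described in the abstract, the sketch below implements a generic version in PyTorch: each task-specific model is constrained to stay within an assumed squared parameter distance eps of a shared central model, the Lagrangian is minimized over all model parameters, and the multipliers are updated by projected dual ascent. The architectures, the exact constraint form, the constant eps, and the step sizes are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' released code): stochastic primal-dual
# updates for a multi-task problem in which each task-specific model is kept
# within an assumed distance `eps` of a shared central model.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

num_tasks, eps, lr_primal, lr_dual = 3, 0.1, 1e-2, 1e-2
central = make_model()
task_models = [make_model() for _ in range(num_tasks)]
lambdas = torch.zeros(num_tasks)          # dual variables, one per proximity constraint
loss_fn = nn.CrossEntropyLoss()
params = list(central.parameters()) + [p for m in task_models for p in m.parameters()]
opt = torch.optim.SGD(params, lr=lr_primal)

def proximity(model_i, central_model):
    # Squared parameter distance between a task model and the central model.
    return sum(((p - q) ** 2).sum()
               for p, q in zip(model_i.parameters(), central_model.parameters()))

for step in range(100):
    # Fake i.i.d. minibatches; in practice each task draws from its own dataset.
    batches = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(num_tasks)]

    # Primal step: minimize the Lagrangian over central and task-specific parameters.
    lagrangian = 0.0
    for i, (x, y) in enumerate(batches):
        task_loss = loss_fn(task_models[i](x), y)
        lagrangian = lagrangian + task_loss + lambdas[i] * (proximity(task_models[i], central) - eps)
    opt.zero_grad()
    lagrangian.backward()
    opt.step()

    # Dual step: gradient ascent on the multipliers, projected onto the nonnegative orthant.
    with torch.no_grad():
        for i in range(num_tasks):
            slack = proximity(task_models[i], central) - eps
            lambdas[i] = torch.clamp(lambdas[i] + lr_dual * slack, min=0.0)
```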
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful on classification tasks with little, or even non-overlapping, annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real-world distribution-shift benchmarks and on different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Identification of Negative Transfers in Multitask Learning Using Surrogate Models [29.882265735630046]
Multitask learning is widely used to train a low-resource target task by augmenting it with multiple related source tasks.
A critical problem in multitask learning is identifying subsets of source tasks that would benefit the target task.
We introduce an efficient procedure to address this problem via surrogate modeling.
arXiv Detail & Related papers (2023-03-25T23:16:11Z)
- Multi-Task Learning with Prior Information [5.770309971945476]
We propose a multi-task learning framework, where we utilize prior knowledge about the relations between features.
We also impose a penalty on how much each feature's coefficient changes across tasks, so that related tasks keep similar coefficients on the features they share (a toy sketch of one such penalty appears after this list).
arXiv Detail & Related papers (2023-01-04T12:48:05Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve generalization performance.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on the other tasks.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z)
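As mentioned in the "Multi-Task Learning with Prior Information" entry above, here is a toy sketch of a coefficient-similarity penalty of the kind that entry describes. The two-task least-squares setup, the shared-feature index set, and the weight gamma are illustrative assumptions, not the cited paper's actual objective.

```python
# Toy illustration (assumed form, not the cited paper's exact objective):
# per-task least-squares losses plus a penalty that pulls the coefficients of
# features shared between related tasks toward each other.
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma, lr = 5, 50, 1.0, 1e-3
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(2)]              # two related tasks
y = [x @ w_true + 0.1 * rng.normal(size=n) for x in X]
shared = [0, 1, 2]                                           # indices of features common to both tasks

def objective(W):
    # W holds one row of coefficients per task.
    fit = sum(np.sum((X[t] @ W[t] - y[t]) ** 2) for t in range(2))
    similarity = np.sum((W[0, shared] - W[1, shared]) ** 2)
    return fit + gamma * similarity

# Plain gradient descent on the joint objective.
W = np.zeros((2, d))
for _ in range(500):
    grad = np.zeros_like(W)
    for t in range(2):
        grad[t] = 2 * X[t].T @ (X[t] @ W[t] - y[t])
    diff = W[0, shared] - W[1, shared]
    grad[0, shared] += 2 * gamma * diff
    grad[1, shared] -= 2 * gamma * diff
    W -= lr * grad
```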
This list is automatically generated from the titles and abstracts of the papers on this site.