Multi-task Supervised Learning via Cross-learning
- URL: http://arxiv.org/abs/2010.12993v3
- Date: Thu, 27 May 2021 01:55:44 GMT
- Title: Multi-task Supervised Learning via Cross-learning
- Authors: Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana and Alejandro Ribeiro
- Abstract summary: We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
- Score: 102.64082402388192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we consider a problem known as multi-task learning, consisting
of fitting a set of classification or regression functions intended for solving
different tasks. In our novel formulation, we couple the parameters of these
functions, so that they learn in their task specific domains while staying
close to each other. This facilitates cross-fertilization, in which data
collected across different domains help improve the learning performance on
each task. First, we present a simplified case in which the goal is to
estimate the means of two Gaussian variables, for the purpose of gaining some
insights on the advantage of the proposed cross-learning strategy. Then we
provide a stochastic projected gradient algorithm to perform cross-learning
over a generic loss function. If the number of parameters is large, then the
projection step becomes computationally expensive. To avoid this situation, we
derive a primal-dual algorithm that exploits the structure of the dual problem,
achieving a formulation whose complexity only depends on the number of tasks.
Preliminary numerical experiments for image classification by neural networks
trained on a dataset divided into different domains corroborate that the
cross-learned function outperforms both the task-specific and the consensus
approaches.
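To make the cross-learning idea concrete, the following is a minimal, self-contained sketch (not the authors' code) of the two-Gaussian-means example from the abstract, solved with stochastic projected gradient steps under an assumed l2 proximity coupling of radius eps between each task parameter and a shared central variable; the hyperparameters and helper names are illustrative only.

```python
# Hedged sketch of cross-learning on two Gaussian mean-estimation tasks.
# Assumption: the coupling is modeled as |theta_i - center| <= eps for a
# shared central variable; eps, step and n_iters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Two tasks: estimate the means of two Gaussian variables (true means 1.0 and 1.5).
data = [rng.normal(loc=1.0, scale=1.0, size=200),
        rng.normal(loc=1.5, scale=1.0, size=200)]

eps, step, n_iters = 0.2, 0.05, 2000
theta = np.zeros(2)   # task-specific parameters
center = 0.0          # shared central variable

def project(theta, center, eps):
    """Clip each task parameter back into the ball |theta_i - center| <= eps.
    With two tasks, recentring at the average and clipping is the exact
    Euclidean projection onto the coupling set."""
    return center + np.clip(theta - center, -eps, eps)

for _ in range(n_iters):
    for i, samples in enumerate(data):
        x = rng.choice(samples)             # one stochastic sample from task i
        theta[i] -= step * (theta[i] - x)   # gradient of 0.5 * (theta_i - x)^2
    center = theta.mean()                   # keep the central variable at the consensus point
    theta = project(theta, center, eps)

print("cross-learned estimates:", theta)
print("task-specific sample means:", [float(d.mean()) for d in data])
```

Because the true means differ by more than 2 * eps, the constraint binds and each estimate is pulled toward the other task's data, which is the cross-fertilization effect described above. When the parameter vector is large, this projection is exactly the expensive step the abstract mentions, motivating the primal-dual variant whose complexity depends only on the number of tasks.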
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
- Multi-Task Learning with Prior Information [5.770309971945476]
We propose a multi-task learning framework, where we utilize prior knowledge about the relations between features.
We also penalize differences in the coefficient of each specific feature across tasks, so that related tasks have similar coefficients on the common features they share.
arXiv Detail & Related papers (2023-01-04T12:48:05Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Gap Minimization for Knowledge Sharing and Transfer [24.954256258648982]
In this paper, we introduce the notion of performance gap, an intuitive and novel measure of the distance between learning tasks.
We show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
We instantiate this principle with two algorithms: (1) gapBoost, a novel and principled boosting algorithm that explicitly minimizes the performance gap between source and target domains for transfer learning; and (2) gapMTNN, a representation learning algorithm that reformulates gap minimization as semantic conditional matching.
arXiv Detail & Related papers (2022-01-26T23:06:20Z)
- Learning-to-learn non-convex piecewise-Lipschitz functions [44.6133187924678]
We analyze the meta-learning of algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and algorithms.
We propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks.
arXiv Detail & Related papers (2021-08-19T16:22:48Z)
- Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
- Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks [30.075430694663293]
We study the transfer learning process between two linear regression problems.
We examine a parameter transfer mechanism whereby a subset of the parameters of the target task solution are constrained to the values learned for a related source task.
arXiv Detail & Related papers (2020-06-12T08:42:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.