Sign-regularized Multi-task Learning
- URL: http://arxiv.org/abs/2102.11191v1
- Date: Mon, 22 Feb 2021 17:11:15 GMT
- Title: Sign-regularized Multi-task Learning
- Authors: Johnny Torres, Guangji Bai, Junxiang Wang, Liang Zhao, Carmen Vaca,
Cristina Abad
- Abstract summary: Multi-task learning is a framework in which different learning tasks share knowledge to improve their performance.
It strives to handle several core issues; in particular, which tasks are correlated and similar, and how to share knowledge among correlated tasks.
- Score: 13.685061061742523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-task learning is a framework in which different learning tasks share
knowledge to improve their generalization performance. It is an active research area that
addresses several core issues; in particular, which tasks are correlated and similar, and
how knowledge should be shared among correlated tasks. Existing works usually do not
distinguish the polarity and magnitude of feature weights and commonly rely on linear
correlation, due to three major technical challenges: 1) optimizing models that regularize
feature weight polarity, 2) deciding whether to regularize sign or magnitude, and
3) identifying which tasks should share their sign and/or magnitude patterns.
To address them, this paper proposes a new multi-task learning framework that
can regularize feature weight signs across tasks. We innovatively formulate it
as a biconvex inequality constrained optimization with slacks and propose a new
efficient algorithm for the optimization with theoretical guarantees on
generalization performance and convergence. Extensive experiments on multiple
datasets demonstrate the proposed method's effectiveness, efficiency, and the
reasonableness of the regularized feature weight patterns.
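Below is a minimal, illustrative sketch of the core idea of sign regularization: two linear regression tasks are trained jointly with a hinge-style penalty that vanishes when corresponding feature weights share the same sign. This is not the paper's biconvex inequality-constrained formulation with slacks; the subgradient optimizer, the function names, and the toy data are assumptions made only for illustration.

```python
# Illustrative sketch (not the paper's exact algorithm): two linear regression
# tasks trained jointly with a penalty that is zero when corresponding feature
# weights agree in sign and grows when they disagree.
import numpy as np

def sign_disagreement_penalty(w1, w2):
    """Sum over features of max(0, -w1_j * w2_j): zero iff signs agree (or a weight is zero)."""
    return np.sum(np.maximum(0.0, -w1 * w2))

def fit_sign_regularized(X1, y1, X2, y2, lam=1.0, lr=1e-2, n_iter=2000):
    """Joint subgradient descent on two least-squares tasks plus the sign penalty."""
    d = X1.shape[1]
    w1, w2 = np.zeros(d), np.zeros(d)
    for _ in range(n_iter):
        # Per-task mean-squared-error gradients.
        g1 = X1.T @ (X1 @ w1 - y1) / len(y1)
        g2 = X2.T @ (X2 @ w2 - y2) / len(y2)
        # Subgradient of the sign-disagreement penalty.
        active = (-w1 * w2) > 0            # features whose signs currently disagree
        g1 += lam * np.where(active, -w2, 0.0)
        g2 += lam * np.where(active, -w1, 0.0)
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

# Toy usage: two tasks whose true weights share signs but differ in magnitude.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, -0.5, 3.0])
y1 = X1 @ w_true + 0.1 * rng.normal(size=100)
y2 = X2 @ (2.0 * w_true) + 0.1 * rng.normal(size=100)
w1, w2 = fit_sign_regularized(X1, y1, X2, y2)
print(np.sign(w1) == np.sign(w2))   # sign patterns of the two tasks should match
```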
Related papers
- Sharing Knowledge in Multi-Task Deep Reinforcement Learning [57.38874587065694]
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.
We provide theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks.
arXiv Detail & Related papers (2024-01-17T19:31:21Z)
- Multi-Task Learning with Prior Information [5.770309971945476]
We propose a multi-task learning framework, where we utilize prior knowledge about the relations between features.
We also impose a penalty on how much each feature's coefficient changes across tasks, ensuring that related tasks have similar coefficients on the features they share.
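A minimal sketch of the kind of penalty described above, assuming a coefficient matrix with one row per task and a list of task pairs known a priori to be related; the names and structure are illustrative, not the paper's notation.

```python
# Illustrative only: a penalty that pulls the coefficients of a-priori related
# tasks toward each other on the features they share.
import numpy as np

def shared_feature_penalty(W, related_pairs, shared_idx):
    """W: (n_tasks, n_features) coefficient matrix.
    related_pairs: list of (s, t) index pairs of tasks assumed related.
    shared_idx: indices of features common to the related tasks."""
    return sum(np.sum((W[s, shared_idx] - W[t, shared_idx]) ** 2)
               for s, t in related_pairs)
```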
arXiv Detail & Related papers (2023-01-04T12:48:05Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that keeps the domain-specific solutions close to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve generalization performance.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
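The summary does not give the exact weighting rule, so the sketch below only illustrates the general idea of dynamically re-weighting per-task gradients; the exponential-moving-average heuristic and names are assumptions, not the paper's method.

```python
# Generic illustration of dynamic task weighting: each task's gradient is scaled
# by a smoothed estimate of its current loss, so lagging tasks get more influence.
import numpy as np

def combine_task_gradients(task_grads, task_losses, ema, beta=0.9):
    """task_grads: list of gradient vectors (one per task, same shape).
    task_losses: current scalar loss per task. ema: running loss estimate."""
    ema = beta * ema + (1.0 - beta) * np.asarray(task_losses)
    weights = ema / ema.sum()                      # higher loss -> higher weight
    combined = sum(w * g for w, g in zip(weights, task_grads))
    return combined, ema
```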
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- In Defense of the Unitary Scalarization for Deep Multi-Task Learning [121.76421174107463]
We present a theoretical analysis suggesting that many specialized multi-task optimizers can be interpreted as forms of regularization.
We show that, when coupled with standard regularization and stabilization techniques, unitary scalarization matches or improves upon the performance of complex multi-task optimizers.
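For reference, unitary scalarization amounts to minimizing the unweighted sum of per-task losses; a minimal PyTorch-style sketch follows, with the architecture and hyperparameters chosen only for illustration.

```python
# Illustrative sketch of unitary scalarization: minimize the unweighted sum of
# per-task losses over a shared trunk and task-specific heads.
import torch
import torch.nn as nn

trunk = nn.Linear(10, 16)                                     # shared representation
heads = nn.ModuleList([nn.Linear(16, 1) for _ in range(3)])   # one head per task
opt = torch.optim.SGD(list(trunk.parameters()) + list(heads.parameters()), lr=0.01)
mse = nn.MSELoss()

x = torch.randn(32, 10)
ys = [torch.randn(32, 1) for _ in range(3)]                   # one target per task

for _ in range(100):
    opt.zero_grad()
    h = torch.relu(trunk(x))
    loss = sum(mse(head(h), y) for head, y in zip(heads, ys)) # unitary scalarization
    loss.backward()
    opt.step()
```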
arXiv Detail & Related papers (2022-01-11T18:44:17Z)
- Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
- Distributed Primal-Dual Optimization for Online Multi-Task Learning [22.45069527817333]
We propose an adaptive primal-dual algorithm, which captures task-specific noise in adversarial learning and carries out a projection-free update with runtime efficiency.
Our model is well-suited to decentralized, periodically connected tasks, as it allows energy-starved or bandwidth-constrained tasks to postpone their updates.
Empirical results confirm that the proposed model is highly effective on various real-world datasets.
arXiv Detail & Related papers (2020-04-02T23:36:07Z)