Adaptive and Robust Multi-Task Learning
- URL: http://arxiv.org/abs/2202.05250v4
- Date: Sat, 16 Sep 2023 22:24:06 GMT
- Title: Adaptive and Robust Multi-Task Learning
- Authors: Yaqi Duan, Kaizheng Wang
- Abstract summary: We study the multi-task learning problem that aims to simultaneously analyze multiple datasets collected from different sources.
We propose a family of adaptive methods that automatically utilize possible similarities among those tasks.
We derive sharp statistical guarantees for the methods and prove their robustness against outlier tasks.
- Score: 8.883733362171036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the multi-task learning problem that aims to simultaneously analyze
multiple datasets collected from different sources and learn one model for each
of them. We propose a family of adaptive methods that automatically utilize
possible similarities among those tasks while carefully handling their
differences. We derive sharp statistical guarantees for the methods and prove
their robustness against outlier tasks. Numerical experiments on synthetic and
real datasets demonstrate the efficacy of our new methods.
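
To make the penalty mechanism behind such adaptive methods concrete, here is a minimal sketch, not the paper's exact estimator: per-task weights are tied to a shared center through an unsquared l2 penalty, so similar tasks fuse onto the center while an outlier task keeps its own fit. The linear-regression setting and all names (`mtl_fit`, `lam`, `theta`) are illustrative assumptions.

```python
# Hedged sketch: adaptive multi-task least squares with an unsquared
# l2 penalty tying each task's weights w_j to a shared center theta.
# Illustrative only; not the paper's exact estimator.
import numpy as np

def mtl_fit(Xs, ys, lam=0.1, lr=0.01, iters=2000):
    """Subgradient descent on  sum_j [ MSE_j(w_j) + lam * ||w_j - theta||_2 ].
    The unsquared norm lets similar tasks snap onto theta while an outlier
    task pays a bounded price and keeps its own w_j (robustness)."""
    m, d = len(Xs), Xs[0].shape[1]
    W, theta = np.zeros((m, d)), np.zeros(d)
    for _ in range(iters):
        grad_theta = np.zeros(d)
        for j in range(m):
            g = Xs[j].T @ (Xs[j] @ W[j] - ys[j]) / len(ys[j])  # task loss gradient
            diff = W[j] - theta
            nrm = np.linalg.norm(diff)
            sub = diff / nrm if nrm > 1e-12 else np.zeros(d)   # subgradient of ||.||_2
            W[j] -= lr * (g + lam * sub)
            grad_theta -= lam * sub
        theta -= lr * grad_theta
    return W, theta
```

On synthetic data with several near-identical tasks and one adversarial task, the rows of W for the similar tasks should cluster around theta while the outlier's row stays apart, which is the adaptivity-plus-robustness behavior the abstract describes.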
Related papers
- Multi-Task Learning with Summary Statistics [4.871473117968554]
We propose a flexible multi-task learning framework utilizing summary statistics from various sources.
We also present an adaptive parameter selection approach based on a variant of Lepski's method.
This work offers a more flexible tool for training related models across various domains, with practical implications in genetic risk prediction.
arXiv Detail & Related papers (2023-07-05T15:55:23Z)
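
The adaptive parameter selection in the entry above builds on Lepski's method. A generic caricature of a Lepski-type rule, under our own assumptions (a family of estimators ordered by regularization strength, with known stochastic error levels), looks like this; it is not the paper's variant:

```python
# Hedged sketch of a Lepski-type selection rule.
import numpy as np

def lepski_select(estimates, sigmas, c=2.0):
    """estimates[k]: estimator at the k-th regularization level, ordered so
    bias grows and stochastic error sigmas[k] shrinks with k. Return the
    largest k whose estimate agrees with every less-biased predecessor up
    to the combined noise levels."""
    chosen = 0
    for k in range(len(estimates)):
        consistent = all(
            np.linalg.norm(np.asarray(estimates[k]) - np.asarray(estimates[l]))
            <= c * (sigmas[k] + sigmas[l])
            for l in range(k)
        )
        if not consistent:
            break
        chosen = k
    return chosen
```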
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain-specific solutions to stay close to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
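
A toy rendering of that constrained formulation: every task model takes its own gradient step and is then projected back into a ball around a central parameter vector, so the radius eps trades off between one shared model (eps = 0) and fully independent models (large eps). The alternating projection scheme below is our illustrative assumption, not the paper's algorithm.

```python
# Hedged sketch: per-task updates constrained to stay near a central model.
import numpy as np

def constrained_mtl_step(W, grads, lr=0.05, eps=0.5):
    """W: (num_tasks, dim) task parameters; grads: matching task gradients.
    Gradient step, recompute the center, then project each row back into
    the ball ||w_i - center|| <= eps."""
    W = W - lr * grads
    center = W.mean(axis=0)
    for i in range(W.shape[0]):
        diff = W[i] - center
        nrm = np.linalg.norm(diff)
        if nrm > eps:
            W[i] = center + eps * diff / nrm   # projection onto the constraint
    return W, center
```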
- Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Stochastic Approach [38.76462300149459]
We develop a stochastic Multi-objective gradient Correction (MoCo) method for multi-objective optimization.
The unique feature of our method is that it can guarantee convergence without increasing the batch size, even in the nonconvex setting.
arXiv Detail & Related papers (2022-10-23T05:54:26Z)
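
The gradient-correction idea can be caricatured in a few lines: keep a damped "tracking" estimate of each objective's gradient and compute the combination weights from those tracked estimates rather than from the raw stochastic samples, whose correlation with the weights is what biases naive stochastic multi-gradient methods. This is a loose sketch of the principle, not the MoCo update; every name and step size is our assumption.

```python
# Hedged caricature of bias-corrected stochastic multi-objective descent.
import numpy as np

def tracked_multigrad_step(x, stoch_grads, trackers, lam,
                           lr_x=0.01, lr_y=0.1, lr_lam=0.05):
    """stoch_grads: (m, d) noisy gradients of m objectives at x.
    trackers: damped running estimates of the true gradients.
    lam: convex combination weights, nudged toward the min-norm mix of the
    tracked gradients and crudely projected back to the simplex."""
    trackers = trackers + lr_y * (stoch_grads - trackers)  # gradient tracking
    combo = lam @ trackers                                 # combined direction (d,)
    lam = lam - lr_lam * (trackers @ combo)                # descend ||combo||^2 in lam
    lam = np.clip(lam, 0.0, None)
    lam = lam / lam.sum() if lam.sum() > 0 else np.full(len(lam), 1.0 / len(lam))
    x = x - lr_x * (lam @ trackers)
    return x, trackers, lam
```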
- Multi-model Ensemble Learning Method for Human Expression Recognition [31.76775306959038]
We propose a solution based on ensemble learning that leverages large amounts of real-life data.
We conduct many experiments on the AffWild2 dataset of the ABAW2022 Challenge, and the results demonstrate the effectiveness of our solution.
arXiv Detail & Related papers (2022-03-28T03:15:06Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
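
The Gumbel-Softmax machinery in this entry's title is standard and easy to show concretely: perturb category logits with Gumbel noise and pass them through a temperature-controlled softmax, yielding a differentiable, approximately one-hot sample. The shapes and temperature below are illustrative.

```python
# Gumbel-Softmax: differentiable relaxed samples from a categorical.
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """As tau -> 0 samples approach hard one-hot draws; for tau > 0 the
    output stays differentiable in the logits (reparameterization)."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=np.shape(logits))))  # Gumbel(0,1) noise
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())                                    # stable softmax
    return y / y.sum()

print(gumbel_softmax(np.array([1.0, 0.2, -0.5])))  # e.g. [0.93, 0.05, 0.02]
```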
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Boosting a Model Zoo for Multi-Task and Continual Learning [15.110807414130923]
"Model Zoo" is an algorithm that builds an ensemble of small models, each trained on a subset of the tasks.
Model Zoo achieves large gains in prediction accuracy compared to state-of-the-art methods in multi-task and continual learning.
arXiv Detail & Related papers (2021-06-06T04:25:09Z)
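
A compact sketch of the boosting-style loop behind the Model Zoo entry above: each round trains one small model on the tasks the current ensemble serves worst. Here train_small_model and eval_loss are hypothetical placeholders, and scoring a task by its best single model is a simplification of true ensembling.

```python
# Hedged sketch of a Model-Zoo-style boosting loop over tasks.
def grow_model_zoo(tasks, train_small_model, eval_loss, rounds=5, k=2):
    zoo = []
    for _ in range(rounds):
        # Current loss per task: best model in the zoo (inf if zoo is empty).
        losses = {t: min((eval_loss(m, t) for m in zoo), default=float("inf"))
                  for t in tasks}
        worst = sorted(tasks, key=lambda t: losses[t], reverse=True)[:k]
        zoo.append(train_small_model(worst))   # boost on the hardest tasks
    return zoo
```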
- Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling [126.69933134648541]
We present a meta-reinforcement learning algorithm that is both efficient and extrapolates well when faced with out-of-distribution tasks at test time.
Our method is based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data.
arXiv Detail & Related papers (2020-06-12T13:34:46Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
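
The "distinct ways to solve the task" objective from the last entry can be made concrete with a toy loss: fit several predictors to the same targets while penalizing agreement between their outputs. Measuring agreement by mean pairwise correlation is our illustrative choice, not the paper's objective.

```python
# Hedged sketch: task loss plus a penalty on prediction agreement.
import numpy as np

def diversity_loss(preds, targets, gamma=0.1):
    """preds: (k, n) predictions of k models on n points; targets: (n,).
    Returns mean squared error plus gamma times the mean pairwise
    correlation between models, so lower correlation (more diversity)
    lowers the loss."""
    task = np.mean((preds - targets) ** 2)
    k = preds.shape[0]
    corrs = [np.corrcoef(preds[i], preds[j])[0, 1]
             for i in range(k) for j in range(i + 1, k)]
    return task + gamma * (np.mean(corrs) if corrs else 0.0)
```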
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.