Multi-task learning via robust regularized clustering with non-convex group penalties
- URL: http://arxiv.org/abs/2404.03250v2
- Date: Mon, 27 May 2024 11:37:12 GMT
- Title: Multi-task learning via robust regularized clustering with non-convex group penalties
- Authors: Akira Okazaki, Shuichi Kawano
- Abstract summary: Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
One natural assumption in MTL is that tasks fall into clusters; existing methods based on this assumption often ignore outlier tasks.
We propose a novel MTL method called Multi-Task Learning via Robust Regularized Clustering (MTLRRC).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning (MTL) aims to improve estimation and prediction performance by sharing common information among related tasks. One natural assumption in MTL is that tasks are classified into clusters based on their characteristics. However, existing MTL methods based on this assumption often ignore outlier tasks that have large task-specific components or no relation to other tasks. To address this issue, we propose a novel MTL method called Multi-Task Learning via Robust Regularized Clustering (MTLRRC). MTLRRC incorporates robust regularization terms inspired by robust convex clustering, which is further extended to handle non-convex and group-sparse penalties. The extension allows MTLRRC to simultaneously perform robust task clustering and outlier task detection. The connection between the extended robust clustering and the multivariate M-estimator is also established. This provides an interpretation of the robustness of MTLRRC against outlier tasks. An efficient algorithm based on a modified alternating direction method of multipliers is developed for the estimation of the parameters. The effectiveness of MTLRRC is demonstrated through simulation studies and application to real data.
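The core construction in the abstract can be illustrated compactly. The snippet below is a minimal sketch, not the paper's algorithm: it fuses task-coefficient centroids with a pairwise group penalty while absorbing outlier tasks into per-task vectors, using a convex group-lasso penalty in place of the paper's non-convex group penalties and plain alternating prox/subgradient updates in place of its modified ADMM. The stacked coefficient matrix `W`, all function names, and all tuning values are illustrative assumptions.

```python
# Minimal sketch of the robust-clustering idea behind MTLRRC, assuming the
# task-specific regression coefficients have already been estimated and are
# stacked as rows of W. Convex group lasso stands in for the paper's
# non-convex group penalties; alternating updates stand in for its ADMM.
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_2: shrinks the whole vector jointly."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= tau else (1.0 - tau / norm) * v

def robust_clustering(W, lam1, lam2, n_iter=2000, step=0.05):
    """Approximately solve
        min_{U,O} 0.5*||W - U - O||_F^2
                  + lam1 * sum_{t<s} ||u_t - u_s||_2   (fusion -> task clusters)
                  + lam2 * sum_t   ||o_t||_2           (group penalty -> outliers)
    where rows of U are cluster centroids and rows of O are per-task
    outlier components.
    """
    T, _ = W.shape
    U, O = W.copy(), np.zeros_like(W)
    for _ in range(n_iter):
        # Exact proximal update for the outlier block (group soft-thresholding).
        O = np.array([group_soft_threshold(W[t] - U[t], lam2) for t in range(T)])
        # Subgradient step for the centroid block under the fusion penalty.
        grad = U - (W - O)
        for t in range(T):
            for s in range(T):
                d = U[t] - U[s]
                nd = np.linalg.norm(d)
                if s != t and nd > 1e-10:
                    grad[t] += lam1 * d / nd
        U -= step * grad
    return U, O

# Toy usage: two groups of similar tasks plus one far-away task.
rng = np.random.default_rng(0)
W = np.vstack([rng.normal(1.0, 0.1, (3, 5)),
               rng.normal(-1.0, 0.1, (3, 5)),
               rng.normal(8.0, 0.1, (1, 5))])
U, O = robust_clustering(W, lam1=0.2, lam2=0.9)
print("outlier-component norms per task:", np.round(np.linalg.norm(O, axis=1), 2))
```

Tasks whose outlier component is driven to exactly zero by the group soft-thresholding behave as ordinary cluster members, while tasks with a large nonzero component are outlier candidates; this thresholding behavior loosely mirrors the multivariate M-estimator interpretation mentioned in the abstract.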
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z) - Multitask Learning Can Improve Worst-Group Outcomes [76.92646345152788]
Multitask learning (MTL) is one widely used technique for improving worst-group outcomes.
We propose to modify standard MTL by regularizing the joint multitask representation space.
We find that our regularized MTL approach consistently outperforms JTT (Just Train Twice) on both average and worst-group outcomes.
arXiv Detail & Related papers (2023-12-05T21:38:24Z) - Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z) - Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least square support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z) - Mitigating Task Interference in Multi-Task Learning via Explicit Task Routing with Non-Learnable Primitives [19.90788777476128]
Multi-task learning (MTL) seeks to learn a single model to accomplish multiple tasks by leveraging shared information among the tasks.
Existing MTL models have been known to suffer from negative interference among tasks.
We propose ETR-NLP to mitigate task interference through a synergistic combination of non-learnable primitives and explicit task routing.
arXiv Detail & Related papers (2023-08-03T22:34:16Z) - When Multi-Task Learning Meets Partial Supervision: A Computer Vision Review [7.776434991976473]
Multi-Task Learning (MTL) aims to learn multiple tasks simultaneously while exploiting their mutual relationships.
This review focuses on how MTL can be utilised under different partial supervision settings to reduce the burden of fully annotating every task.
arXiv Detail & Related papers (2023-07-25T20:08:41Z) - Multi-Task Learning Regression via Convex Clustering [0.0]
We propose an MTL method with centroid parameters representing the cluster centers of the tasks.
We show the effectiveness of the proposed method through Monte Carlo simulations and applications to real data.
arXiv Detail & Related papers (2023-04-26T07:25:21Z) - Semisoft Task Clustering for Multi-Task Learning [2.806911268410107]
Multi-task learning (MTL) aims to improve the performance of multiple related prediction tasks by leveraging useful information from them.
We propose a semisoft task clustering approach, which can simultaneously reveal the task clustering structure for both pure and mixed tasks and select the relevant features.
The experimental results based on synthetic and real-world datasets validate the effectiveness and efficiency of the proposed approach.
arXiv Detail & Related papers (2022-11-28T07:23:56Z) - Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance model generalization by sharing representations between related tasks.
We propose a semi-supervised MTL method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z) - Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
We further extend the base model to allow overlapping features and to differentiate the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.