Multi-Task Learning Regression via Convex Clustering
- URL: http://arxiv.org/abs/2304.13342v1
- Date: Wed, 26 Apr 2023 07:25:21 GMT
- Title: Multi-Task Learning Regression via Convex Clustering
- Authors: Akira Okazaki, Shuichi Kawano
- Abstract summary: We propose an MTL method with a centroid parameter representing a cluster center of the task.
We show the effectiveness of the proposed method through Monte Carlo simulations and applications to real data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning (MTL) is a methodology that aims to improve the general
performance of estimation and prediction by sharing common information among
related tasks. In MTL, various assumptions about the relationships among
tasks, together with methods to incorporate them, have been considered. A
natural assumption in practical situations is that tasks fall into clusters
according to their characteristics. Under this assumption, the group fused
regularization approach clusters the tasks by shrinking the differences among tasks. This
enables us to transfer common information within the same cluster. However,
this approach also transfers the information between different clusters, which
worsens the estimation and prediction. To overcome this problem, we propose an
MTL method with a centroid parameter representing a cluster center of the task.
Because this model separates parameters into the parameters for regression and
the parameters for clustering, we can improve estimation and prediction
accuracy for regression coefficient vectors. We show the effectiveness of the
proposed method through Monte Carlo simulations and applications to real data.
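The abstract's idea can be sketched as a simplified objective: a squared-error fit per task, a ridge-style pull of each regression vector w_k toward its own centroid u_k, and a pairwise group-fused penalty that shrinks centroids toward each other (the convex-clustering term). The function name `mtl_convex_clustering`, the penalty weights `lam1`/`lam2`, and the plain subgradient-descent solver are illustrative assumptions, not the paper's actual estimator or algorithm.

```python
import numpy as np

def mtl_convex_clustering(Xs, ys, lam1=1.0, lam2=0.1, lr=2e-3, n_iter=3000, seed=0):
    """Subgradient descent on a simplified MTL objective with per-task
    centroid parameters u_k (an illustrative sketch, not the paper's method):

        sum_k ||y_k - X_k w_k||^2 / n_k        # per-task fit
        + lam1 * sum_k ||w_k - u_k||^2          # pull w_k toward its centroid
        + lam2 * sum_{k<l} ||u_k - u_l||_2      # convex clustering of centroids

    Separating w_k (regression) from u_k (clustering) means the fusion
    penalty acts only on the centroids, as the abstract describes.
    """
    rng = np.random.default_rng(seed)
    T = len(Xs)                                # number of tasks
    p = Xs[0].shape[1]                         # number of features
    W = rng.normal(scale=0.01, size=(T, p))    # regression coefficients per task
    U = W.copy()                               # centroid parameters per task
    for _ in range(n_iter):
        gW = np.zeros_like(W)
        gU = np.zeros_like(U)
        for k in range(T):
            n_k = Xs[k].shape[0]
            resid = Xs[k] @ W[k] - ys[k]
            gW[k] = 2 * Xs[k].T @ resid / n_k + 2 * lam1 * (W[k] - U[k])
            gU[k] = -2 * lam1 * (W[k] - U[k])
        # subgradient of the pairwise group-fused term on the centroids
        for k in range(T):
            for l in range(k + 1, T):
                d = U[k] - U[l]
                nrm = np.linalg.norm(d)
                if nrm > 1e-12:                # zero subgradient at a tie
                    gU[k] += lam2 * d / nrm
                    gU[l] -= lam2 * d / nrm
        W -= lr * gW
        U -= lr * gU
    return W, U
```

With two tasks sharing one true coefficient vector and a third task using a different one, the fitted centroids of the first two tasks end up closer to each other than to the third, illustrating how shared information flows within, rather than across, clusters.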
Related papers
- Time Series Clustering with General State Space Models via Stochastic Variational Inference [0.0]
We propose a novel method of model-based time series clustering with mixtures of general state space models (MSSMs)
An advantage of the proposed method is that it enables the use of time series models appropriate to the specific time series.
Experiments on simulated datasets show that the proposed method is effective for clustering, parameter estimation, and estimating the number of clusters.
arXiv Detail & Related papers (2024-06-29T12:48:53Z) - Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z) - Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z) - Multi-task learning via robust regularized clustering with non-convex group penalties [0.0]
Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
Existing MTL methods that assume a cluster structure among tasks often ignore outlier tasks.
We propose a novel MTL method called multi-task learning via robust regularized clustering (MTLRRC).
arXiv Detail & Related papers (2024-04-04T07:09:43Z) - Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - A parallelizable model-based approach for marginal and multivariate clustering [0.0]
This paper develops a clustering method that takes advantage of the sturdiness of model-based clustering.
We tackle this issue by specifying a finite mixture model per margin that allows each margin to have a different number of clusters.
The proposed approach is computationally appealing as well as more tractable for moderate to high dimensions than a 'full' (joint) model-based clustering approach.
arXiv Detail & Related papers (2022-12-07T23:54:41Z) - A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Cluster-Specific Predictions with Multi-Task Gaussian Processes [4.368185344922342]
A model involving Gaussian processes (GPs) is introduced to handle multi-task learning, clustering, and prediction.
The model is instantiated as a mixture of multi-task GPs with common mean processes.
The overall algorithm, called MagmaClust, is publicly available as an R package.
arXiv Detail & Related papers (2020-11-16T11:08:59Z) - Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL)
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.