Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs
- URL: http://arxiv.org/abs/2308.16056v1
- Date: Wed, 30 Aug 2023 14:28:26 GMT
- Title: Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs
- Authors: Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Xiaolin Huang, Johan A.K.
Suykens
- Abstract summary: Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
- Score: 65.42104819071444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multitask learning (MTL) leverages task-relatedness to enhance performance.
With the emergence of multimodal data, tasks can now be referenced by multiple
indices. In this paper, we employ high-order tensors, with each mode
corresponding to a task index, to naturally represent tasks referenced by
multiple indices and preserve their structural relations. Based on this
representation, we propose a general framework of low-rank MTL methods with
tensorized support vector machines (SVMs) and least squares support vector
machines (LSSVMs), where a CP factorization is applied to the coefficient
tensor. Our approach models the task relations through a linear combination of
shared factors weighted by task-specific factors, and it generalizes to both
classification and regression problems. Through an alternating optimization
scheme and the Lagrangian function, each subproblem is transformed into a
convex problem, formulated in the dual as a quadratic program or a linear
system. In contrast to previous MTL frameworks, our
decision function in the dual induces a weighted kernel function with a
task-coupling term characterized by the similarities of the task-specific
factors, better revealing the explicit relations across tasks in MTL.
Experimental results validate the effectiveness and superiority of our proposed
methods compared to existing state-of-the-art approaches in MTL. The
implementation code will be available at https://github.com/liujiani0216/TSVM-MTL.
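The snippet below is a minimal NumPy sketch of the structure the abstract describes, not the authors' implementation: it builds a CP-factorized coefficient tensor for tasks indexed by two modes, recovers each task's weight vector as shared factors weighted by task-specific factors, and illustrates the kind of task-coupling similarity and dual linear system mentioned above. All sizes, variable names (U, A1, A2, gamma), and the linear kernel are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: d features, tasks indexed by two modes (T1 x T2), CP rank R.
d, T1, T2, R = 5, 3, 4, 2

# CP factors of the coefficient tensor: U holds R shared latent directions,
# A1 and A2 hold the task-specific weights for each task index.
U = rng.standard_normal((d, R))
A1 = rng.standard_normal((T1, R))
A2 = rng.standard_normal((T2, R))

def task_weight(i, j):
    """Weight vector of task (i, j): shared factors weighted by task-specific factors."""
    return U @ (A1[i] * A2[j])          # sum_r U[:, r] * A1[i, r] * A2[j, r]

# Full coefficient tensor W (d x T1 x T2) reconstructed from the CP factors.
W = np.einsum('dr,ir,jr->dij', U, A1, A2)
assert np.allclose(W[:, 1, 2], task_weight(1, 2))

# Linear decision value for a sample x routed to task (1, 2); bias omitted.
x = rng.standard_normal(d)
f_value = task_weight(1, 2) @ x

# A "task-coupling" weight of the kind that scales the kernel between a sample of
# task (1, 2) and a sample of task (0, 3): the similarity of their
# task-specific factor combinations.
coupling = (A1[1] * A2[2]) @ (A1[0] * A2[3])

# Each alternating subproblem is convex; for LSSVMs it reduces to a linear system.
# As a stand-in, the standard single-task LSSVM dual:
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
n, gamma = 6, 10.0
Xs = rng.standard_normal((n, d))
ys = rng.standard_normal(n)
K = Xs @ Xs.T                            # linear kernel, for the sketch only
lhs = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                [np.ones((n, 1)),  K + np.eye(n) / gamma]])
sol = np.linalg.solve(lhs, np.concatenate(([0.0], ys)))
bias, alphas = sol[0], sol[1:]
print(round(f_value, 3), round(coupling, 3), round(bias, 3))
```

Per the abstract, the actual method alternates over the factor blocks, solving each subproblem in the dual as a convex quadratic program (SVM case) or a linear system (LSSVM case), with the kernel between samples from two tasks weighted by the similarity of their task-specific factors.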
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is preserving the interpretability of the reduced targets and features through aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- Multi-task learning via robust regularized clustering with non-convex group penalties [0.0]
Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
Existing MTL methods based on this assumption often ignore outlier tasks.
We propose a novel MTL method, multi-task learning via robust regularized clustering (MTLRRC).
arXiv Detail & Related papers (2024-04-04T07:09:43Z)
- Online Multi-Task Learning with Recursive Least Squares and Recursive Kernel Methods [50.67996219968513]
We introduce two novel approaches for Online Multi-Task Learning (MTL) Regression Problems.
We achieve exact and approximate recursions whose per-instance cost is quadratic in the dimension of the input space.
We compare our online MTL methods to other contenders in a real-world wind speed forecasting case study.
arXiv Detail & Related papers (2023-08-03T01:41:34Z)
- Tensorized LSSVMs for Multitask Regression [48.844191210894245]
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.
A new MTL method, the tLSSVM-MTL, is proposed by leveraging low-rank tensor analysis and tensorized Least Squares Support Vector Machines (LSSVMs).
arXiv Detail & Related papers (2023-03-04T16:36:03Z)
- Multi-task Highly Adaptive Lasso [1.4680035572775534]
We propose a novel, fully nonparametric approach for multi-task learning, the Multi-task Highly Adaptive Lasso (MT-HAL).
MT-HAL simultaneously learns features, samples and task associations important for the common model, while imposing a shared sparse structure among similar tasks.
We show that MT-HAL outperforms sparsity-based MTL competitors across a wide range of simulation studies.
arXiv Detail & Related papers (2023-01-27T23:46:57Z)
- Multi-Task Learning for Sparsity Pattern Heterogeneity: Statistical and Computational Perspectives [10.514866749547558]
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets.
A key novelty of our framework is that it allows the sparsity pattern of regression coefficients and the values of non-zero coefficients to differ across tasks.
Our methods encourage models to share information across tasks by separately encouraging 1) coefficient supports and/or 2) nonzero coefficient values to be similar.
This allows models to borrow strength during variable selection even when non-zero coefficient values differ across tasks.
arXiv Detail & Related papers (2022-12-16T19:52:25Z)
- M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design [95.41238363769892]
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly.
Current MTL regimes have to activate nearly the entire model even to just execute a single task.
We present a model-accelerator co-design framework to enable efficient on-device MTL.
arXiv Detail & Related papers (2022-10-26T15:40:24Z)
- Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
arXiv Detail & Related papers (2022-02-02T13:21:53Z)
- Learning Functions to Study the Benefit of Multitask Learning [25.325601027501836]
We study and quantify the generalization patterns of multitask learning (MTL) models for sequence labeling tasks.
Although multitask learning has achieved improved performance in some problems, there are also tasks that lose performance when trained together.
arXiv Detail & Related papers (2020-06-09T23:51:32Z)