Tensorized LSSVMs for Multitask Regression
- URL: http://arxiv.org/abs/2303.02451v1
- Date: Sat, 4 Mar 2023 16:36:03 GMT
- Title: Tensorized LSSVMs for Multitask Regression
- Authors: Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Johan A.K. Suykens
- Abstract summary: Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.
A new MTL method is proposed by leveraging low-rank tensor analysis and constructing tensorized Least Squares Support Vector Machines, namely the tLSSVM-MTL.
- Score: 48.844191210894245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multitask learning (MTL) can utilize the relatedness between multiple tasks
for performance improvement. The advent of multimodal data allows tasks to be
referenced by multiple indices. High-order tensors are capable of providing
efficient representations for such tasks, while preserving structural
task-relations. In this paper, a new MTL method is proposed by leveraging
low-rank tensor analysis and constructing tensorized Least Squares Support
Vector Machines, namely the tLSSVM-MTL, where multilinear modelling and its
nonlinear extensions can be flexibly exerted. We employ a high-order tensor for
all the weights with each mode relating to an index and factorize it with CP
decomposition, assigning a shared factor for all tasks and retaining
task-specific latent factors along each index. Then an alternating algorithm is
derived for the nonconvex optimization, where each resulting subproblem is
solved by a linear system. Experimental results demonstrate the promising
performance of our tLSSVM-MTL.
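To make the construction concrete, the following is a minimal sketch of the CP-factorized weight tensor and its alternating linear-system updates. It is written in a simplified linear primal form rather than the paper's kernel-based LSSVM dual, and all dimensions, data, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: tasks referenced by a pair of indices (i, j),
# d input features, rank-R CP factors, n samples per task.
I, J, d, R, n = 3, 4, 8, 2, 50
X = [[rng.standard_normal((n, d)) for _ in range(J)] for _ in range(I)]
w0 = rng.standard_normal(d)
y = [[X[i][j] @ (w0 + 0.1 * rng.standard_normal(d)) for j in range(J)]
     for i in range(I)]

lam = 1e-2                       # ridge regularizer
U = rng.standard_normal((d, R))  # shared factor on the feature mode
A = rng.standard_normal((I, R))  # task-specific latent factors, index i
B = rng.standard_normal((J, R))  # task-specific latent factors, index j

def w_task(i, j):
    # CP structure: w_ij = U (a_i * b_j), '*' elementwise over the rank
    return U @ (A[i] * B[j])

for sweep in range(20):
    for i in range(I):           # update a_i: ridge system on Z = X U diag(b_j)
        G, h = lam * np.eye(R), np.zeros(R)
        for j in range(J):
            Z = X[i][j] @ U * B[j]
            G += Z.T @ Z; h += Z.T @ y[i][j]
        A[i] = np.linalg.solve(G, h)
    for j in range(J):           # update b_j symmetrically
        G, h = lam * np.eye(R), np.zeros(R)
        for i in range(I):
            Z = X[i][j] @ U * A[i]
            G += Z.T @ Z; h += Z.T @ y[i][j]
        B[j] = np.linalg.solve(G, h)
    # update the shared U: one (d*R) x (d*R) system, column-major vec of U
    G, h = lam * np.eye(d * R), np.zeros(d * R)
    for i in range(I):
        for j in range(J):
            c = (A[i] * B[j])[:, None]
            G += np.kron(c @ c.T, X[i][j].T @ X[i][j])
            h += np.kron(c.ravel(), X[i][j].T @ y[i][j])
    U = np.linalg.solve(G, h).reshape(R, d).T
```

Each step is a ridge-regression normal equation, which matches the abstract's point that every subproblem of the alternating scheme reduces to a linear system.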
Related papers
- Multi-task learning via robust regularized clustering with non-convex group penalties [0.0]
Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
Existing MTL methods based on this assumption often ignore outlier tasks.
We propose a novel MTL method, multi-task learning via robust regularized clustering (MTLRRC).
arXiv Detail & Related papers (2024-04-04T07:09:43Z)
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
- Multi-task Highly Adaptive Lasso [1.4680035572775534]
We propose a novel, fully nonparametric approach for multi-task learning, the Multi-task Highly Adaptive Lasso (MT-HAL).
MT-HAL simultaneously learns features, samples and task associations important for the common model, while imposing a shared sparse structure among similar tasks.
We show that MT-HAL outperforms sparsity-based MTL competitors across a wide range of simulation studies.
arXiv Detail & Related papers (2023-01-27T23:46:57Z)
- M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design [95.41238363769892]
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly.
Current MTL regimes have to activate nearly the entire model even to execute just a single task.
We present a model-accelerator co-design framework to enable efficient on-device MTL.
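The sparse-activation point is the core mechanism here: a task-conditioned gate selects a few experts, so a single-task forward pass touches only a small slice of the parameters. Below is a hypothetical PyTorch sketch of task-level top-k routing, not M$^3$ViT's actual architecture (which also co-designs the router with the accelerator).

```python
import torch
import torch.nn as nn

class TaskMoELayer(nn.Module):
    """Task-conditioned mixture-of-experts layer (illustrative sketch)."""
    def __init__(self, dim, n_experts=8, n_tasks=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(n_experts))
        self.gate = nn.Embedding(n_tasks, n_experts)  # per-task routing logits
        self.k = k

    def forward(self, x, task_id):
        logits = self.gate.weight[task_id]                    # (n_experts,)
        weights, idx = torch.topk(torch.softmax(logits, -1), self.k)
        weights = weights / weights.sum()                     # renormalize top-k
        # Only the k selected expert MLPs ever execute for this task.
        return sum(w * self.experts[i](x) for w, i in zip(weights, idx.tolist()))

layer = TaskMoELayer(dim=64)
out = layer(torch.randn(10, 64), task_id=1)  # runs 2 of the 8 experts
```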
arXiv Detail & Related papers (2022-10-26T15:40:24Z)
- Multi-Task Learning as a Bargaining Game [63.49888996291245]
In Multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradient combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
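Concretely, the method this summary describes (known as Nash-MTL) seeks per-task weights alpha > 0 satisfying G^T G alpha = 1/alpha, so that each task gradient has a positive inner product with the joint direction G alpha. The damped fixed-point loop below is a crude stand-in solver, an assumption on my part; the paper instead solves a sequence of convex subproblems.

```python
import numpy as np

def nash_mtl_direction(grads, n_iter=50, eps=1e-8):
    """Sketch of the bargaining-game update: find alpha > 0 with
    G^T G alpha = 1/alpha, then step along G alpha."""
    G = np.stack(grads, axis=1)              # (n_params, n_tasks)
    M = G.T @ G                              # task-gradient Gram matrix
    alpha = np.ones(M.shape[1])
    for _ in range(n_iter):
        target = 1.0 / np.maximum(M @ alpha, eps)  # wants alpha = 1/(M alpha)
        alpha = 0.5 * alpha + 0.5 * target         # damping for stability
    return G @ alpha                          # joint update direction

# Two conflicting task gradients: the bargained direction improves both.
g1, g2 = np.array([1.0, 0.2]), np.array([-0.6, 1.0])
d = nash_mtl_direction([g1, g2])
print(g1 @ d > 0, g2 @ d > 0)  # both True at the bargaining solution
```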
arXiv Detail & Related papers (2022-02-02T13:21:53Z)
- Tensor Full Feature Measure and Its Nonconvex Relaxation Applications to Tensor Recovery [1.8899300124593645]
We propose a new tensor sparsity measure called the Full Feature Measure (FFM).
It can simultaneously describe the feature information of each dimension, and it connects the Tucker rank with the tensor tube rank.
Two efficient models based on FFM are proposed, and two alternating direction method of multipliers (ADMM) algorithms are developed to solve them.
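FFM itself is not spelled out in this summary, so as a generic illustration of the alternating ADMM structure that such recovery algorithms share, here is a scaled-form ADMM loop for the simpler nuclear-norm matrix completion problem; it is an assumption, not the paper's algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def admm_completion(Y, mask, tau=1.0, rho=1.0, n_iter=200):
    """min_X tau*||X||_*  s.t.  X = Z and Z matches Y on observed entries."""
    X = np.zeros_like(Y); Z = np.zeros_like(Y); Ulag = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(Z - Ulag, tau / rho)   # X-step: low-rank prox
        Z = X + Ulag                   # Z-step: project onto the data
        Z[mask] = Y[mask]              #   constraint on observed entries
        Ulag += X - Z                  # dual ascent on the split X = Z
    return X

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 matrix
mask = rng.random(L.shape) < 0.5
X_hat = admm_completion(L, mask)
print(np.linalg.norm(X_hat - L) / np.linalg.norm(L))  # small if recovery works
```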
arXiv Detail & Related papers (2021-09-25T01:44:34Z)
- MLCTR: A Fast Scalable Coupled Tensor Completion Based on Multi-Layer Non-Linear Matrix Factorization [3.6978630614152013]
This paper focuses on the embedding learning aspect of the tensor completion problem and proposes a new multi-layer neural network architecture for factorization and completion (MLCTR)
The network architecture entails multiple advantages: a series of low-rank matrix factorization building blocks to minimize overfitting, interleaved transfer functions in each layer for non-linearity, and by-pass connections to reduce the diminishing gradient problem and allow deeper networks.
Our algorithm is highly efficient for imputing missing values in the EPS data.
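A hypothetical PyTorch sketch of the three listed ingredients (low-rank factorization blocks, interleaved nonlinear transfer functions, by-pass connections), not the paper's exact network; all layer sizes are made up.

```python
import torch
import torch.nn as nn

class FactorizationBlock(nn.Module):
    """One low-rank building block with a by-pass (skip) connection."""
    def __init__(self, dim, rank):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # low-rank bottleneck
        self.up = nn.Linear(rank, dim, bias=False)
        self.act = nn.ReLU()                          # interleaved nonlinearity
    def forward(self, x):
        return x + self.act(self.up(self.down(x)))    # skip fights vanishing grads

class EmbeddingCompleter(nn.Module):
    """Predict a tensor entry from learned index embeddings (sketch)."""
    def __init__(self, n_i, n_j, n_k, dim=32, rank=8, depth=3):
        super().__init__()
        self.emb = nn.ModuleList(nn.Embedding(n, dim) for n in (n_i, n_j, n_k))
        self.blocks = nn.Sequential(*[FactorizationBlock(dim, rank)
                                      for _ in range(depth)])
        self.out = nn.Linear(dim, 1)
    def forward(self, i, j, k):
        h = self.emb[0](i) * self.emb[1](j) * self.emb[2](k)  # index interaction
        return self.out(self.blocks(h)).squeeze(-1)

model = EmbeddingCompleter(10, 12, 14)
i, j, k = (torch.randint(0, n, (5,)) for n in (10, 12, 14))
vals = model(i, j, k)  # (5,) predicted tensor entries
```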
arXiv Detail & Related papers (2021-09-04T03:08:34Z)
- Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To avoid the combinatorial growth of parameters that explicit interaction modelling incurs, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
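The useful property of such CP-parameterized models is that the weight tensor is never materialized: its inner product with a rank-one feature tensor collapses into per-mode inner products. A minimal sketch with hypothetical dimensions and random stand-ins for the feature maps:

```python
import numpy as np

def cp_predict(factors, feats):
    """Score <W, phi_1(x) o ... o phi_m(x)> where W = sum_r u_1^r o ... o u_m^r
    is kept in CP form. factors[d] is (dim_d, R); feats[d] is phi_d(x)."""
    prods = np.ones(factors[0].shape[1])   # one running product per rank term
    for Ud, phi in zip(factors, feats):
        prods *= phi @ Ud                  # <u_d^r, phi_d(x)> for all r at once
    return prods.sum()

rng = np.random.default_rng(0)
dims, R = (5, 6, 7), 4
factors = [rng.standard_normal((d, R)) for d in dims]
feats = [rng.standard_normal(d) for d in dims]  # stand-ins for phi_d(x)
print(cp_predict(factors, feats))
```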
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.