Multi-task twin support vector machine with Universum data
- URL: http://arxiv.org/abs/2206.10978v1
- Date: Wed, 22 Jun 2022 11:05:58 GMT
- Title: Multi-task twin support vector machine with Universum data
- Authors: Hossein Moosaei, Fatemeh Bazikar, Milan Hladík
- Abstract summary: This study looks at the challenge of multi-task learning using Universum data to employ non-target task data.
It proposes a multi-task twin support vector machine with Universum data (UMTSVM) and provides two approaches to its solution.
Numerical experiments on several popular multi-task data sets and medical data sets demonstrate the efficiency of the proposed methods.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multi-task learning (MTL) has emerged as a promising topic of machine
learning in recent years, aiming to enhance the performance of numerous related
learning tasks by exploiting beneficial information. During the training phase,
most of the existing multi-task learning models concentrate entirely on the
target task data and ignore the non-target task data contained in the target
tasks. To address this issue, Universum data, which do not correspond to any
class of a classification problem, may be used as prior knowledge in the
training model. This study looks at the challenge of multi-task learning using
Universum data to employ non-target task data, which leads to better
performance. It proposes a multi-task twin support vector machine with
Universum data (UMTSVM) and provides two approaches to its solution. The first
approach takes into account the dual formulation of UMTSVM and tries to solve a
quadratic programming problem. The second approach formulates a least-squares
version of UMTSVM and refers to it as LS-UMTSVM to further increase the
generalization performance. The solution of the two primal problems in
LS-UMTSVM is simplified to solving just two systems of linear equations,
resulting in an incredibly simple and quick approach. Numerical experiments on
several popular multi-task data sets and medical data sets demonstrate the
efficiency of the proposed methods.
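The least-squares reduction described in the abstract can be illustrated with the classical least-squares twin SVM, where each of the two non-parallel hyperplanes is obtained by solving one linear system. This is a minimal sketch of that baseline only; the Universum term that distinguishes LS-UMTSVM is omitted, and the function names and the regularization parameter `reg` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ls_twin_svm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
    """Fit the two non-parallel hyperplanes of a least-squares twin SVM.

    A: samples of class +1 (m1 x n), B: samples of class -1 (m2 x n).
    Returns (z1, z2), where each z = [w; b] defines a hyperplane w.x + b = 0.
    """
    E = np.hstack([A, np.ones((A.shape[0], 1))])  # augmented matrix [A, e]
    F = np.hstack([B, np.ones((B.shape[0], 1))])  # augmented matrix [B, e]
    I = np.eye(E.shape[1])
    # Plane 1: passes close to class +1, keeps class -1 at distance >= 1.
    # Minimizes (1/2c1)||E z||^2 + (1/2)||F z + e||^2 -> one linear system.
    z1 = -np.linalg.solve(E.T @ E / c1 + F.T @ F + reg * I,
                          F.T @ np.ones(F.shape[0]))
    # Plane 2: passes close to class -1, keeps class +1 at distance >= 1.
    z2 = np.linalg.solve(F.T @ F / c2 + E.T @ E + reg * I,
                         E.T @ np.ones(E.shape[0]))
    return z1, z2

def ls_twin_svm_predict(X, z1, z2):
    """Assign each point to the class whose hyperplane is nearer."""
    Xe = np.hstack([X, np.ones((X.shape[0], 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)
```

Replacing the quadratic programs of the dual approach with these two `np.linalg.solve` calls is what makes the least-squares variant so fast; incorporating Universum samples would add a further penalty term to each system.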
Related papers
- Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework [81.29965270493238]
We develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) for wireless communication applications.
The dataset includes a diverse set of multi-hop questions, including true/false and multiple-choice types, spanning varying difficulty levels from easy to hard.
We introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data.
arXiv Detail & Related papers (2025-01-16T16:19:53Z)
- Multi-task Representation Learning for Mixed Integer Linear Programming [13.106799330951842]
This paper introduces the first multi-task learning framework for ML-guided MILP solving.
We demonstrate that our multi-task learning model performs similarly to specialized models within the same distribution.
It significantly outperforms them in generalization across problem sizes and tasks.
arXiv Detail & Related papers (2024-12-18T23:33:32Z)
- SGW-based Multi-Task Learning in Vision Tasks [8.459976488960269]
As the scale of datasets expands and the complexity of tasks increases, knowledge sharing becomes increasingly challenging.
We propose an information bottleneck knowledge extraction module (KEM).
This module aims to reduce inter-task interference by constraining the flow of information, thereby reducing computational complexity.
arXiv Detail & Related papers (2024-10-03T13:56:50Z)
- MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic [6.46176287368784]
We propose Model Exclusive Task Arithmetic for merging GPT-scale models.
Our proposed MetaGPT is data-agnostic and bypasses the heavy search process, making it cost-effective and easy to implement for LLMs.
arXiv Detail & Related papers (2024-06-17T10:12:45Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
In the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
In the instance level, we measure the difficulty of all instances per task, then divide them into the easy-to-difficult mini-batches for training.
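The instance-level step described here, scoring instances by difficulty and chunking them into easy-to-difficult mini-batches, can be sketched as follows. This is a hypothetical illustration of the general idea; the difficulty measure (a precomputed per-instance loss) and the function name are assumptions, not Data-CUBE's actual scoring.

```python
import numpy as np

def curriculum_minibatches(difficulty, batch_size):
    """Order instances from easy (low difficulty score) to difficult,
    then split the ordering into consecutive mini-batches of indices."""
    order = np.argsort(difficulty)  # easiest instances first
    return [order[i:i + batch_size].tolist()
            for i in range(0, len(order), batch_size)]
```

For example, with per-instance scores `[0.9, 0.1, 0.5, 0.3]` and a batch size of 2, the batches are `[[1, 3], [2, 0]]`: the two easiest instances are trained on first.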
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little, or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- STG-MTL: Scalable Task Grouping for Multi-Task Learning Using Data Map [4.263847576433289]
Multi-Task Learning (MTL) is a powerful technique that has gained popularity due to its performance improvement over traditional Single-Task Learning (STL).
However, MTL is often challenging because there is an exponential number of possible task groupings.
We propose a new data-driven method that addresses these challenges and provides a scalable and modular solution for classification task grouping.
arXiv Detail & Related papers (2023-07-07T03:54:26Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (textscMTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find textscMTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- On-edge Multi-task Transfer Learning: Model and Practice with Data-driven Task Allocation [20.20889051697198]
We show that task allocation with task importance for Multi-task Transfer Learning (MTL) is a variant of the NP-complete Knapsack problem.
We propose a Data-driven Cooperative Task Allocation (DCTA) approach to solve TATIM with high computational efficiency.
Our DCTA reduces processing time by a factor of 3.24 and saves 48.4% of energy consumption compared with the state of the art when solving TATIM.
arXiv Detail & Related papers (2021-07-06T08:24:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.