Multi-View representation learning in Multi-Task Scene
- URL: http://arxiv.org/abs/2201.05829v1
- Date: Sat, 15 Jan 2022 11:26:28 GMT
- Title: Multi-View representation learning in Multi-Task Scene
- Authors: Run-kun Lu, Jian-wei Liu, Si-ming Lian, Xin Zuo
- Abstract summary: We propose a novel semi-supervised algorithm, termed Multi-Task Multi-View learning based on Common and Special Features (MTMVCSF).
An anti-noise multi-task multi-view algorithm, called AN-MTMVCSF, is also proposed; it is strongly robust to noisy labels.
The effectiveness of these algorithms is demonstrated by a series of well-designed experiments on both real-world and synthetic data.
- Score: 4.509968166110557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent decades have witnessed considerable progress in both multi-task learning and multi-view learning, but the setting that considers both learning scenarios simultaneously has received comparatively little attention. How to exploit the latent representations of the multiple views of each task to improve the performance of every learning task is a challenging problem. Motivated by this, we propose a novel semi-supervised algorithm, termed Multi-Task Multi-View learning based on Common and Special Features (MTMVCSF). In general, multiple views are different aspects of the same object, and every view carries information about that object that is either common to all views or special to that view. Accordingly, for each learning task we mine a joint latent factor from its multiple views, consisting of the special features of each view together with the features common to all views. In this way, the original multi-task multi-view data degenerates into ordinary multi-task data, and exploiting the correlations among the tasks improves the performance of the learning algorithm. Another clear advantage of this approach is that latent representations of the unlabeled instances are obtained through the constraint that a regression task imposes on the labeled instances. Classification and semi-supervised clustering carried out on these latent representations perform markedly better than on the raw data. Furthermore, an anti-noise multi-task multi-view algorithm, called AN-MTMVCSF, is proposed, which is strongly robust to noisy labels. The effectiveness of both algorithms is demonstrated by a series of well-designed experiments on real-world and synthetic data.
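To make the common/special decomposition concrete, the following is a minimal NumPy sketch of one plausible single-task formulation: each view X_v is reconstructed from a latent code shared by all views (H_c) concatenated with a view-specific code (H_v), through a view-specific loading matrix W_v, under a squared reconstruction loss. The function name, the plain gradient-descent solver, and all hyperparameters are illustrative assumptions rather than the authors' exact MTMVCSF formulation, which additionally couples multiple tasks and a semi-supervised regression constraint.

```python
import numpy as np

def common_special_factorization(views, k_common=5, k_special=3,
                                 lr=1e-3, n_iters=2000, seed=0):
    """views: list of (n_samples, d_v) arrays, the views of one task.
    Returns a shared code H_c and one special code H_v per view."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    Hc = 0.1 * rng.standard_normal((n, k_common))                    # common latent factor
    Hs = [0.1 * rng.standard_normal((n, k_special)) for _ in views]  # special latent factors
    Ws = [0.1 * rng.standard_normal((k_common + k_special, X.shape[1])) for X in views]

    for _ in range(n_iters):
        grad_Hc = np.zeros_like(Hc)
        for v, X in enumerate(views):
            Z = np.hstack([Hc, Hs[v]])        # joint latent code for view v
            R = Z @ Ws[v] - X                 # reconstruction residual
            grad_Z = R @ Ws[v].T
            grad_Hc += grad_Z[:, :k_common]   # common part aggregates gradients from all views
            Hs[v] -= lr * grad_Z[:, k_common:]
            Ws[v] -= lr * (Z.T @ R)
        Hc -= lr * grad_Hc
    return Hc, Hs

# Example with two synthetic views of the same 100 objects; downstream
# classification or clustering would use the learned codes instead of raw data.
rng = np.random.default_rng(42)
X1, X2 = rng.standard_normal((100, 20)), rng.standard_normal((100, 30))
Hc, Hs = common_special_factorization([X1, X2])
```

In the full multi-task setting described in the abstract, each task would yield such a joint latent code, and the original multi-task multi-view problem would then reduce to a multi-task problem over these codes.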
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or even non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - MmAP : Multi-modal Alignment Prompt for Cross-domain Multi-task Learning [29.88567810099265]
Multi-task learning is designed to train multiple correlated tasks simultaneously.
To tackle this challenge, we integrate the decoder-free vision-language model CLIP.
We propose Multi-modal Alignment Prompt (MmAP) for CLIP, which aligns the text and visual modalities during the fine-tuning process.
arXiv Detail & Related papers (2023-12-14T03:33:02Z) - Multi-Task Learning for Visual Scene Understanding [7.191593674138455]
This thesis is concerned with multi-task learning in the context of computer vision.
We propose several methods that tackle important aspects of multi-task learning.
The results show several advances in the state-of-the-art of multi-task learning.
arXiv Detail & Related papers (2022-03-28T16:57:58Z) - On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks. Yet these methods rely on a subjective assumption about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Task Allocation (STA), a mechanism that addresses this issue through a task-allocation approach in which each sample is randomly allocated a subset of tasks (a minimal sketch of this random-allocation idea appears after this list).
For further progress, we propose Interleaved Task Allocation (ISTA) to iteratively allocate all tasks.
arXiv Detail & Related papers (2022-03-06T11:57:18Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose a semi-supervised multi-task learning method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z) - Generative Modeling for Multi-task Visual Learning [40.96212750592383]
We consider a novel problem of learning a shared generative model that is useful across various visual perception tasks.
We propose a general multi-task oriented generative modeling framework, by coupling a discriminative multi-task network with a generative network.
Our framework consistently outperforms state-of-the-art multi-task approaches.
arXiv Detail & Related papers (2021-06-25T03:42:59Z) - ASM2TV: An Adaptive Semi-Supervised Multi-Task Multi-View Learning
Framework [7.64589466094347]
Human activity recognition (HAR) in the Internet of Things can be formalized as a multi-task multi-view learning problem.
We introduce ASM2TV, a novel framework for semi-supervised multi-task multi-view learning.
arXiv Detail & Related papers (2021-05-18T16:15:32Z) - Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z) - Exploit Clues from Views: Self-Supervised and Regularized Learning for
Multiview Object Recognition [66.87417785210772]
This work investigates the problem of multiview self-supervised learning (MV-SSL).
A novel surrogate task for self-supervised learning is proposed by pursuing an "object-invariant" representation.
Experiments show that the recognition and retrieval results using view-invariant prototype embedding (VISPE) outperform other self-supervised learning methods.
arXiv Detail & Related papers (2020-03-28T07:06:06Z)
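The STA entry above describes allocating each sample a random subset of tasks; below is a minimal PyTorch sketch of how such a per-sample task mask could be applied to already-computed per-task losses. The name sta_loss, the n_active parameter, and the uniform-random masking are assumptions for illustration, not the paper's implementation.

```python
import torch

def sta_loss(per_task_losses, n_active=1, generator=None):
    """per_task_losses: list of T tensors of shape (batch,), the per-sample
    loss of each task. Each sample keeps only n_active randomly chosen tasks."""
    T = len(per_task_losses)
    batch = per_task_losses[0].shape[0]
    scores = torch.rand(batch, T, generator=generator)           # random ranking of tasks per sample
    keep = torch.zeros(batch, T)
    keep.scatter_(1, scores.argsort(dim=1)[:, :n_active], 1.0)   # 0/1 mask of kept tasks
    stacked = torch.stack(per_task_losses, dim=1)                # (batch, T)
    return (stacked * keep).sum() / keep.sum()                   # average over kept (sample, task) pairs

# Example: three tasks, batch of 8, each sample trains on one random task this step
losses = [torch.rand(8, requires_grad=True) for _ in range(3)]
total = sta_loss(losses, n_active=1)
total.backward()
```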
This list is automatically generated from the titles and abstracts of the papers on this site.