Self-attention Multi-view Representation Learning with
Diversity-promoting Complementarity
- URL: http://arxiv.org/abs/2201.00168v1
- Date: Sat, 1 Jan 2022 11:17:02 GMT
- Title: Self-attention Multi-view Representation Learning with
Diversity-promoting Complementarity
- Authors: Jian-wei Liu, Xi-hao Ding, Run-kun Lu, Xionglin Luo
- Abstract summary: Multi-view learning attempts to generate a model with better performance by exploiting the consensus and/or complementarity among multi-view data.
We propose a novel supervised multi-view representation learning algorithm, called Self-Attention Multi-View network with Diversity-Promoting Complementarity.
- Score: 4.213976613562574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view learning attempts to generate a model with better
performance by exploiting the consensus and/or complementarity among
multi-view data. However, in terms of complementarity, most existing
approaches can only find representations with a single kind of
complementarity rather than complementary information with diversity. In this
paper, to utilize both complementarity and consistency simultaneously and to
give free rein to the potential of deep learning in grasping
diversity-promoting complementarity for multi-view representation learning,
we propose a novel supervised multi-view representation learning algorithm,
called Self-Attention Multi-View network with Diversity-Promoting
Complementarity (SAMVDPC), which exploits consistency through a group of
encoders and uses self-attention to find complementary information entailing
diversity. Extensive experiments conducted on eight real-world datasets
demonstrate the effectiveness of the proposed method and show its superiority
over several baseline methods, which consider only a single kind of
complementary information.
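The abstract describes the architecture only at a high level: per-view encoders exploit consistency, and self-attention across the per-view embeddings surfaces diverse complementary information. Below is a minimal PyTorch sketch of that idea; the layer sizes, attention configuration, mean pooling, and classification head are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SAMVDPCSketch(nn.Module):
    """Illustrative sketch: per-view encoders + self-attention over views."""

    def __init__(self, view_dims, d_model=128, n_heads=4, n_classes=10):
        super().__init__()
        # One encoder per view maps every view into a shared latent space,
        # which is where consistency among the views is exploited.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d_model), nn.ReLU(),
                          nn.Linear(d_model, d_model))
            for d in view_dims)
        # Self-attention over the stacked view embeddings lets each view
        # attend to the others, surfacing complementary information.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, views):
        # views: list of per-view tensors, each of shape (batch, view_dim)
        z = torch.stack([enc(v) for enc, v in zip(self.encoders, views)],
                        dim=1)                     # (batch, n_views, d_model)
        fused, _ = self.attn(z, z, z)              # attend across views
        return self.classifier(fused.mean(dim=1)) # pool over views, classify
```

For two views one would call, e.g., SAMVDPCSketch([784, 310])([x1, x2]) and train with an ordinary supervised loss such as cross-entropy; the paper's actual training objective may differ.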
Related papers
- Decoupling Common and Unique Representations for Multimodal Self-supervised Learning [22.12729786091061]
We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning.
By distinguishing inter- and intra-modal embeddings through multimodal redundancy reduction, DeCUR can integrate complementary information across different modalities.
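As a concrete reference point for the redundancy reduction this summary mentions, here is a Barlow Twins-style cross-correlation loss in PyTorch. It is a generic sketch, not DeCUR's implementation; the normalization, the off-diagonal weight lam, and the two-modality setup are assumptions.

```python
import torch

def redundancy_reduction_loss(z1, z2, lam=5e-3):
    # z1, z2: (batch, dim) embeddings from two modalities.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)
    c = (z1.T @ z2) / z1.shape[0]       # cross-correlation matrix (dim, dim)
    diag = torch.diagonal(c)
    on_diag = ((diag - 1) ** 2).sum()   # align matching dimensions
    off_diag = (c ** 2).sum() - (diag ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag
```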
arXiv Detail & Related papers (2023-09-11T08:35:23Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Dual Representation Learning for One-Step Clustering of Multi-View Data [30.131568561100817]
We propose a novel one-step multi-view clustering method by exploiting the dual representation of both the common and specific information of different views.
With this framework, the representation learning and the clustering partition mutually benefit each other, which effectively improves the clustering performance.
arXiv Detail & Related papers (2022-08-30T14:20:26Z)
- Latent Heterogeneous Graph Network for Incomplete Multi-View Learning [57.49776938934186]
We propose a novel Latent Heterogeneous Graph Network (LHGN) for incomplete multi-view learning.
By learning a unified latent representation, a trade-off between consistency and complementarity among different views is implicitly realized.
To avoid any inconsistencies between the training and test phases, a transductive learning technique based on graph learning is applied to classification tasks.
arXiv Detail & Related papers (2022-08-29T15:14:21Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the information bottleneck (IB) to grasp the intrinsic correlation between observations and semantic labels.
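For readers unfamiliar with the information bottleneck machinery, here is a minimal variational IB objective with a Gaussian posterior. It is in the same spirit as, but not identical to, the paper's bottlenecks; beta and the standard-normal prior are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    # mu, logvar: (batch, dim) parameters of the Gaussian posterior q(z|x).
    # KL(q(z|x) || N(0, I)) upper-bounds the compression term I(z; x).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
    # Cross-entropy lower-bounds the predictive term I(z; y).
    return F.cross_entropy(logits, labels) + beta * kl
```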
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- A unified framework based on graph consensus term for multi-view learning [5.168659132277719]
We propose a novel multi-view learning framework, which aims to subsume most existing graph embedding works under a unified formulation.
Our method explores the graph structure in each view independently to preserve the diversity property of graph embedding methods.
In this way, the diversity and complementary information among different views can be considered simultaneously.
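One common way to write such a graph consensus term is a Laplacian smoothness penalty that ties a shared embedding to every view's graph. The sketch below is illustrative only; the unnormalized Laplacian and uniform view weights are assumptions, not the paper's formulation.

```python
import torch

def graph_consensus_term(z, adjacencies):
    # z: (n, d) shared embedding; adjacencies: list of (n, n) per-view graphs.
    # Each view keeps its own graph (diversity) while the shared embedding is
    # pulled toward every view's structure (consensus): sum_v tr(z^T L_v z).
    loss = z.new_zeros(())
    for a in adjacencies:
        lap = torch.diag(a.sum(dim=1)) - a  # unnormalized graph Laplacian
        loss = loss + torch.trace(z.T @ lap @ z)
    return loss / len(adjacencies)
```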
arXiv Detail & Related papers (2021-05-25T09:22:21Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
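A hedged sketch of what adding semantic-class positives to InfoNCE can look like: each query treats every same-class key in the batch as a positive rather than only its own augmented pair. The temperature and the averaging over positives are illustrative choices, not the paper's exact co-training loss.

```python
import torch

def infonce_with_class_positives(q, k, labels, tau=0.07):
    # q, k: (batch, dim) L2-normalized query/key embeddings; labels: (batch,)
    sim = q @ k.T / tau                                  # similarity logits
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Each query averages the log-likelihood over all of its positives:
    # the matching key (diagonal) plus every same-class key in the batch.
    return -(log_prob * pos).sum(dim=1).div(pos.sum(dim=1)).mean()
```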
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
- Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework that improves multi-view classification in two respects: intra-view representation learning and selective inter-view fusion.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.