Deep Contrastive Learning for Multi-View Network Embedding
- URL: http://arxiv.org/abs/2108.08296v1
- Date: Mon, 16 Aug 2021 06:29:18 GMT
- Title: Deep Contrastive Learning for Multi-View Network Embedding
- Authors: Mengqi Zhang, Yanqiao Zhu, Shu Wu and Liang Wang
- Abstract summary: Multi-view network embedding aims at projecting nodes in the network to low-dimensional vectors.
Most contrastive learning-based methods rely heavily on high-quality graph embeddings.
We design a novel node-to-node Contrastive learning framework for Multi-view network Embedding (CREME).
- Score: 20.035449838566503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view network embedding aims at projecting nodes in the network to
low-dimensional vectors, while preserving their multiple relations and
attribute information. Contrastive learning-based methods have preliminarily
shown promising performance on this task. However, most contrastive
learning-based methods rely heavily on high-quality graph embeddings and pay
little attention to the relationships between different graph views. To
address these deficiencies, we design a novel node-to-node Contrastive
learning framework for
deficiencies, we design a novel node-to-node Contrastive learning framework for
Multi-view network Embedding (CREME), which mainly contains two contrastive
objectives: Multi-view fusion InfoMax and Inter-view InfoMin. The former
objective distills information from embeddings generated from different graph
views, while the latter distinguishes different graph views better to capture
the complementary information between them. Specifically, we first apply a view
encoder to generate each graph view representation and utilize a multi-view
aggregator to fuse these representations. Then, we unify the two contrastive
objectives into one learning objective for training. Extensive experiments on
three real-world datasets show that CREME outperforms existing methods
consistently.
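The two objectives described in the abstract can be sketched with InfoNCE-style contrastive losses. The following is a minimal NumPy illustration of the idea only, not the authors' implementation: the function names, the temperature `tau`, and the `beta` weight balancing the two terms are assumptions, and the exact CREME formulation may differ.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def info_nce(anchor, positive, tau=0.5):
    # Node-to-node InfoNCE: for node i, the positive pair is
    # (anchor[i], positive[i]); all other nodes act as negatives.
    logits = cosine_sim(anchor, positive) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def creme_style_loss(view_embs, fused, tau=0.5, beta=1.0):
    # Multi-view fusion InfoMax (sketch): pull each view embedding of a
    # node toward the fused representation of the same node.
    infomax = sum(info_nce(z, fused, tau) for z in view_embs)
    # Inter-view InfoMin (sketch): push representations of different
    # views apart (note the minus sign: the pairwise contrastive term
    # is maximized) to retain complementary information between views.
    infomin = 0.0
    for i in range(len(view_embs)):
        for j in range(i + 1, len(view_embs)):
            infomin -= info_nce(view_embs[i], view_embs[j], tau)
    # The two objectives are unified into one training loss.
    return infomax + beta * infomin

# Usage with random per-view node embeddings and a mean-fused embedding:
rng = np.random.default_rng(0)
views = [rng.normal(size=(8, 16)) for _ in range(2)]
fused = sum(views) / len(views)
loss = creme_style_loss(views, fused)
```

In this sketch the "view encoder" outputs are stand-ins (`views`), and the "multi-view aggregator" is a simple mean; the paper's encoder and aggregator are learned networks.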
Related papers
- Masked Two-channel Decoupling Framework for Incomplete Multi-view Weak Multi-label Learning [21.49630640829186]
In this paper, we focus on the complex yet highly realistic task of incomplete multi-view weak multi-label learning.
We propose a masked two-channel decoupling framework based on deep neural networks to solve this problem.
Our model is fully adaptable to arbitrary view and label absences while also performing well on the ideal full data.
arXiv Detail & Related papers (2024-04-26T11:39:50Z)
- Visual Commonsense based Heterogeneous Graph Contrastive Learning [79.22206720896664]
We propose a heterogeneous graph contrastive learning method to better finish the visual reasoning task.
Our method is designed in a plug-and-play manner, so that it can be quickly and easily combined with a wide range of representative methods.
arXiv Detail & Related papers (2023-11-11T12:01:18Z)
- Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs), an emerging technique, have shown superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z)
- Learnable Graph Convolutional Network and Feature Fusion for Multi-view Learning [30.74535386745822]
This paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF)
It consists of two stages: feature fusion network and learnable graph convolutional network.
The proposed LGCN-FF is validated to be superior to various state-of-the-art methods in multi-view semi-supervised classification.
arXiv Detail & Related papers (2022-11-16T19:07:12Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture more abundant characteristics.
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning [48.09362183184101]
We propose a novel self-supervised approach to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.
Our method achieves new state-of-the-art results and surpasses some semi-supervised counterparts by large margins.
arXiv Detail & Related papers (2021-05-12T14:20:13Z)
- Multi-view Graph Learning by Joint Modeling of Consistency and Inconsistency [65.76554214664101]
Graph learning has emerged as a promising technique for multi-view clustering with its ability to learn a unified and robust graph from multiple views.
We propose a new multi-view graph learning framework, which for the first time simultaneously models multi-view consistency and multi-view inconsistency in a unified objective function.
Experiments on twelve multi-view datasets have demonstrated the robustness and efficiency of the proposed approach.
arXiv Detail & Related papers (2020-08-24T06:11:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.