GraphCoCo: Graph Complementary Contrastive Learning
- URL: http://arxiv.org/abs/2203.12821v1
- Date: Thu, 24 Mar 2022 02:58:36 GMT
- Title: GraphCoCo: Graph Complementary Contrastive Learning
- Authors: Jiawei Sun, Junchi Yan, Chentao Wu, Yue Ding, Ruoxin Chen, Xiang Yu,
Xinyu Lu, Jie Li
- Abstract summary: Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations. However, optimizing the InfoNCE loss concentrates on only a few embedding dimensions, limiting the distinguishability of the learned embeddings. This paper proposes GraphCoCo, an effective graph complementary contrastive learning approach, to tackle this issue.
- Score: 65.89743197355722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Contrastive Learning (GCL) has shown promising performance in graph
representation learning (GRL) without the supervision of manual annotations.
GCL can generate graph-level embeddings by maximizing the Mutual Information
(MI) between different augmented views of the same graph (positive pairs).
However, we identify an obstacle: the optimization of the InfoNCE loss
concentrates on only a few embedding dimensions, limiting the
distinguishability of the embeddings in downstream graph classification tasks.
This paper proposes an effective graph complementary contrastive learning
approach named GraphCoCo to tackle this issue. Specifically, we take the
embedding of the first augmented view as the anchor embedding and localize its
"highlighted" dimensions (i.e., the dimensions that contribute most to the
similarity measurement). We then remove these dimensions from the embedding of
the second augmented view to discover the neglected, complementary
representations. The combination of the anchor and complementary embeddings
therefore significantly improves performance on downstream tasks.
Comprehensive experiments on various benchmark datasets are
conducted to demonstrate the effectiveness of GraphCoCo, and the results show
that our model outperforms the state-of-the-art methods. Source code will be
made publicly available.
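The anchor-and-mask procedure described in the abstract maps naturally to a few lines of code. Below is a minimal PyTorch sketch of one literal reading of that description, not the paper's actual implementation: the names (`info_nce`, `mask_top_dims`, `graphcoco_loss`), the top-k masking rule, detaching the anchor, and summing the two InfoNCE terms are all assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    """Standard InfoNCE over a batch of paired graph embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # [B, B] cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)     # positives on the diagonal

def mask_top_dims(anchor, other, k):
    """Zero out, in `other`, the k "highlighted" dimensions, i.e. those
    that contribute most to the anchor/other similarity."""
    contrib = F.normalize(anchor, dim=1) * F.normalize(other, dim=1)
    idx = contrib.topk(k, dim=1).indices       # per-sample highlighted dims
    mask = torch.ones_like(other).scatter_(1, idx, 0.0)
    return other * mask                        # complementary embedding

def graphcoco_loss(z1, z2, k=64, tau=0.2):
    anchor = z1.detach()                       # first view as the anchor
    z2_comp = mask_top_dims(anchor, z2, k)     # neglected complementary dims
    # Contrast the anchor view against both the full second view and its
    # masked version, so training also rewards the dimensions that plain
    # InfoNCE would otherwise ignore.
    return info_nce(z1, z2, tau) + info_nce(z1, z2_comp, tau)
```

For downstream classification, the abstract's "combination of anchor and complementary embeddings" could be as simple as concatenating `z1` and `z2_comp` before the classifier, though the paper may combine them differently.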
Related papers
- Two Trades is not Baffled: Condensing Graph via Crafting Rational Gradient Matching [50.30124426442228]
Training on large-scale graphs has achieved remarkable results in graph representation learning, but its cost and storage requirements have raised growing concerns.
We propose a novel graph method named CrafTing RationaL (CTRL), which offers an optimized starting point closer to the original dataset's feature distribution.
arXiv Detail & Related papers (2024-02-07T14:49:10Z)
- Subgraph Networks Based Contrastive Learning [5.736011243152416]
Graph contrastive learning (GCL) can solve the problem of annotated data scarcity.
Most existing GCL methods focus on the design of graph augmentation strategies and mutual information estimation operations.
We propose a novel framework called subgraph network-based contrastive learning (SGNCL).
arXiv Detail & Related papers (2023-06-06T08:52:44Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
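As a rough illustration of what "augmentations in latent space" can mean here, the sketch below draws two stochastic views from a VGAE-style posterior instead of perturbing the input graph; `mu_head` and `logvar_head` are hypothetical projection modules, not iGCL's actual components.

```python
import torch

def latent_views(h, mu_head, logvar_head):
    """Sample two latent-space 'views' from a VGAE-style posterior q(z | G),
    where h holds node (or graph) representations from a shared encoder."""
    mu, logvar = mu_head(h), logvar_head(h)
    std = (0.5 * logvar).exp()
    z1 = mu + std * torch.randn_like(std)   # first implicit augmentation
    z2 = mu + std * torch.randn_like(std)   # second implicit augmentation
    return z1, z2                           # treat as a positive pair
```

The two samples can then be contrasted with an InfoNCE-style objective, as in the sketch after the abstract above.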
arXiv Detail & Related papers (2022-11-07T17:34:07Z)
- Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming (G-Zoom).
This zooming mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z)
- CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations [12.820228374977441]
We propose a novel Collaborative Graph Contrastive Learning framework (CGCL), which harnesses multiple graph encoders to observe the graph.
To ensure the collaboration among diverse graph encoders, we propose the concepts of asymmetric architecture and complementary encoders.
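One plausible reading of "complementary encoders" is sketched below: two structurally different GNN encoders observe the same unaugmented graph, and their outputs serve as the two contrastive views. The class name and the encoder choices are assumptions for illustration, not CGCL's actual modules.

```python
import torch.nn as nn

class CollaborativeViews(nn.Module):
    """Two asymmetric encoders produce the contrastive views of one graph,
    so no handcrafted graph augmentation is required."""
    def __init__(self, encoder_a: nn.Module, encoder_b: nn.Module):
        super().__init__()
        self.encoder_a = encoder_a   # e.g. a GCN-style encoder (assumption)
        self.encoder_b = encoder_b   # e.g. a GIN-style encoder (assumption)

    def forward(self, graph):
        z1 = self.encoder_a(graph)   # view 1 of the same, unaugmented graph
        z2 = self.encoder_b(graph)   # view 2 from the complementary encoder
        return z1, z2                # treat (z1, z2) as a positive pair
```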
arXiv Detail & Related papers (2021-11-05T05:08:27Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between the graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
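Since the Schatten p-norm is the crux of that objective, a brief definition may help. For a matrix $X$ with singular values $\sigma_1 \ge \sigma_2 \ge \dots$,

$$\|X\|_{S_p} = \Big( \sum_i \sigma_i^p \Big)^{1/p}, \qquad 0 < p \le 1.$$

Setting $p = 1$ recovers the nuclear norm, and letting $p \to 0$ gives a tighter surrogate for the rank, so minimizing this norm over the stacked view-specific graphs encourages them to share a common low-rank structure. The tensor extension in this line of work is typically defined through the singular values of a t-SVD; that is an assumption here, since the summary above does not spell out the exact definition used.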
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)