Architecture Matters: Uncovering Implicit Mechanisms in Graph
Contrastive Learning
- URL: http://arxiv.org/abs/2311.02687v1
- Date: Sun, 5 Nov 2023 15:54:17 GMT
- Title: Architecture Matters: Uncovering Implicit Mechanisms in Graph
Contrastive Learning
- Authors: Xiaojun Guo, Yifei Wang, Zeming Wei, Yisen Wang
- Abstract summary: We present a systematic study of various graph contrastive learning (GCL) methods.
By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we theoretically provide insights into the above intriguing properties of GCL.
Rather than directly porting existing VCL methods to GCL, we advocate for more attention toward the unique architecture of graph learning.
- Score: 34.566003077992384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the prosperity of contrastive learning for visual representation
learning (VCL), it is also adapted to the graph domain and yields promising
performance. However, through a systematic study of various graph contrastive
learning (GCL) methods, we observe several common phenomena among existing
GCL methods that are quite different from the original VCL methods, including:
1) positive samples are not a must for GCL; 2) negative samples are not
necessary for graph classification, nor for node classification when
adopting specific normalization modules; 3) data augmentations have much less
influence on GCL, as simple domain-agnostic augmentations (e.g., Gaussian
noise) can also attain fairly good performance. By uncovering how the implicit
inductive bias of GNNs works in contrastive learning, we theoretically provide
insights into the above intriguing properties of GCL. Rather than directly
porting existing VCL methods to GCL, we advocate for more attention toward the
unique architecture of graph learning and consider its implicit influence when
designing GCL methods. Code is available at https://github.com/PKU-ML/ArchitectureMattersGCL.
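To make these observations concrete, the following is a minimal sketch in PyTorch (not the authors' released code; see the repository above for that): a toy two-layer GCN encoder trained with the domain-agnostic Gaussian-noise augmentation from observation 3 and a negative-free alignment loss in the spirit of observations 1 and 2. All names here (GCNEncoder, gaussian_augment, alignment_loss) are illustrative assumptions.

```python
# Minimal illustrative sketch -- NOT the paper's released implementation.
import torch
import torch.nn.functional as F

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCNEncoder(torch.nn.Module):
    """Toy two-layer GCN: the message-passing architecture whose implicit
    inductive bias the paper argues drives GCL's behavior."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)

def gaussian_augment(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Observation 3: a simple domain-agnostic augmentation (Gaussian noise)."""
    return x + sigma * torch.randn_like(x)

def alignment_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Negative-free objective (cf. observations 1 and 2): pull the two
    views of each node together; no negative pairs are used."""
    return (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()

# Toy usage on a random undirected graph.
n, d = 32, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()      # symmetrize
adj_norm = normalize_adj(adj)

encoder = GCNEncoder(d, 32, 16)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(5):
    z1 = encoder(gaussian_augment(x), adj_norm)
    z2 = encoder(gaussian_augment(x), adj_norm)
    loss = alignment_loss(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that a bare alignment loss can collapse to a constant embedding; as the abstract notes, practical negative-free GCL relies on specific normalization modules (or, in other methods, stop-gradients) to prevent this. A sketch of the standard negatives-based InfoNCE objective, which several of the related papers below build on, follows the related-papers list.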
Related papers
- L^2CL: Embarrassingly Simple Layer-to-Layer Contrastive Learning for Graph Collaborative Filtering [33.165094795515785]
Graph neural networks (GNNs) have recently emerged as an effective approach to model neighborhood signals in collaborative filtering.
We propose L2CL, a principled Layer-to-Layer Contrastive Learning framework that contrasts representations from different layers.
We find that L2CL, using only a one-hop contrastive learning paradigm, is able to capture intrinsic semantic structures and improve the quality of node representations.
arXiv Detail & Related papers (2024-07-19T12:45:21Z)
- Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning [37.0788516033498]
We propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning framework.
We empirically demonstrate that the proposed method is universal to multiple state-of-the-art GCL models.
Our method beats the state-of-the-art method by 0.23% in the unsupervised representation learning setting.
arXiv Detail & Related papers (2023-12-21T14:07:46Z)
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and a significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z)
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
- CARL-G: Clustering-Accelerated Representation Learning on Graphs [18.763104937800215]
We propose a novel clustering-based framework for graph representation learning that uses a loss inspired by Cluster Validation Indices (CVIs).
CARL-G is adaptable to different clustering methods and CVIs, and we show that with the right choice of clustering method and CVI, CARL-G outperforms node classification baselines on four of five datasets, with up to a 79x training speedup compared to the best-performing baseline.
arXiv Detail & Related papers (2023-06-12T08:14:42Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
Despite its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum [91.06367395889514]
Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We answer these questions by establishing a connection between GCL and the graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
arXiv Detail & Related papers (2022-10-05T15:32:00Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
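For contrast with the negative-free sketch above, here is a minimal sketch of the standard two-view InfoNCE objective that many of the methods listed above start from and then modify (e.g., which pairs count as positives or negatives). It is the generic textbook form, not any single paper's implementation.

```python
# Generic two-view InfoNCE -- a common GCL starting point, not any one paper's loss.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Node i in view 1 is positive with node i in view 2; all other
    nodes in view 2 serve as negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = (z1 @ z2.t()) / tau             # (n, n) cosine similarities
    labels = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Example: eight nodes with 16-dimensional embeddings from two augmented views.
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
print(info_nce(z1, z2).item())
```

The related papers differ mainly in how they construct the two views and which entries of this similarity matrix they keep as positives or negatives.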
This list is automatically generated from the titles and abstracts of the papers on this site.