Rethinking and Simplifying Bootstrapped Graph Latents
- URL: http://arxiv.org/abs/2312.02619v1
- Date: Tue, 5 Dec 2023 09:49:50 GMT
- Title: Rethinking and Simplifying Bootstrapped Graph Latents
- Authors: Wangbin Sun, Jintang Li, Liang Chen, Bingzhe Wu, Yatao Bian, Zibin Zheng
- Abstract summary: Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
- Score: 48.76934123429186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) has emerged as a representative paradigm in
graph self-supervised learning, where negative samples are commonly regarded as
the key to preventing model collapse and producing distinguishable
representations. Recent studies have shown that GCL without negative samples
can achieve state-of-the-art performance as well as scalability improvement,
with bootstrapped graph latent (BGRL) as a prominent step forward. However,
BGRL relies on a complex architecture to maintain the ability to scatter
representations, and the underlying mechanisms enabling the success remain
largely unexplored. In this paper, we introduce an instance-level decorrelation
perspective to tackle the aforementioned issue and leverage it as a springboard
to reveal the potential unnecessary model complexity within BGRL. Based on our
findings, we present SGCL, a simple yet effective GCL framework that utilizes
the outputs from two consecutive iterations as positive pairs, eliminating the
negative samples. SGCL only requires a single graph augmentation and a single
graph encoder without additional parameters. Extensive experiments conducted on
various graph benchmarks demonstrate that SGCL can achieve competitive
performance with fewer parameters, lower time and space costs, and significant
convergence speedup.
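To make the training scheme concrete, here is a minimal PyTorch-style sketch of the consecutive-iteration pairing described above. The encoder, the augmentation, and the cosine-similarity objective are illustrative assumptions for the sketch, not the authors' exact implementation.

```python
# Minimal sketch of an SGCL-style training step (illustrative assumptions:
# the encoder, augmentation, and cosine objective are placeholders, not the
# paper's exact implementation).
import torch
import torch.nn.functional as F

def sgcl_step(encoder, optimizer, x, edge_index, prev_z, augment):
    """One iteration: the stored embeddings from the previous iteration act
    as the positive targets for the current embeddings."""
    x_aug, ei_aug = augment(x, edge_index)   # a single graph augmentation
    z = encoder(x_aug, ei_aug)               # a single encoder, no extra heads
    if prev_z is not None:
        # Pull each node's current embedding toward its embedding from the
        # previous iteration; no negative samples are used.
        loss = -F.cosine_similarity(z, prev_z.detach(), dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return z.detach()  # reused as the positive pair in the next iteration
```

Note that no target network, predictor head, or negative sampling appears in the loop; `encoder` and `augment` are hypothetical placeholders (e.g., a GCN and edge dropout).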
Related papers
- SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples [44.315865532262876]
Graph Contrastive Learning (GCL) excels at managing noise and fluctuations in input data, making it popular in various fields.
Most existing GCL methods focus mainly on the time domain (low-frequency information) for node feature representations.
We propose a novel spectral GCL framework without positive samples, named SpeGCL.
arXiv Detail & Related papers (2024-10-14T10:39:38Z)
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
- LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL); a rough sketch of the single-pass idea appears after this list.
Empirically, the features learned by SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods already exhibit less degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes (see the sketch after this list).
arXiv Detail & Related papers (2022-10-06T15:58:25Z)
- Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically principled, and augmentation-free GCL method, named AF-GCL, that leverages the features aggregated by a Graph Neural Network to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
- An Empirical Study of Graph Contrastive Learning [17.246488437677616]
Graph Contrastive Learning establishes a new paradigm for learning graph representations without human annotations.
We identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques.
To foster future research and ease the implementation of GCL algorithms, we develop an easy-to-use library PyGCL, featuring modularized CL components, standardized evaluation, and experiment management.
arXiv Detail & Related papers (2021-09-02T17:43:45Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
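As referenced in the SP-GCL entry above, the following is a rough sketch of a contrastive loss computed from a single forward pass. Choosing each node's k most similar embeddings as positives is an illustrative simplification, not the paper's exact selection criterion.

```python
# Rough sketch of a single-pass contrastive loss (the top-k positive
# selection is an assumed simplification, not SP-GCL's exact criterion).
import torch
import torch.nn.functional as F

def single_pass_contrastive_loss(z, k=5, tau=0.5):
    """Compute a contrastive loss from ONE set of embeddings z (N x d):
    each node's k most similar nodes act as positives, the rest as negatives."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                  # pairwise similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))              # exclude self-pairs
    pos_idx = sim.topk(k, dim=-1).indices                  # assumed positives
    log_prob = F.log_softmax(sim, dim=-1)                  # softmax over all nodes
    return -log_prob.gather(-1, pos_idx).mean()            # maximize positive mass
```

Because positives and negatives both come from the same embeddings, only one encoder forward pass per step is needed, which is where the computational savings come from.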
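Similarly, for the GRADE entry above, here is a sketch of degree-aware edge dropping. Using a higher drop probability for edges incident to high-degree nodes is an assumed instantiation of "different strategies for low- and high-degree nodes", not the paper's actual rule.

```python
# Sketch of degree-aware edge dropping (the drop probabilities and the
# median split are assumptions; GRADE's per-degree strategies differ in detail).
import torch

def degree_aware_edge_drop(edge_index, num_nodes, p_low=0.1, p_high=0.4):
    """Drop edges with probability p_high if either endpoint has above-median
    degree, and p_low otherwise, sparing the neighborhoods of low-degree nodes."""
    deg = torch.bincount(edge_index[0], minlength=num_nodes)
    high = deg > deg.float().median()                      # high-degree nodes
    src, dst = edge_index
    p = torch.where(high[src] | high[dst],                 # per-edge drop prob.
                    torch.full_like(src, p_high, dtype=torch.float),
                    torch.full_like(src, p_low, dtype=torch.float))
    keep = torch.rand(src.size(0), device=src.device) >= p # Bernoulli keep mask
    return edge_index[:, keep]
```

Perturbing well-connected regions more aggressively is one simple way to spend the augmentation budget where degree bias is least harmful; the actual method is more involved.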
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.