SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples
- URL: http://arxiv.org/abs/2410.10365v1
- Date: Mon, 14 Oct 2024 10:39:38 GMT
- Title: SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples
- Authors: Yuntao Shou, Xiangyong Cao, Deyu Meng
- Abstract summary: Graph Contrastive Learning (GCL) excels at managing noise and fluctuations in input data, making it popular in various fields.
Most existing GCL methods focus mainly on the time domain (low-frequency information) for node feature representations.
We propose a novel spectral GCL framework without positive samples, named SpeGCL.
- Score: 44.315865532262876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Contrastive Learning (GCL) excels at managing noise and fluctuations in input data, making it popular in various fields (e.g., social networks and knowledge graphs). Our study finds that the difference in high-frequency information between augmented graphs is greater than that in low-frequency information. However, most existing GCL methods focus mainly on the time domain (low-frequency information) for node feature representations and cannot make good use of high-frequency information to speed up model convergence. Furthermore, existing GCL paradigms optimize graph embeddings by pulling positive sample pairs closer together and pushing positive and negative sample pairs farther apart, but our theoretical analysis shows that graph contrastive learning benefits from pushing negative pairs farther away rather than pulling positive pairs closer. To solve these problems, we propose a novel spectral GCL framework without positive samples, named SpeGCL. Specifically, to address the inability of existing GCL methods to utilize high-frequency information, SpeGCL uses a Fourier transform to extract the high-frequency and low-frequency components of node features, and constructs a contrastive learning mechanism in Fourier space to obtain better node feature representations. Furthermore, SpeGCL relies entirely on negative samples to refine the graph embedding. We also provide a theoretical justification for the efficacy of using only negative samples in SpeGCL. Extensive experiments on unsupervised learning, transfer learning, and semi-supervised learning validate the superiority of our SpeGCL framework over state-of-the-art GCL methods.
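To make the two ingredients above concrete (a Fourier split of node features and a contrastive objective built from negative pairs only), here is a minimal PyTorch sketch. The function names, the frequency cutoff, and the exact loss form are our own assumptions for illustration, not the released SpeGCL code.

```python
import torch
import torch.nn.functional as F

def split_spectrum(x: torch.Tensor, cutoff: float = 0.5):
    """Split node features into low- and high-frequency parts with an
    FFT along the feature dimension. x: (num_nodes, dim)."""
    spec = torch.fft.rfft(x, dim=-1)              # one-sided spectrum per node
    k = int(cutoff * spec.size(-1))               # hypothetical cutoff index
    low_spec = spec.clone()
    low_spec[..., k:] = 0                         # keep only the lowest bins
    high_spec = spec - low_spec                   # remainder = high frequencies
    low = torch.fft.irfft(low_spec, n=x.size(-1), dim=-1)
    high = torch.fft.irfft(high_spec, n=x.size(-1), dim=-1)
    return low, high

def negative_only_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """Positive-free contrastive objective: every cross-view pair (i, j)
    with i != j is a negative, and the loss only pushes them apart."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    sim = torch.exp(z1 @ z2.t() / tau)            # (N, N) cross-view similarity
    neg = sim.sum(dim=1) - sim.diagonal()         # drop (i, i): no positive term
    return torch.log(neg).mean()                  # minimizing this repels negatives
```

In a full pipeline, the high-frequency parts of two augmented views would be encoded by a GNN and the resulting embeddings passed to negative_only_loss; note that the diagonal pair (i, i), which a standard InfoNCE loss would treat as the positive, is simply dropped.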
Related papers
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs (a minimal sketch of this idea appears after this list).
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z) - Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning [34.566003077992384]
We present a systematic study of various graph contrastive learning (GCL) methods.
By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we provide theoretical insights into several intriguing properties of GCL.
Rather than directly porting existing NN methods to GCL, we advocate for more attention toward the unique architecture of graph learning.
arXiv Detail & Related papers (2023-11-05T15:54:17Z) - HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z) - LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum [91.06367395889514]
Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We answer open questions about what augmentations contribute by establishing the connection between GCL and the graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
arXiv Detail & Related papers (2022-10-05T15:32:00Z) - FastGCL: Fast Self-Supervised Learning on Graphs via Contrastive Neighborhood Aggregation [26.07819501316758]
We argue that a better contrastive scheme should be tailored to the characteristics of graph neural networks.
By constructing weighted-aggregated and non-aggregated neighborhood information as positive and negative samples respectively, FastGCL captures the latent semantic information of the data (see the sketch after this list).
Experiments have been conducted on node classification and graph classification tasks, showing that FastGCL has competitive classification performance and significant training speedup.
arXiv Detail & Related papers (2022-05-02T13:33:43Z) - Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically-principled, and augmentation-free GCL method, named AF-GCL, that leverages the features aggregated by Graph Neural Networks to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
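The SGCL entry above describes using the outputs of two consecutive iterations as positive pairs; the sketch below is our own reading of that one-line summary, with the encoder, loss, and caching scheme assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def sgcl_step(encoder, graph, prev_z=None):
    """One hypothetical SGCL-style training step: the embedding from the
    previous iteration acts as the positive target for the current one.
    The summary above mentions only positive pairs, so no negatives appear."""
    z = encoder(graph)                            # (num_nodes, dim) current output
    loss = None
    if prev_z is not None:
        # Pull each node toward its own embedding from the last iteration.
        loss = -F.cosine_similarity(z, prev_z, dim=-1).mean()
    return loss, z.detach()                       # cache z as the next positive
```

A training loop would call sgcl_step repeatedly, feeding back the cached embedding; the first call returns no loss because there is no previous iteration yet.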
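Likewise, the FastGCL entry describes contrasting weighted-aggregated against non-aggregated neighborhood information, as sketched below under the assumption of a weighted COO edge list; all names are hypothetical.

```python
import torch

def fastgcl_views(x: torch.Tensor, edge_index: torch.Tensor,
                  edge_weight: torch.Tensor):
    """Build FastGCL-style views: a weighted neighborhood aggregation as the
    positive sample and the raw, non-aggregated features as the negative.

    x: (num_nodes, dim) node features.
    edge_index: (2, num_edges) COO edge list (source row, target row).
    edge_weight: (num_edges,) per-edge weights.
    """
    row, col = edge_index[0], edge_index[1]
    agg = torch.zeros_like(x)
    # Weighted sum over each node's incoming neighbors
    # (degree normalization is omitted for brevity).
    agg.index_add_(0, col, edge_weight.unsqueeze(-1) * x[row])
    return agg, x                                 # (positive view, negative view)
```

A contrastive loss would then pull each node's embedding toward its aggregated view and push it away from the non-aggregated one.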