Revisiting Graph Contrastive Learning from the Perspective of Graph
Spectrum
- URL: http://arxiv.org/abs/2210.02330v1
- Date: Wed, 5 Oct 2022 15:32:00 GMT
- Title: Revisiting Graph Contrastive Learning from the Perspective of Graph
Spectrum
- Authors: Nian Liu, Xiao Wang, Deyu Bo, Chuan Shi, Jian Pei
- Abstract summary: Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We answer these questions by establishing the connection between GCL and graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
- Score: 91.06367395889514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Contrastive Learning (GCL), which learns node representations
by augmenting graphs, has attracted considerable attention. Despite the
proliferation of various graph augmentation strategies, some fundamental
questions still remain unclear: what information is essentially encoded into
the learned representations by GCL? Are there some general graph augmentation
rules behind different augmentations? If so, what are they and what insights
can they bring? In this paper, we answer these questions by establishing the
connection between GCL and the graph spectrum. Through an experimental
investigation in the spectral domain, we first find the General grAph
augMEntation (GAME) rule for
GCL, i.e., the difference of the high-frequency parts between two augmented
graphs should be larger than that of the low-frequency parts. This rule reveals
a fundamental principle for revisiting current graph augmentations and for
designing new, effective ones. We then theoretically prove, via a contrastive
invariance theorem, that GCL is able to learn invariant information. Together
with our GAME rule, this uncovers for the first time that the representations
learned by GCL essentially encode low-frequency information, which explains
why GCL works. Guided by this rule, we propose a spectral graph
contrastive learning module (SpCo), which is a general and GCL-friendly
plug-in. We combine it with different existing GCL models, and extensive
experiments demonstrate that it further improves the performance of a wide
variety of GCL methods.
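To make the GAME rule concrete, here is a minimal sketch of how one might check it numerically. It is not the paper's exact measurement protocol: the assumptions here are that spectra are measured in the eigenbasis of the original graph's symmetric normalized Laplacian, that frequencies are split at the median eigenvalue, and that random edge dropping serves as the augmentation.

```python
# Minimal sketch of checking the GAME rule on two augmented views of a graph.
# Assumptions (not the paper's exact protocol): spectra are measured in the
# eigenbasis of the ORIGINAL graph's normalized Laplacian, frequencies are
# split at the median eigenvalue, and augmentation is random edge dropping.
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def drop_edges(A, p, rng):
    """Randomly drop a fraction p of edges, keeping the graph undirected."""
    keep = np.triu(rng.random(A.shape) > p, 1)
    return A * (keep + keep.T)

rng = np.random.default_rng(0)
n = 30
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                    # toy undirected graph

lam, U = np.linalg.eigh(normalized_laplacian(A))
A1, A2 = drop_edges(A, 0.2, rng), drop_edges(A, 0.2, rng)

# Spectral response of each augmented adjacency in the original eigenbasis.
s1, s2 = np.diag(U.T @ A1 @ U), np.diag(U.T @ A2 @ U)

low = lam <= np.median(lam)                    # low-frequency components
diff_low = np.abs(s1 - s2)[low].sum()
diff_high = np.abs(s1 - s2)[~low].sum()
# GAME rule: an effective augmentation pair should satisfy diff_high > diff_low.
print(f"low-frequency diff = {diff_low:.3f}, high-frequency diff = {diff_high:.3f}")
```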
Related papers
- Community-Invariant Graph Contrastive Learning [21.72222875193335]
This research investigates the role of the graph community in graph augmentation.
We propose a community-invariant GCL framework to maintain graph community structure during learnable graph augmentation.
arXiv Detail & Related papers (2024-05-02T14:59:58Z)
- Graph Contrastive Learning with Cohesive Subgraph Awareness [34.76555185419192]
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs.
We argue that an awareness of subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance.
We propose a novel unified framework called CTAug, to seamlessly integrate cohesion awareness into various existing GCL mechanisms.
arXiv Detail & Related papers (2024-01-31T03:51:30Z)
- Graph Contrastive Invariant Learning from the Causal Perspective [41.62549508842019]
Graph contrastive learning (GCL), which learns node representations by contrasting two augmented graphs in a self-supervised way, has attracted considerable attention.
In this paper, we first study GCL from the perspective of causality.
By analyzing GCL with the structural causal model (SCM), we discover that traditional GCL may not well learn the invariant representations due to the non-causal information contained in the graph.
arXiv Detail & Related papers (2024-01-23T08:47:28Z)
- Provable Training for Graph Contrastive Learning [58.8128675529977]
Graph Contrastive Learning (GCL) has emerged as a popular training approach for learning node embeddings from augmented graphs without labels.
We show that the training of GCL is indeed imbalanced across all nodes.
We propose the metric "node compactness", which is the lower bound of how a node follows the GCL principle.
arXiv Detail & Related papers (2023-09-25T08:23:53Z)
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
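A hedged reading of the HomoGCL idea in code: besides the anchor node in the other view, graph neighbors enter the InfoNCE numerator as extra positives with per-neighbor weights. The cosine-similarity weights below are an illustrative stand-in for the neighbor-specific significances the paper actually estimates.

```python
# Hedged sketch of the HomoGCL idea: besides the same node in the other view,
# treat graph neighbors as additional positives, each carrying a significance
# weight. The cosine-similarity weights here are an illustrative stand-in;
# HomoGCL estimates these saliencies differently.
import numpy as np

def neighbor_expanded_infonce(Z1, Z2, A, tau=0.5):
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = np.exp(Z1 @ Z2.T / tau)             # cross-view similarity matrix
    w = A * np.clip(Z1 @ Z1.T, 0.0, None)     # assumed neighbor significances
    pos = np.diag(sim) + (w * sim).sum(1)     # self positive + weighted neighbors
    return float(-np.log(pos / sim.sum(1)).mean())
```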
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to augment training graph structures with techniques like graph augmentations.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
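The Mixture-of-Experts concept for GNNs can be sketched briefly: a learned per-node gate mixes several expert aggregators, here simply propagation at different hop counts. This illustrates the concept only; the paper's experts, gating, and sparsification details differ.

```python
# Simplified sketch of a graph Mixture-of-Experts layer: a per-node gate mixes
# experts that aggregate at different hop counts. Illustrative only.
import numpy as np

def moe_gnn_layer(A_hat, X, W_gate, expert_hops=(1, 2, 3)):
    experts = []
    for k in expert_hops:
        H = X
        for _ in range(k):                    # expert k: k-hop propagation
            H = A_hat @ H
        experts.append(H)
    logits = X @ W_gate                       # per-node gating over experts
    gate = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    return sum(gate[:, [i]] * experts[i] for i in range(len(experts)))
```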
- MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning [41.963242524220654]
We present three easy-to-implement model augmentation tricks for graph contrastive learning (GCL), namely asymmetric, random, and shuffling.
Experimental results show that MA-GCL can achieve state-of-the-art performance on node classification benchmarks.
arXiv Detail & Related papers (2022-12-14T05:04:10Z)
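A rough sketch of model augmentation as summarized above: the two contrastive views come from perturbed encoder configurations rather than perturbed graphs. "Asymmetric" is read here as different propagation depths per view, "random" as resampling depths each epoch, and "shuffling" as permuting transformation layers; all three are simplified relative to the paper's operators.

```python
# Hedged sketch of MA-GCL-style model augmentation: the two views share
# parameters but use perturbed encoder configurations. Simplified relative
# to the paper's propagation/transformation operators.
import numpy as np

def encode(A_hat, X, Ws, depth, order):
    H = X
    for i in order:                           # "shuffling": permuted layer order
        H = np.tanh(H @ Ws[i])
    for _ in range(depth):                    # "asymmetric"/"random": per-view depth
        H = A_hat @ H
    return H

rng = np.random.default_rng(0)
# Per epoch: resample depths ("random") and layer orders ("shuffling");
# typically d1 != d2 ("asymmetric").
d1, d2 = rng.integers(1, 5, size=2)
order1, order2 = rng.permutation(3), rng.permutation(3)
# Z1 = encode(A_hat, X, Ws, d1, order1); Z2 = encode(A_hat, X, Ws, d2, order2)
```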
- Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically-principled, and augmentation-free GCL, named AF-GCL, that leverages the features aggregated by Graph Neural Network to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
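One way to read the augmentation-free signal in code: mine each node's positives from GNN-aggregated features instead of from augmented views. The top-k cosine rule below is an assumption for illustration, not the paper's exact selection criterion.

```python
# Hedged sketch of an augmentation-free self-supervision signal: mine each
# node's positives as the k most similar nodes under GNN-aggregated features.
# The top-k cosine rule is an illustrative assumption.
import numpy as np

def mine_positives(A_hat, X, k=4, hops=2):
    H = X
    for _ in range(hops):                     # simple feature aggregation
        H = A_hat @ H
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = H @ H.T
    np.fill_diagonal(sim, -np.inf)            # exclude self
    return np.argsort(-sim, axis=1)[:, :k]    # indices of k positives per node
```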
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
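GraphCL builds its contrastive views from graph data augmentations; the paper uses node dropping, edge perturbation, attribute masking, and subgraph sampling. Two of these are sketched below in simplified form; the drop ratios and protocols here are illustrative.

```python
# Minimal sketches of two GraphCL augmentations (node dropping and edge
# perturbation); attribute masking and subgraph sampling are the other two.
import numpy as np

def drop_nodes(A, X, ratio, rng):
    keep = rng.random(len(A)) > ratio         # drop a fraction of nodes
    return A[np.ix_(keep, keep)], X[keep]

def perturb_edges(A, ratio, rng):
    iu = np.triu_indices_from(A, k=1)
    flip = rng.random(len(iu[0])) < ratio     # flip a fraction of node pairs
    A2 = A.copy()
    A2[iu[0][flip], iu[1][flip]] = 1 - A2[iu[0][flip], iu[1][flip]]
    U = np.triu(A2, 1)
    return U + U.T                            # keep the graph undirected
```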
- Data Augmentation View on Graph Convolutional Network and the Proposal of Monte Carlo Graph Learning [51.03995934179918]
We introduce a data augmentation view of graph convolutional networks, which is more transparent than previous understandings.
Inspired by it, we propose a new graph learning paradigm -- Monte Carlo Graph Learning (MCGL).
We show that MCGL's tolerance to graph structure noise is weaker than GCN's on noisy graphs.
arXiv Detail & Related papers (2020-06-23T15:25:05Z)