Augmentation-Free Graph Contrastive Learning
- URL: http://arxiv.org/abs/2204.04874v1
- Date: Mon, 11 Apr 2022 05:37:03 GMT
- Title: Augmentation-Free Graph Contrastive Learning
- Authors: Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang
- Abstract summary: Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn representations that are invariant across different augmentation views.
We propose a novel, theoretically-principled, and augmentation-free GCL method, named AF-GCL, that leverages the features aggregated by a graph neural network to construct the self-supervision signal instead of augmentations.
- Score: 16.471928573824854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) is the most representative and prevalent
self-supervised learning approach for graph-structured data. Despite its
remarkable success, existing GCL methods rely heavily on an augmentation scheme
to learn representations that are invariant across different augmentation views. In
this work, we revisit this convention in GCL by examining the effect of
augmentation techniques on graph data through the lens of spectral theory. We find
that graph augmentations preserve the low-frequency components and perturb the
middle- and high-frequency components of the graph, which contributes to the
success of GCL algorithms on homophilic graphs but hinders their application to
heterophilic graphs, due to the high-frequency preference of heterophilic data.
Motivated by this, we propose a novel, theoretically-principled, and
augmentation-free GCL method, named AF-GCL, that (1) leverages the features
aggregated by a graph neural network to construct the self-supervision signal
instead of augmentations and therefore (2) is less sensitive to the graph
homophily degree. Theoretically, we present a performance guarantee for
AF-GCL as well as an analysis of its efficacy.
Extensive experiments on 14 benchmark datasets with varying degrees of
heterophily show that AF-GCL achieves competitive or better performance on
homophilic graphs and outperforms all existing state-of-the-art GCL methods on
heterophilic graphs with significantly less computational overhead.
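To make the core idea concrete, the snippet below is a minimal NumPy sketch of augmentation-free positive-pair construction in the spirit of AF-GCL: rather than contrasting two augmented views, each node takes as its positive the node whose GNN-aggregated features are most similar. The toy graph, the single parameter-free propagation step, and the InfoNCE-style loss are illustrative assumptions, not the authors' implementation.

```python
# Augmentation-free positive-pair selection, AF-GCL-style (illustrative
# sketch only; the toy graph and loss are assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: adjacency A with self-loops, raw node features X.
n, d = 8, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
X = rng.standard_normal((n, d))

# One symmetric-normalized aggregation step (stand-in for a GNN encoder).
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
H = D_inv_sqrt @ A @ D_inv_sqrt @ X

# Cosine similarity of aggregated features; exclude trivial self-pairs.
Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
sim = Hn @ Hn.T
np.fill_diagonal(sim, -np.inf)

# Self-supervision signal without augmentations: each node's positive is
# its nearest neighbor in the aggregated-feature space.
pos = sim.argmax(axis=1)

# Simple InfoNCE-style loss over the selected pairs.
tau = 0.5
logits = sim / tau                      # diagonal already masked to -inf
log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_prob[np.arange(n), pos].mean()
print(f"contrastive loss: {loss:.4f}")
```

Swapping the argmax for a top-k selection or stacking more propagation steps changes how local the self-supervision signal is; the actual selection rule and learned encoder are specified in the paper.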
Related papers
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework that expands the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
- Graph Contrastive Learning under Heterophily via Graph Filters [51.46061703680498]
Graph contrastive learning (CL) methods learn node representations in a self-supervised manner by maximizing the similarity between the augmented node representations obtained via a GNN-based encoder.
In this work, we propose an effective graph CL method, namely HLCL, for learning graph representations under heterophily.
Our extensive experiments show that HLCL outperforms state-of-the-art graph CL methods on benchmark datasets with heterophily, as well as large-scale real-world graphs, by up to 7%, and outperforms graph supervised learning methods on datasets with heterophily by up to 10%.
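For intuition about the filter-based views this summary refers to, here is a minimal sketch of low- and high-pass graph filtering; the toy graph and the single filtering step are illustrative assumptions, not HLCL's actual pipeline.

```python
# Low- vs. high-pass graph filters (illustrative sketch; not HLCL's code).
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                 # undirected
np.fill_diagonal(A, 1.0)               # self-loops
X = rng.standard_normal((n, d))

deg = A.sum(axis=1)
A_sym = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

low_pass = A_sym @ X                   # smooths features over neighbors
high_pass = X - A_sym @ X              # normalized-Laplacian filter; keeps
                                       # neighbor disagreements (useful under
                                       # heterophily)
print("low-pass energy :", float(np.linalg.norm(low_pass)))
print("high-pass energy:", float(np.linalg.norm(high_pass)))
```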
arXiv Detail & Related papers (2023-03-11T08:32:39Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z)
- Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum [91.06367395889514]
Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We address these questions by establishing a connection between GCL and the graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
arXiv Detail & Related papers (2022-10-05T15:32:00Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
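As a toy illustration of the neighborhood-ranking idea, the sketch below penalizes an anchor node whenever a 2-hop neighbor scores higher similarity than a 1-hop neighbor; the embeddings, hop sets, and margin loss are assumptions made for illustration, not GSCL's actual objective.

```python
# Neighborhood-ranking hinge loss (illustrative sketch; not GSCL's code).
import numpy as np

rng = np.random.default_rng(2)
n, d = 7, 5
Z = rng.standard_normal((n, d))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # unit-norm node embeddings
sim = Z @ Z.T

# Hypothetical hop sets for anchor node 0 (in practice from BFS on the graph).
one_hop, two_hop = [1, 2], [3, 4]

# Ranking constraint: every 1-hop neighbor should be more similar to the
# anchor than every 2-hop neighbor, by at least `margin`.
margin = 0.1
loss = sum(max(0.0, margin - sim[0, i] + sim[0, j])
           for i in one_hop for j in two_hop) / (len(one_hop) * len(two_hop))
print(f"ranking loss for node 0: {loss:.4f}")
```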
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
- ImGCL: Revisiting Graph Contrastive Learning on Imbalanced Node Classification [26.0350727426613]
Graph contrastive learning (GCL) has attracted a surge of attention due to its superior performance for learning node/graph representations without labels.
In practice, the underlying class distribution of unlabeled nodes for the given graph is usually imbalanced.
We propose a principled GCL framework on Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels.
arXiv Detail & Related papers (2022-05-23T14:23:36Z)
- Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training.
We experimentally validate AD-GCL by comparing it with state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.