Single-Pass Contrastive Learning Can Work for Both Homophilic and
Heterophilic Graphs
- URL: http://arxiv.org/abs/2211.10890v4
- Date: Mon, 20 Nov 2023 05:25:23 GMT
- Title: Single-Pass Contrastive Learning Can Work for Both Homophilic and
Heterophilic Graphs
- Authors: Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang, Kenji Kawaguchi, Xiaokui
Xiao
- Abstract summary: Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
- Score: 60.28340453547902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing graph contrastive learning (GCL) techniques typically require two
forward passes for a single instance to construct the contrastive loss, which
is effective for capturing the low-frequency signals of node features. Such a
dual-pass design has shown empirical success on homophilic graphs, but its
effectiveness on heterophilic graphs, where directly connected nodes typically
have different labels, is unknown. In addition, existing GCL approaches fail to
provide strong performance guarantees. Coupled with the unpredictability of GCL
approaches on heterophilic graphs, their applicability in real-world contexts
is limited. Then, a natural question arises: Can we design a GCL method that
works for both homophilic and heterophilic graphs with a performance guarantee?
To answer this question, we theoretically study the concentration property of
features obtained by neighborhood aggregation on homophilic and heterophilic
graphs, introduce the single-pass augmentation-free graph contrastive learning
loss based on the property, and provide performance guarantees for the
minimizer of the loss on downstream tasks. As a direct consequence of our
analysis, we implement the Single-Pass Graph Contrastive Learning method
(SP-GCL). Empirically, on 14 benchmark datasets with varying degrees of
homophily, the features learned by the SP-GCL can match or outperform existing
strong baselines with significantly less computational overhead, which
demonstrates the usefulness of our findings in real-world cases.
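The abstract's core recipe (a single forward pass, no augmentations, positives derived from the concentration of neighborhood-aggregated features) can be sketched roughly as follows. The mean aggregation, top-k positive selection, and InfoNCE-style objective below are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def aggregate(X, A):
    """Mean neighborhood aggregation, standing in for a one-layer GNN encoder."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.maximum(deg, 1)

def single_pass_loss(Z, k=2, tau=0.5):
    """Augmentation-free InfoNCE-style loss from one forward pass:
    each node's k most similar embeddings serve as its positives."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    pos = np.argsort(-sim, axis=1)[:, :k]     # indices of top-k positives
    logits = sim / tau
    log_norm = np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_prob = logits - log_norm
    return -np.take_along_axis(log_prob, pos, axis=1).mean()

# Toy graph: two connected pairs of nodes with similar features
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
Z = aggregate(X, A)
loss = single_pass_loss(Z, k=1)
```

Because positives come from the embeddings of one pass rather than a second augmented view, the second encoder pass of dual-pass GCL is avoided entirely, which is where the computational savings come from.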
Related papers
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
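The consecutive-iteration pairing that SGCL's summary describes can be sketched as follows; the InfoNCE-style objective and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def consecutive_iteration_loss(z_prev, z_curr, tau=0.5):
    """Illustrative contrastive loss where each node's embedding from the
    previous training iteration is the positive for its current embedding."""
    a = z_prev / np.linalg.norm(z_prev, axis=1, keepdims=True)
    b = z_curr / np.linalg.norm(z_curr, axis=1, keepdims=True)
    logits = (b @ a.T) / tau                  # cross-iteration similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(b)
    return -log_prob[np.arange(n), np.arange(n)].mean()

# Two consecutive "iterations" of embeddings for 3 nodes (small drift)
rng = np.random.default_rng(0)
z1 = rng.normal(size=(3, 4))
z2 = z1 + 0.05 * rng.normal(size=(3, 4))
loss = consecutive_iteration_loss(z1, z2)
```

Reusing the previous iteration's outputs as positives needs no second encoder, second view, or augmentation pipeline, which is consistent with the summary's claim of fewer parameters and lower time and space costs.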
arXiv Detail & Related papers (2023-12-05T09:49:50Z)
- Simple and Asymmetric Graph Contrastive Learning without Augmentations [39.301072710063636]
Asymmetric Contrastive Learning for Graphs (GraphACL) is easy to implement and does not rely on graph augmentations and homophily assumptions.
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-10-29T03:14:20Z)
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
- Graph Contrastive Learning under Heterophily via Graph Filters [51.46061703680498]
Graph contrastive learning (CL) methods learn node representations in a self-supervised manner by maximizing the similarity between the augmented node representations obtained via a GNN-based encoder.
In this work, we propose an effective graph CL method, namely HLCL, for learning graph representations under heterophily.
Our extensive experiments show that HLCL outperforms state-of-the-art graph CL methods on benchmark datasets with heterophily, as well as large-scale real-world graphs, by up to 7%, and outperforms graph supervised learning methods on datasets with heterophily by up to 10%.
arXiv Detail & Related papers (2023-03-11T08:32:39Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
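The neighborhood-ranking idea in this summary can be illustrated with a small ranking-style loss: a node is encouraged to be more similar to closer neighbors than to farther ones, with no pair declared absolutely positive or negative. The hinge formulation, hop lists, and margin below are hypothetical choices for illustration only.

```python
import numpy as np

def neighborhood_ranking_loss(Z, hop1, hop2, margin=0.1):
    """Illustrative soft-contrastive ranking loss: each node should be more
    similar to its 1-hop neighbors than to its 2-hop neighbors."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Zn @ Zn.T
    losses = []
    for i, (near, far) in enumerate(zip(hop1, hop2)):
        for a in near:
            for b in far:
                # hinge penalty when a closer neighbor is not ranked above a farther one
                losses.append(max(0.0, margin - (sim[i, a] - sim[i, b])))
    return float(np.mean(losses))

# Toy example: node 0 has 1-hop neighbor 1 and 2-hop neighbor 2
Z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
loss = neighborhood_ranking_loss(Z, hop1=[[1], [0], []], hop2=[[2], [2], []])
```

Ranking relative similarities sidesteps the hard binary positive/negative assignment that standard contrastive losses require, which is the point the summary makes about avoiding "absolutely similar pairs."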
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views on respective guidance of node attributes and graph topologies.
In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically-principled, and augmentation-free GCL, named AF-GCL, that leverages the features aggregated by Graph Neural Network to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.