HomoGCL: Rethinking Homophily in Graph Contrastive Learning
- URL: http://arxiv.org/abs/2306.09614v1
- Date: Fri, 16 Jun 2023 04:06:52 GMT
- Title: HomoGCL: Rethinking Homophily in Graph Contrastive Learning
- Authors: Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, Jian-Huang Lai
- Abstract summary: HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
- Score: 64.85392028383164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) has become the de-facto learning paradigm in
self-supervised learning on graphs, which generally follows the
"augmenting-contrasting" learning scheme. However, we observe that unlike CL in
the computer vision domain, CL in the graph domain performs decently even without
augmentation. We conduct a systematic analysis of this phenomenon and argue
that homophily, i.e., the principle that "like attracts like", plays a key role
in the success of graph CL. To leverage this property explicitly, we
propose HomoGCL, a model-agnostic framework to expand the positive set using
neighbor nodes with neighbor-specific significances. Theoretically, HomoGCL
introduces a stricter lower bound of the mutual information between raw node
features and node embeddings in augmented views. Furthermore, HomoGCL can be
combined with existing graph CL models in a plug-and-play way with light extra
computational overhead. Extensive experiments demonstrate that HomoGCL yields
multiple state-of-the-art results across six public datasets and consistently
brings notable performance improvements when applied to various graph CL
methods. Code is available at https://github.com/wenzhilics/HomoGCL.
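To make the core idea above concrete, here is a minimal sketch of an InfoNCE-style loss whose positive set is expanded with neighbor nodes, each weighted by a neighbor-specific significance. The dense adjacency interface and the `saliency` weights are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def homophily_expanded_infonce(z1, z2, adj, saliency, tau=0.5):
    """InfoNCE-style loss with a positive set expanded by graph neighbors.

    z1, z2   : (N, d) node embeddings from two augmented views
    adj      : (N, N) dense {0,1} adjacency matrix
    saliency : (N, N) nonnegative neighbor weights (a hypothetical
               stand-in for HomoGCL's neighbor-specific significances)
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)             # cross-view similarities
    denom = sim.sum(dim=1)                         # contrast against all nodes
    anchor_pos = sim.diagonal()                    # usual same-node positive
    neigh_pos = (sim * adj * saliency).sum(dim=1)  # weighted neighbor positives
    return -torch.log((anchor_pos + neigh_pos) / denom).mean()
```

With `saliency` set to all zeros this reduces to plain cross-view InfoNCE, which is consistent with the plug-and-play claim: the neighbor term can be bolted onto an existing graph CL objective.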
Related papers
- Architecture Matters: Uncovering Implicit Mechanisms in Graph
Contrastive Learning [34.566003077992384]
We present a systematic study of various graph contrastive learning (GCL) methods.
By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we theoretically provide insights into the above intriguing properties of GCL.
Rather than directly porting existing NN methods to GCL, we advocate for more attention toward the unique architecture of graph learning.
arXiv Detail & Related papers (2023-11-05T15:54:17Z)
- Simple and Asymmetric Graph Contrastive Learning without Augmentations [39.301072710063636]
Asymmetric Contrastive Learning for Graphs (GraphACL) is easy to implement and does not rely on graph augmentations and homophily assumptions.
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-10-29T03:14:20Z)
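The summary above does not spell out GraphACL's objective, so the following is only a generic sketch of what an asymmetric, augmentation-free graph objective can look like: one branch runs through a predictor while the other is a stop-gradient target built from neighbors. The class name and predictor design are assumptions, not GraphACL's published formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricGraphLoss(nn.Module):
    """Generic asymmetric objective: pull each node's predicted embedding
    toward its (stop-gradient) neighborhood average, with no augmentation.
    Illustrative only; not GraphACL's exact formulation."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, z, adj):
        # z: (N, d) node embeddings; adj: (N, N) row-normalized adjacency
        target = F.normalize((adj @ z).detach(), dim=1)  # no gradient to target
        pred = F.normalize(self.predictor(z), dim=1)
        return -(pred * target).sum(dim=1).mean()        # negative cosine
```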
- Graph Contrastive Learning under Heterophily via Graph Filters [51.46061703680498]
Graph contrastive learning (CL) methods learn node representations in a self-supervised manner by maximizing the similarity between the augmented node representations obtained via a GNN-based encoder.
In this work, we propose an effective graph CL method, namely HLCL, for learning graph representations under heterophily.
Our extensive experiments show that HLCL outperforms state-of-the-art graph CL methods on benchmark datasets with heterophily, as well as large-scale real-world graphs, by up to 7%, and outperforms graph supervised learning methods on datasets with heterophily by up to 10%.
arXiv Detail & Related papers (2023-03-11T08:32:39Z)
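As a rough illustration of contrasting filtered views under heterophily, the sketch below builds a low-pass view with the symmetrically normalized adjacency and a high-pass view with its complement. HLCL's actual filter bank may differ; this only shows the common low-/high-pass construction.

```python
import torch

def low_and_high_pass_views(x, adj):
    """Two filtered views of node features: a low-pass view that smooths
    over neighbors and a high-pass view that keeps local differences.
    A common construction; HLCL's exact filters may differ."""
    n = adj.size(0)
    a_hat = adj + torch.eye(n, device=adj.device)               # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    a_hat = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]   # sym-normalize
    low = a_hat @ x        # low-pass: neighborhood smoothing
    high = x - low         # high-pass: (I - A_hat) x, emphasizes differences
    return low, high
```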
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Moreover, existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
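A minimal sketch of the single-pass idea, assuming each node's positive is its aggregated neighborhood taken from the same forward pass, so no second augmented view is required. The row-normalized `adj` interface is an assumption; this is not SP-GCL's published loss.

```python
import torch
import torch.nn.functional as F

def single_pass_contrast(z, adj, tau=0.5):
    """Contrast each node against neighborhood summaries from one encoder
    pass. z: (N, d) embeddings; adj: (N, N) row-normalized adjacency.
    A sketch of the single-pass idea, not SP-GCL's exact objective."""
    z = F.normalize(z, dim=1)
    pos = F.normalize(adj @ z, dim=1)   # neighborhood mean as the positive
    logits = z @ pos.t() / tau          # node i vs. every neighborhood summary
    labels = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, labels)
```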
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
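"Controlling the power of an adjacency matrix" can be read as repeated feature propagation; the sketch below widens the contextual scope by applying the normalized adjacency k times. The function name and interface are illustrative, not UGCL's implementation.

```python
import torch

def contextual_scope(x, adj_norm, k):
    """Contextual representation at scope k: multiply features by the k-th
    power of the normalized adjacency (applied iteratively). Larger k means
    a wider contextual scope; k = 1 recovers the local scope.
    Illustrative sketch of the idea, not UGCL's code."""
    h = x
    for _ in range(k):
        h = adj_norm @ h
    return h

# e.g., contrast a local view against a wider contextual view:
# local, context = contextual_scope(x, a_hat, 1), contextual_scope(x, a_hat, 4)
```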
- Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z)
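A sketch of degree-dependent augmentation in GRADE's spirit: edges around high-degree nodes are dropped more aggressively than edges around low-degree nodes. The median split and the drop probabilities are hypothetical, not GRADE's exact strategies.

```python
import torch

def degree_aware_edge_drop(edge_index, num_nodes, p_low=0.1, p_high=0.4):
    """Drop edges with a probability that depends on the source node's
    degree group. edge_index: (2, E) COO edge list. The probabilities
    and the median split are hypothetical stand-ins."""
    deg = torch.bincount(edge_index[0], minlength=num_nodes).float()
    src_deg = deg[edge_index[0]]
    p = torch.where(src_deg > deg.median(),            # high- vs. low-degree
                    torch.full_like(src_deg, p_high),
                    torch.full_like(src_deg, p_low))
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]
```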
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
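Neighborhood ranking can be captured with a margin loss: for each anchor, a sampled 1-hop neighbor should score higher than a sampled 2-hop neighbor, so no pair needs an absolute positive/negative label. The sampling interface and margin below are assumptions, not GSCL's exact objective.

```python
import torch
import torch.nn.functional as F

def neighborhood_ranking_loss(z, hop1, hop2, margin=0.1):
    """Rank a 1-hop neighbor above a 2-hop neighbor for every anchor.
    z: (N, d) embeddings; hop1, hop2: (N,) sampled neighbor indices
    (one per node, sampled elsewhere). A sketch, not GSCL's exact loss."""
    z = F.normalize(z, dim=1)
    sim1 = (z * z[hop1]).sum(dim=1)   # similarity to the 1-hop neighbor
    sim2 = (z * z[hop2]).sum(dim=1)   # similarity to the 2-hop neighbor
    return F.relu(margin - (sim1 - sim2)).mean()
```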
- Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL).
GCL views a heterogeneous graph from the Euclidean and hyperbolic perspectives simultaneously, aiming to combine the strengths of modeling rich semantics and complex structures.
Extensive experiments on four benchmark datasets show that the proposed approach outperforms the strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
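Viewing a graph from a hyperbolic perspective requires a hyperbolic metric; the sketch below is the standard Poincaré-ball distance such a view could contrast against Euclidean similarities. The pairing with the Euclidean branch is left implicit and is not the paper's exact construction.

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    """Distance between points in the Poincare ball (norms must be < 1),
    the hyperbolic counterpart to a Euclidean similarity. How the paper
    combines the two geometries is not shown here."""
    sq = ((u - v) ** 2).sum(dim=-1)
    denom = (1 - (u ** 2).sum(dim=-1)) * (1 - (v ** 2).sum(dim=-1))
    x = 1 + 2 * sq / denom.clamp_min(eps)
    return torch.acosh(x.clamp_min(1 + eps))
```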
- Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically principled, and augmentation-free GCL method, named AF-GCL, that leverages the features aggregated by a Graph Neural Network to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
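A minimal sketch of augmentation-free self-supervision in AF-GCL's spirit: aggregate features with the normalized adjacency, then mine each node's most similar nodes in the aggregated space as positives. The value of k and the scoring rule are assumptions; AF-GCL's concrete selection rule may differ.

```python
import torch
import torch.nn.functional as F

def aggregation_based_positives(h, adj_norm, k=4):
    """Mine positives from GNN-style aggregated features instead of
    augmentations. h: (N, d) features; adj_norm: (N, N) normalized
    adjacency. Returns (N, k) indices of mined positives. k and the
    cosine scoring are illustrative assumptions."""
    agg = F.normalize(adj_norm @ h, dim=1)   # one aggregation step
    sim = agg @ agg.t()
    sim.fill_diagonal_(float('-inf'))        # exclude trivial self-matches
    return sim.topk(k, dim=1).indices
```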