Unifying Graph Contrastive Learning with Flexible Contextual Scopes
- URL: http://arxiv.org/abs/2210.08792v1
- Date: Mon, 17 Oct 2022 07:16:17 GMT
- Title: Unifying Graph Contrastive Learning with Flexible Contextual Scopes
- Authors: Yizhen Zheng, Yu Zheng, Xiaofei Zhou, Chen Gong, Vincent CS Lee,
Shirui Pan
- Abstract summary: We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with tunable contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
- Score: 57.86762576319638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL) has recently emerged as an effective
learning paradigm to alleviate the reliance on labelling information for graph
representation learning. The core of GCL is to maximise the mutual information
between the representation of a node and its contextual representation (i.e.,
the corresponding instance with similar semantic information) summarised from
the contextual scope (e.g., the whole graph or 1-hop neighbourhood). This
scheme distils valuable self-supervision signals for GCL training. However,
existing GCL methods still suffer from limitations, such as the difficulty or
inconvenience of choosing a suitable contextual scope for different datasets
and the risk of building biased contrastiveness. To address the aforementioned problems, we
present a simple self-supervised learning method termed Unifying Graph
Contrastive Learning with Flexible Contextual Scopes (UGCL for short). Our
algorithm builds flexible contextual representations with tunable contextual
scopes by controlling the power of an adjacency matrix. Additionally, our
method ensures contrastiveness is built within connected components to reduce
the bias of contextual representations. Based on representations from both
local and contextual scopes, UGCL optimises a very simple contrastive loss
function for graph representation learning. Essentially, the architecture of
UGCL can be considered as a general framework to unify existing GCL methods. We
have conducted intensive experiments and achieved new state-of-the-art
performance on six out of eight benchmark datasets compared with
self-supervised graph representation learning baselines. Our code has been
open-sourced.
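To make the abstract's two mechanisms concrete, here is a minimal NumPy sketch, assuming dense matrices and a cosine-similarity softmax as the "very simple" loss; the function names and the exact loss form are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def normalized_adjacency(A: np.ndarray) -> np.ndarray:
    """Symmetrically normalise an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def contextual_representation(H: np.ndarray, A_norm: np.ndarray, k: int) -> np.ndarray:
    """Summarise each node's context by propagating local embeddings H through
    the k-th power of the normalised adjacency matrix. Small k approximates a
    1-hop scope; large k approaches a whole-connected-component scope."""
    return np.linalg.matrix_power(A_norm, k) @ H

def simple_contrastive_loss(H_local: np.ndarray, H_context: np.ndarray) -> float:
    """An InfoNCE-style stand-in for the paper's 'very simple' loss: each node's
    own contextual embedding is its positive; all other nodes' contextual
    embeddings act as negatives."""
    Z1 = H_local / np.linalg.norm(H_local, axis=1, keepdims=True)
    Z2 = H_context / np.linalg.norm(H_context, axis=1, keepdims=True)
    logits = np.exp(Z1 @ Z2.T)            # pairwise cosine similarities
    return float(-np.mean(np.log(np.diag(logits) / logits.sum(axis=1))))
```

On this reading, the power k is a single knob that interpolates between the 1-hop and whole-graph scopes the abstract mentions, and because A_norm^k has zero entries between disconnected nodes, the contextual summaries stay within connected components, in the spirit of the bias-reduction claim.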
Related papers
- L^2CL: Embarrassingly Simple Layer-to-Layer Contrastive Learning for Graph Collaborative Filtering [33.165094795515785]
Graph neural networks (GNNs) have recently emerged as an effective approach to model neighborhood signals in collaborative filtering.
We propose L2CL, a principled Layer-to-Layer Contrastive Learning framework that contrasts representations from different layers.
We find that L2CL, using only a one-hop contrastive learning paradigm, is able to capture intrinsic semantic structures and improve the quality of node representations (a toy sketch of this layer-to-layer contrast appears after this list).
arXiv Detail & Related papers (2024-07-19T12:45:21Z) - Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representations at the whole-graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR), which uses the structure of the data to determine the probability that a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z) - HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Structural and Semantic Contrastive Learning for Self-supervised Node Representation Learning [32.126228702554144]
Graph Contrastive Learning (GCL) has drawn much research interest for learning generalizable, transferable, and robust node representations in a self-supervised fashion.
In this work, we go beyond the existing unsupervised GCL counterparts and address their limitations by proposing a simple yet effective framework, S$^3$-CL.
Our experiments demonstrate that the node representations learned by S$^3$-CL achieve superior performance on different downstream tasks compared to the state-of-the-art GCL methods.
arXiv Detail & Related papers (2022-02-17T07:20:09Z)
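For the L^2CL entry above, the following toy sketch (same hedges as before: illustrative names and a cosine-softmax stand-in, not the authors' code) shows one way to contrast representations from different layers of the same encoder, using a single one-hop propagation step as the next layer.

```python
import numpy as np

def layer_to_layer_contrast(H0: np.ndarray, A_norm: np.ndarray) -> float:
    """Contrast layer-0 embeddings with their one-hop (layer-1) counterparts:
    node i's own propagated embedding is the positive; every other node's
    propagated embedding serves as a negative."""
    H1 = A_norm @ H0                      # one GNN-style propagation step
    Z0 = H0 / np.linalg.norm(H0, axis=1, keepdims=True)
    Z1 = H1 / np.linalg.norm(H1, axis=1, keepdims=True)
    logits = np.exp(Z0 @ Z1.T)            # pairwise cosine similarities
    return float(-np.mean(np.log(np.diag(logits) / logits.sum(axis=1))))
```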