Coarse-to-Fine Contrastive Learning on Graphs
- URL: http://arxiv.org/abs/2212.06423v1
- Date: Tue, 13 Dec 2022 08:17:20 GMT
- Title: Coarse-to-Fine Contrastive Learning on Graphs
- Authors: Peiyao Zhao, Yuangang Pan, Xin Li, Xu Chen, Ivor W. Tsang, and Lejian
Liao
- Abstract summary: A variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner.
We introduce a self-ranking paradigm to ensure that the discriminative information among different nodes can be maintained.
Experimental results on various benchmark datasets verify the effectiveness of our algorithm.
- Score: 38.41992365090377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the impressive success of contrastive learning (CL), a variety of
graph augmentation strategies have been employed to learn node representations
in a self-supervised manner. Existing methods construct contrastive samples
by adding perturbations to the graph structure or node attributes. Although
impressive results are achieved, such methods remain blind to a wealth of prior
information implicit in the augmentation process: as the degree of perturbation
applied to the original graph increases, 1) the similarity between the original
graph and the generated augmented graph gradually decreases, and 2) the
discrimination between all nodes within each augmented view gradually
increases. In this paper, we argue that both kinds of prior information can be
incorporated (differently) into the contrastive learning paradigm following our
general ranking framework. In particular, we first interpret CL as a special
case of learning to rank (L2R), which inspires us to leverage the ranking order
among positive augmented views. Meanwhile, we introduce a self-ranking paradigm
to ensure that the discriminative information among different nodes is
maintained and is less affected by perturbations of different degrees.
Experimental results on various benchmark datasets verify the effectiveness of
our algorithm compared with supervised and unsupervised models.
Related papers
- Subgraph Networks Based Contrastive Learning [5.736011243152416]
Graph contrastive learning (GCL) can alleviate the problem of annotated data scarcity.
Most existing GCL methods focus on the design of graph augmentation strategies and mutual information estimation operations.
We propose a novel framework called subgraph network-based contrastive learning (SGNCL).
arXiv Detail & Related papers (2023-06-06T08:52:44Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) performs augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures; a minimal sketch of this idea appears after this list.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings.
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- GraphCL: Contrastive Self-Supervised Learning of Graph Representations [20.439666392958284]
We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner.
We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them.
In both transductive and inductive learning setups, we demonstrate that our approach significantly outperforms the state-of-the-art in unsupervised learning on a number of node classification benchmarks.
arXiv Detail & Related papers (2020-07-15T22:36:53Z)
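As promised above, here is a rough, hypothetical sketch of centrality-based adaptive augmentation in the spirit of the adaptive-augmentation paper: edges attached to low-centrality nodes are dropped with higher probability, so that important connective structures are more likely to survive. The function names, the degree-based centrality, and the normalization are assumptions for illustration, not the paper's actual code.

```python
import torch

def degree_drop_weights(edge_index, num_nodes, p_mean=0.3, p_max=0.7):
    """Hypothetical sketch: assign higher drop probability to edges whose
    endpoints have low (degree-based) centrality.

    edge_index: (2, E) tensor of source/target node ids.
    Returns an (E,) tensor of per-edge drop probabilities.
    """
    src, dst = edge_index
    # Node degree as a simple centrality measure.
    deg = torch.bincount(torch.cat([src, dst]), minlength=num_nodes).float()
    # Edge centrality: log of the mean endpoint degree.
    cent = torch.log1p((deg[src] + deg[dst]) / 2.0)
    # Central edges get a low drop probability; cap everything at p_max.
    weight = (cent.max() - cent) / (cent.max() - cent.mean() + 1e-12)
    return torch.clamp(weight * p_mean, max=p_max)

def drop_edges(edge_index, drop_prob):
    """Sample an augmented edge set by dropping each edge independently."""
    keep = torch.rand(drop_prob.size(0)) >= drop_prob
    return edge_index[:, keep]
```

Calling drop_edges twice with the same weights yields two correlated views whose shared edges are biased toward structurally important ones.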