Graph Contrastive Learning with Implicit Augmentations
- URL: http://arxiv.org/abs/2211.03710v1
- Date: Mon, 7 Nov 2022 17:34:07 GMT
- Title: Graph Contrastive Learning with Implicit Augmentations
- Authors: Huidong Liang, Xingjian Du, Bilei Zhu, Zejun Ma, Ke Chen, Junbin Gao
- Abstract summary: Implicit Graph Contrastive Learning (iGCL) performs augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
- Score: 36.57536688367965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing graph contrastive learning methods rely on augmentation techniques
based on random perturbations (e.g., randomly adding or dropping edges and
nodes). Nevertheless, altering certain edges or nodes can unexpectedly change
the graph characteristics, and choosing the optimal perturbing ratio for each
dataset requires onerous manual tuning. In this paper, we introduce Implicit
Graph Contrastive Learning (iGCL), which utilizes augmentations in the latent
space learned from a Variational Graph Auto-Encoder by reconstructing graph
topological structure. Importantly, instead of explicitly sampling
augmentations from latent distributions, we further propose an upper bound for
the expected contrastive loss to improve the efficiency of our learning
algorithm. Thus, graph semantics can be preserved within the augmentations in
an intelligent way without arbitrary manual design or prior human knowledge.
Experimental results on both graph-level and node-level tasks show that the
proposed method achieves state-of-the-art performance compared to other
baselines, and ablation studies demonstrate the effectiveness of the
modules in iGCL.
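The core idea in the abstract — drawing augmented views from a learned latent distribution instead of perturbing the graph directly — can be illustrated with a minimal numpy sketch. The sampling uses the standard VAE reparameterization trick, and the views are scored with a generic InfoNCE-style contrastive loss; the function names, toy values, and the use of a plain InfoNCE objective (rather than the paper's upper bound on the expected loss) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent_augmentation(mu, logvar, rng):
    """Reparameterization trick: draw an 'augmented view' z = mu + sigma * eps
    from the latent distribution produced by a (hypothetical) VGAE encoder."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def info_nce(anchor, positives, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss over a batch of embeddings.
    anchor[i] and positives[i] form a positive pair; other rows are negatives."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy "encoder output" for 4 nodes with 8-d latents (placeholder values).
mu = rng.standard_normal((4, 8))
logvar = np.full((4, 8), -2.0)  # small variance -> views stay close to mu

view = sample_latent_augmentation(mu, logvar, rng)
loss = info_nce(mu, view)
print(float(loss))
```

Because the views live in latent space, no edges or nodes are ever dropped, which is what lets the method sidestep the perturbation-ratio tuning criticized in the abstract.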
Related papers
- Randomized Schur Complement Views for Graph Contrastive Learning [0.0]
We introduce a randomized topological augmentor based on Schur complements for Graph Contrastive Learning (GCL).
Given a graph Laplacian matrix, the technique generates unbiased approximations of its Schur complements and treats the corresponding graphs as augmented views.
arXiv Detail & Related papers (2023-06-06T20:35:20Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Coarse-to-Fine Contrastive Learning on Graphs [38.41992365090377]
A variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner.
We introduce a self-ranking paradigm to ensure that the discriminative information among different nodes can be maintained.
Experiment results on various benchmark datasets verify the effectiveness of our algorithm.
arXiv Detail & Related papers (2022-12-13T08:17:20Z)
- Label-invariant Augmentation for Semi-Supervised Graph Classification [32.591130704357184]
Recently, contrastive-learning-based augmentation has surged to a new peak in the computer vision domain.
Unlike images, it is much more difficult to design reasonable augmentations without changing the nature of graphs.
We propose a label-invariant augmentation for graph-structured data to address this challenge.
arXiv Detail & Related papers (2022-05-19T18:44:02Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
In contrast to existing techniques, which embed each node as a deterministic vector, our method represents each node by a distribution in the latent space.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
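The centrality-guided scheme described above can be sketched as edge dropping whose probability decreases with the importance of an edge's endpoints. The sketch below uses degree as the centrality measure and a linear drop schedule; both choices, along with all names and toy values, are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_edge_drop(edges, num_nodes, p_max=0.7, rng=rng):
    """Drop edges with probability inversely related to endpoint centrality,
    so edges attached to high-degree (important) nodes are kept more often."""
    deg = np.zeros(num_nodes)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Edge importance = mean log-degree of its endpoints, normalized to [0, 1].
    w = np.array([(np.log1p(deg[u]) + np.log1p(deg[v])) / 2 for u, v in edges])
    w = (w - w.min()) / (w.max() - w.min() + 1e-9)
    drop_prob = p_max * (1.0 - w)  # important edges get a low drop probability
    keep = rng.random(len(edges)) >= drop_prob
    return [e for e, k in zip(edges, keep) if k]

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
kept = adaptive_edge_drop(edges, num_nodes=6)
print(len(kept), "of", len(edges), "edges kept")
```

The point of the design is that the two contrastive views disagree mostly on peripheral structure while the "important connective structures" survive in both.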
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
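The iterative gradient-based perturbation FLAG describes can be sketched as projected gradient ascent on node features. To keep the sketch self-contained it uses a toy linear layer with a hand-computed gradient instead of a real GNN and autograd; the function name, step counts, and radii are illustrative assumptions, not FLAG's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def flag_perturb(x, w, y, steps=3, step_size=0.01, eps=0.03):
    """FLAG-style sketch: ascend the gradient of a toy squared-error loss for a
    linear layer x @ w with respect to a feature perturbation delta, projecting
    delta back into an L-inf ball of radius eps after every step.
    (The real FLAG backpropagates through a full GNN during training.)"""
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        residual = (x + delta) @ w - y             # forward pass of the toy model
        grad = 2.0 * residual @ w.T                # d(loss)/d(delta), by hand
        delta = delta + step_size * np.sign(grad)  # gradient-ascent step
        delta = np.clip(delta, -eps, eps)          # project onto the eps ball
    return x + delta

x = rng.standard_normal((5, 4))   # 5 nodes, 4 features (toy data)
w = rng.standard_normal((4, 2))
y = rng.standard_normal((5, 2))

x_aug = flag_perturb(x, w, y)
print(np.abs(x_aug - x).max())
```

Because the perturbation targets features rather than topology, the same routine applies unchanged across node classification, link prediction, and graph classification, which is what makes the approach general-purpose.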
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, we propose two novel unsupervised graph embedding methods: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.