Graph Barlow Twins: A self-supervised representation learning framework
for graphs
- URL: http://arxiv.org/abs/2106.02466v3
- Date: Tue, 12 Sep 2023 14:53:38 GMT
- Title: Graph Barlow Twins: A self-supervised representation learning framework
for graphs
- Authors: Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla
- Abstract summary: We propose Graph Barlow Twins, a framework for self-supervised graph representation learning.
It utilizes a cross-correlation-based loss function instead of negative samples.
We show that our method achieves results competitive with both the best self-supervised methods and fully supervised ones.
- Score: 25.546290138565393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The self-supervised learning (SSL) paradigm is an active research area that aims to eliminate the need for expensive data labeling. Despite the
great success of SSL methods in computer vision and natural language
processing, most of them employ contrastive learning objectives that require
negative samples, which are hard to define. This becomes even more challenging
in the case of graphs and is a bottleneck for achieving robust representations.
To overcome such limitations, we propose Graph Barlow Twins, a framework for self-supervised graph representation learning that utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures, in contrast to the state-of-the-art self-supervised graph representation learning method BGRL. We show that our method achieves results competitive with both the best self-supervised methods and fully supervised ones, while requiring fewer hyperparameters and substantially less computation time (ca. 30 times faster than BGRL).
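To make the objective concrete, here is a minimal PyTorch sketch of a Barlow Twins-style cross-correlation loss applied to node embeddings from two augmented views of the same graph. The normalization details and the trade-off weight `lambda_` are illustrative assumptions, not the authors' exact implementation.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambda_: float = 5e-3) -> torch.Tensor:
    """Cross-correlation loss over two views' node embeddings, each of shape (N, D).

    Sketch only: lambda_ and the epsilon in the normalization are illustrative.
    """
    n = z_a.size(0)
    # Standardize each embedding dimension across the N nodes.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-8)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-8)
    # Empirical (D, D) cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n
    # Invariance term: pull diagonal entries toward 1 ...
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # ... redundancy-reduction term: push off-diagonal entries toward 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_ * off_diag
```

Because the loss is symmetric in the two views, both can be produced by the same encoder; no negative samples and no momentum or target network are required.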
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - LocalGCL: Local-aware Contrastive Learning for Graphs [17.04219759259025]
We propose Local-aware Graph Contrastive Learning (LocalGCL) as a graph representation learner.
Experiments validate the superiority of LocalGCL against state-of-the-art methods, demonstrating its promise as a comprehensive graph representation learner.
arXiv Detail & Related papers (2024-02-27T09:23:54Z) - Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs (see the sketch after this list).
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z) - A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets
Spiking Neural Networks [35.35462459134551]
SpikeGCL is a novel framework to learn binarized 1-bit representations for graphs.
We provide theoretical guarantees demonstrating that SpikeGCL has expressiveness comparable to that of its full-precision counterparts.
arXiv Detail & Related papers (2023-05-30T16:03:11Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs with respect to instance discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL)
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - Decoupled Self-supervised Learning for Non-Homophilous Graphs [36.87585427004317]
We develop a decoupled self-supervised learning framework for graph neural networks.
DSSL imitates a generative process of nodes and links from latent variable modeling of the semantic structure.
Our framework is agnostic to the encoders and does not need prefabricated augmentations.
arXiv Detail & Related papers (2022-06-07T21:58:29Z) - Learning Robust Representation through Graph Adversarial Contrastive
Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (see the sketch after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most of existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
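The SGCL entry above describes using the encoder's outputs from two consecutive training iterations as positive pairs. Below is a hedged Python sketch of that idea; the cosine objective and the detach-based caching are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def consecutive_iteration_loss(z_curr: torch.Tensor, z_prev: torch.Tensor) -> torch.Tensor:
    """Treat each node's embedding from the previous iteration as its positive.

    z_curr: (N, D) node embeddings from the current iteration.
    z_prev: (N, D) node embeddings cached from the previous iteration.
    """
    z_prev = z_prev.detach()  # no gradients flow through the cached embeddings
    # Maximize the per-node cosine similarity across consecutive iterations.
    return (1 - F.cosine_similarity(z_curr, z_prev, dim=-1)).mean()
```

A training loop would simply cache each iteration's output and pass it as `z_prev` in the next step, which avoids explicit augmentations and negative sampling.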
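Likewise, the unsupervised graph structure learning entry above maximizes agreement between an "anchor graph" and the learned graph. A minimal InfoNCE-style sketch of node-level agreement between the two views follows; the temperature and the choice of cosine similarity are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def node_agreement_loss(z_anchor: torch.Tensor, z_learned: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Contrastive agreement between embeddings computed on the anchor graph
    (z_anchor) and on the learned graph (z_learned), both of shape (N, D);
    row i of each matrix corresponds to the same node and forms the positive pair."""
    z_a = F.normalize(z_anchor, dim=-1)
    z_l = F.normalize(z_learned, dim=-1)
    logits = (z_a @ z_l.T) / tau                             # (N, N) cross-view similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)   # node i matches node i
    return F.cross_entropy(logits, targets)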
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.