Augmentation-Free Self-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2112.02472v2
- Date: Tue, 7 Dec 2021 02:42:18 GMT
- Title: Augmentation-Free Self-Supervised Learning on Graphs
- Authors: Namkyeong Lee, Junseok Lee, Chanyoung Park
- Abstract summary: We propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL.
Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph.
- Score: 7.146027549101716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the recent success of self-supervised methods applied on images,
self-supervised learning on graph structured data has seen rapid growth
especially centered on augmentation-based contrastive methods. However, we
argue that without carefully designed augmentation techniques, augmentations on
graphs may behave arbitrarily in that the underlying semantics of graphs can
drastically change. As a consequence, the performance of existing
augmentation-based methods is highly dependent on the choice of augmentation
scheme, i.e., hyperparameters associated with augmentations. In this paper, we
propose a novel augmentation-free self-supervised learning framework for
graphs, named AFGRL. Specifically, we generate an alternative view of a graph
by discovering nodes that share the local structural information and the global
semantics with the graph. Extensive experiments on various node-level
tasks, i.e., node classification, clustering, and similarity search, on various
real-world datasets demonstrate the superiority of AFGRL. The source code for
AFGRL is available at https://github.com/Namkyeong/AFGRL.
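To make the augmentation-free idea above concrete, the sketch below illustrates the kind of positive discovery the abstract describes: candidate positives for each node are its k-nearest neighbours in representation space, which are then filtered by local structure (graph adjacency) and by global semantics (k-means cluster membership). The function name, tensor layouts, and hyperparameters (k_nn, n_clusters) are illustrative assumptions, not the authors' implementation; the linked repository is the authoritative reference.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def discover_positives(h_online, h_target, adj, k_nn=8, n_clusters=16):
    """Illustrative augmentation-free positive discovery (AFGRL-style sketch).

    h_online, h_target: (N, d) node embeddings from two encoders.
    adj: (N, N) boolean adjacency matrix (torch.bool).
    k_nn, n_clusters: illustrative defaults, not values from the paper.
    Returns one tensor of positive node indices per node.
    """
    h_o = F.normalize(h_online, dim=1)
    h_t = F.normalize(h_target, dim=1)

    # Candidate positives: k-nearest neighbours in representation space.
    sim = h_o @ h_t.t()                              # (N, N) cosine similarity
    knn = sim.topk(k_nn, dim=1).indices              # (N, k_nn)

    # Global semantics: k-means cluster assignments over the target embeddings.
    labels = torch.as_tensor(
        KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
            h_t.detach().cpu().numpy())
    ).to(h_o.device)

    positives = []
    for i in range(h_o.size(0)):
        cand = knn[i]
        local = cand[adj[i, cand]]                   # also graph neighbours (local structure)
        global_ = cand[labels[cand] == labels[i]]    # same cluster (global semantics)
        self_idx = torch.tensor([i], device=cand.device)
        positives.append(torch.unique(torch.cat([self_idx, local, global_])))
    return positives
```

In AFGRL, positive sets of this kind take the place of augmented views in a bootstrapping (BYOL-style) objective, so no hand-tuned augmentation scheme is required.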
Related papers
- Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning [13.494832603509897]
Graph representation learning (GRL) has emerged as an effective technique, achieving performance improvements in a wide range of tasks such as node classification and graph classification.
We propose a novel method, Explanation-Preserving Augmentation (EPA), that leverages graph explanation techniques for generating augmented graphs.
EPA first uses a small number of labels to train a graph explainer to infer the sub-structures (explanations) that are most relevant to a graph's semantics.
arXiv Detail & Related papers (2024-10-16T15:18:03Z)
- Hybrid Augmented Automated Graph Contrastive Learning [3.785553471764994]
We propose a framework called Hybrid Augmented Automated Graph Contrastive Learning (HAGCL).
HAGCL consists of a feature-level learnable view generator and an edge-level learnable view generator.
This ensures that the most semantically meaningful structure, in terms of both features and topology, is learned.
arXiv Detail & Related papers (2023-03-24T03:26:20Z)
- Graph Contrastive Learning with Personalized Augmentation [17.714437631216516]
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph Contrastive Learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector. A generic sketch of the contrastive objective that these methods share follows this entry.
arXiv Detail & Related papers (2022-09-14T11:37:48Z)
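As context for the contrastive methods listed here (GPA above; RGCL, GraphCoCo, and GraphCL below), most of them optimize some variant of the NT-Xent/InfoNCE objective between two views of the same nodes or graphs. Below is a generic, simplified cross-view version for node embeddings; the temperature is an illustrative default, and intra-view negatives, which many implementations also use, are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Simplified cross-view NT-Xent / InfoNCE loss.

    z1, z2: (N, d) embeddings of the same N nodes (or graphs) under two views.
    Positive pairs are (z1[i], z2[i]); all other rows act as negatives.
    A textbook formulation, not any specific paper's implementation.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # Symmetrise so each view serves once as anchor and once as target.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```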
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Learning Graph Augmentations to Learn Graph Representations [13.401746329218017]
LG2AR is an end-to-end automatic graph augmentation framework.
It helps encoders learn generalizable representations on both node and graph levels.
It achieves state-of-the-art results on 18 out of 20 graph-level and node-level benchmarks.
arXiv Detail & Related papers (2022-01-24T17:50:06Z)
- Bootstrapping Informative Graph Augmentation via A Meta Learning Approach [21.814940639910358]
In graph contrastive learning, benchmark methods apply various graph augmentation approaches.
Most of these augmentation methods are non-learnable, which can yield uninformative augmented graphs.
We instead generate augmented graphs with a learnable graph augmenter, called MEta Graph Augmentation (MEGA).
arXiv Detail & Related papers (2022-01-11T07:15:13Z)
- Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model that contrasts cluster assignments, called GRCCA.
It is designed to make good use of both local and global information by combining clustering algorithms with contrastive learning.
GRCCA is strongly competitive in most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods. A generic sketch of such augmentations follows this entry.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
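The "augmentations" that GraphCL and related methods rely on, and that AFGRL avoids, are typically simple stochastic corruptions of the graph. The sketch below shows two common ones, edge dropping and attribute masking, written against PyTorch Geometric-style tensors (x of shape (N, F), edge_index of shape (2, E)); the drop probabilities are illustrative defaults, not values from any of the papers above.

```python
import torch

def drop_edges(edge_index, p=0.2):
    """Randomly remove a fraction p of edges; edge_index has shape (2, E)."""
    keep = torch.rand(edge_index.size(1), device=edge_index.device) >= p
    return edge_index[:, keep]

def mask_features(x, p=0.2):
    """Zero out each feature column with probability p; x has shape (N, F)."""
    mask = (torch.rand(x.size(1), device=x.device) >= p).float()
    return x * mask  # broadcasts the column mask over all nodes

# Two independent applications yield the two stochastic "views" that
# augmentation-based contrastive methods compare:
# view_1 = (mask_features(data.x), drop_edges(data.edge_index))
# view_2 = (mask_features(data.x), drop_edges(data.edge_index))
```

Embeddings of the two views are then pulled together by a contrastive objective such as the NT-Xent sketch shown earlier in this list.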
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which works universally in node classification, link prediction, and graph classification tasks. A compact sketch of the perturbation loop follows this entry.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
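FLAG's procedure is compact enough to sketch: node features are perturbed adversarially inside each training step, and gradients accumulate across the inner ascent steps so the perturbation comes essentially for free. The model signature, the PyG-style data object, and the hyperparameters below are assumptions for illustration; the official FLAG implementation is the reference.

```python
import torch

def flag_step(model, data, y, loss_fn, optimizer, m=3, step_size=1e-3):
    """One optimisation step with FLAG-style adversarial feature perturbation.

    Assumes `model(x, edge_index)` returns logits and `data` is a PyG-style
    object with `x` and `edge_index`; `m` and `step_size` are illustrative.
    """
    model.train()
    optimizer.zero_grad()

    # Start from a small random perturbation of the node features.
    perturb = torch.empty_like(data.x).uniform_(-step_size, step_size)
    perturb.requires_grad_()

    out = model(data.x + perturb, data.edge_index)
    loss = loss_fn(out, y) / m
    loss.backward()

    for _ in range(m - 1):
        # Ascent step on the perturbation; model gradients keep accumulating,
        # so the extra adversarial steps share a single optimizer update.
        perturb = (perturb.detach()
                   + step_size * torch.sign(perturb.grad.detach())).requires_grad_()
        out = model(data.x + perturb, data.edge_index)
        loss = loss_fn(out, y) / m
        loss.backward()

    optimizer.step()
    return float(loss.detach())
```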
This list is automatically generated from the titles and abstracts of the papers on this site.