Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised
Strategy for Pre-training Graph Neural Networks
- URL: http://arxiv.org/abs/2110.13567v1
- Date: Tue, 26 Oct 2021 10:51:13 GMT
- Title: Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised
Strategy for Pre-training Graph Neural Networks
- Authors: Pengyong Li, Jun Wang, Ziliang Li, Yixuan Qiao, Xianggen Liu, Fei Ma,
Peng Gao, Sen Song, Guotong Xie
- Abstract summary: We propose a simple and effective self-supervised pre-training strategy, named Pairwise Half-graph Discrimination (PHD).
PHD explicitly pre-trains a graph neural network at the graph level.
- Score: 17.976090901276905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning has gradually emerged as a powerful technique for
graph representation learning. However, learning transferable, generalizable, and
robust representations on graph data remains a challenge when pre-training graph
neural networks. In this paper, we propose a simple and effective self-supervised
pre-training strategy, named Pairwise Half-graph Discrimination (PHD), that
explicitly pre-trains a graph neural network at the graph level. PHD is designed
as a simple binary classification task to
discriminate whether two half-graphs come from the same source. Experiments
demonstrate that PHD is an effective pre-training strategy, offering comparable
or superior performance on 13 graph classification tasks compared with
state-of-the-art strategies, and achieving notable improvements when combined
with node-level strategies. Moreover, visualization of the learned
representations reveals that the PHD strategy indeed empowers the model to learn
graph-level knowledge such as the molecular scaffold. These results establish
PHD as a powerful and effective self-supervised learning strategy for
graph-level representation learning.
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - Imbalanced Graph Classification with Multi-scale Oversampling Graph Neural Networks [25.12261412297796]
We introduce a novel multi-scale oversampling graph neural network (MOSGNN) that learns expressive minority graph representations.
It achieves this by jointly optimizing subgraph-level, graph-level, and pairwise-graph learning tasks.
Experiments on 16 imbalanced graph datasets show that MOSGNN significantly outperforms five state-of-the-art models.
arXiv Detail & Related papers (2024-05-08T09:16:54Z) - Self-supervised Learning and Graph Classification under Heterophily [4.358149865548289]
We propose PGM, a novel metric-based self-supervised strategy for pre-training graph neural networks (GNNs).
Our strategy achieves state-of-the-art performance for molecular property prediction and protein function prediction.
arXiv Detail & Related papers (2023-06-14T12:32:38Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that, when compared to state-of-the-art Graph Neural Networks (GNNs), the proposed model achieves comparable accuracy.
arXiv Detail & Related papers (2022-05-16T17:32:58Z) - Learning Robust Representation through Graph Adversarial Contrastive
Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z) - Hierarchical Adaptive Pooling by Capturing High-order Dependency for
Graph Representation Learning [18.423192209359158]
Graph neural networks (GNN) have been proven to be mature enough for handling graph-structured data on node-level graph representation learning tasks.
This paper proposes a hierarchical graph-level representation learning framework, which is adaptively sensitive to graph structures.
arXiv Detail & Related papers (2021-04-13T06:22:24Z) - Graph Representation Learning by Ensemble Aggregating Subgraphs via
Mutual Information Maximization [5.419711903307341]
We introduce a self-supervised learning method to enhance the graph-level representations learned by Graph Neural Networks.
To obtain a comprehensive understanding of the graph structure, we propose an ensemble-learning-like subgraph method.
To achieve efficient and effective contrastive learning, a Head-Tail contrastive sample construction method is proposed.
arXiv Detail & Related papers (2021-03-24T12:06:12Z) - Sub-graph Contrast for Scalable Self-Supervised Graph Representation
Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to computation and memory limits.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.