Improving Subgraph Representation Learning via Multi-View Augmentation
- URL: http://arxiv.org/abs/2205.13038v1
- Date: Wed, 25 May 2022 20:17:13 GMT
- Title: Improving Subgraph Representation Learning via Multi-View Augmentation
- Authors: Yili Shen, Jiaxu Yan, Cheng-Wei Ju, Jun Yi, Zhou Lin and Hui Guan
- Abstract summary: Subgraph representation learning based on Graph Neural Networks (GNNs) has broad applications in chemistry and biology.
We develop a novel multi-view augmentation mechanism to improve subgraph representation learning and thus the accuracy of downstream prediction tasks.
- Score: 6.907772294522709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Subgraph representation learning based on Graph Neural Networks (GNNs) has
broad applications in chemistry and biology, such as molecular property
prediction and gene collaborative function prediction. Meanwhile, graph
augmentation techniques have shown promising results in improving graph-level
and node-level classification tasks but are rarely explored in the GNN-based
subgraph representation learning literature. In this work, we develop a novel
multi-view augmentation mechanism to improve subgraph representation learning
and thus the accuracy of downstream prediction tasks. The augmentation
technique creates multiple variants of each subgraph and embeds these variants into
the original graph to achieve high training efficiency, scalability, and
improved accuracy. Experiments on several real-world subgraph benchmarks
demonstrate the superiority of our proposed multi-view augmentation technique.
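To make the idea concrete, here is a minimal sketch of multi-view subgraph augmentation, assuming two simple stochastic views (node dropping and edge dropping); the function names and the choice of views are illustrative, not the authors' exact design.

```python
# A minimal sketch of multi-view subgraph augmentation; the two view types
# (node drop, edge drop) and all names here are illustrative assumptions.
import random

def node_drop_view(sub_nodes, sub_edges, p=0.1, rng=random):
    """Drop each subgraph node (and its incident edges) with probability p."""
    kept = {v for v in sub_nodes if rng.random() > p}
    edges = [(u, v) for (u, v) in sub_edges if u in kept and v in kept]
    return kept, edges

def edge_drop_view(sub_nodes, sub_edges, p=0.1, rng=random):
    """Drop each subgraph edge with probability p, keeping all nodes."""
    edges = [(u, v) for (u, v) in sub_edges if rng.random() > p]
    return set(sub_nodes), edges

def multi_view_subgraphs(sub_nodes, sub_edges, n_views=4):
    """Create several stochastic variants of one subgraph; each variant is
    then embedded in the context of the full graph by the subgraph GNN."""
    views = []
    for _ in range(n_views):
        aug = random.choice([node_drop_view, edge_drop_view])
        views.append(aug(sub_nodes, sub_edges))
    return views

# Example: a 4-node subgraph with 4 edges, expanded into 4 augmented views.
nodes = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for v_nodes, v_edges in multi_view_subgraphs(nodes, edges):
    print(sorted(v_nodes), v_edges)
```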
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of finetuned LM.
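As a hedged sketch of this two-stage recipe, the snippet below uses LoRA (via the `peft` library, one common PEFT choice) around a Hugging Face LM and mean-pools the last hidden states into node embeddings; the backbone name, label count, and pooling are assumptions.

```python
# Stage 1: parameter-efficient fine-tuning of an LM on the downstream task.
# Stage 2: node embeddings from the tuned LM's last hidden states.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

model_name = "roberta-base"                       # illustrative backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)

# Wrap the LM with LoRA adapters; the task-specific training loop is omitted,
# and only the adapter (and classifier head) weights would be updated.
lm = get_peft_model(lm, LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16))

@torch.no_grad()
def node_embeddings(texts):
    """Mean-pool the last hidden states over non-padding tokens."""
    tok = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**tok, output_hidden_states=True).hidden_states[-1]
    mask = tok["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)   # [num_nodes, hidden_dim]

emb = node_embeddings(["paper title and abstract ...", "another node's text"])
print(emb.shape)  # feed these embeddings to any downstream GNN
```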
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Learning from Heterogeneity: A Dynamic Learning Framework for Hypergraphs [22.64740740462169]
We propose a hypergraph learning framework named LFH that is capable of dynamic hyperedge construction and attentive embedding update.
To evaluate the effectiveness of our proposed framework, we conduct comprehensive experiments on several popular datasets.
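Dynamic hyperedge construction can be illustrated with a simple k-nearest-neighbor rule in the current embedding space; this is one plausible instantiation, not the paper's exact procedure.

```python
# Hypothetical sketch: each node plus its k nearest neighbors in the current
# embedding space forms a hyperedge, recomputed as embeddings are updated.
import torch

def dynamic_hyperedges(emb, k=3):
    d = torch.cdist(emb, emb)                     # pairwise distances [N, N]
    d.fill_diagonal_(float("inf"))                # exclude self-matches
    nbrs = d.topk(k, largest=False).indices       # k nearest neighbors per node
    return [sorted({i, *nbrs[i].tolist()}) for i in range(emb.size(0))]

emb = torch.randn(8, 16)                          # current node embeddings
print(dynamic_hyperedges(emb))                    # rebuilt after each update
```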
arXiv Detail & Related papers (2023-07-07T06:26:44Z)
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
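A simplified reading of the gating mechanism: each node and channel updates at its own rate, with the rate set by local graph gradients of an auxiliary GNN's output. The sketch below fixes p = 2 and uses a toy message-passing layer; both are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """Mean-style message passing followed by a linear transform (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
    def forward(self, x, adj_norm):               # adj_norm: row-normalized [N, N]
        return torch.tanh(self.lin(adj_norm @ x))

class GradientGatedLayer(nn.Module):
    """Update (1 - tau) * x + tau * F(x), with a per-node, per-channel rate tau
    driven by local graph gradients of an auxiliary GNN output (p = 2 here)."""
    def __init__(self, dim):
        super().__init__()
        self.f = SimpleGNNLayer(dim)              # main message-passing update
        self.f_hat = SimpleGNNLayer(dim)          # auxiliary GNN for the gate
    def forward(self, x, adj_norm, adj):          # adj: binary adjacency [N, N]
        y = self.f_hat(x, adj_norm)
        deg = adj.sum(1, keepdim=True)
        # Graph gradient per channel: sum_j A_ij * (y_j - y_i)^2, expanded.
        grad = adj @ y.pow(2) - 2 * y * (adj @ y) + deg * y.pow(2)
        tau = torch.sigmoid(grad)                 # multi-rate gate in (0, 1)
        return (1 - tau) * x + tau * self.f(x, adj_norm)

# Shape check on a random 5-node graph.
adj = (torch.rand(5, 5) < 0.5).float().triu(1); adj = adj + adj.t()
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)
print(GradientGatedLayer(8)(torch.randn(5, 8), adj_norm, adj).shape)
```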
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
- Ordered Subgraph Aggregation Networks [19.18478955240166]
Subgraph-enhanced graph neural networks (GNNs) have emerged, provably boosting the expressive power of standard (message-passing) GNNs.
Here, we introduce a theoretical framework and extend the known expressivity results of subgraph-enhanced GNNs.
We show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations.
arXiv Detail & Related papers (2022-06-22T15:19:34Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments over 13 datasets, and we observe that GRAPHRETRIEVAL is able to reach substantial improvements over existing GNNs.
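The model-agnostic retrieval step can be sketched as follows: embed the query graph with any GNN, retrieve its k most similar training graphs, and mix a similarity-weighted label vote into the base prediction. The cosine retrieval and the mixing weight `alpha` are illustrative choices, not details from the paper.

```python
import torch
import torch.nn.functional as F

def retrieve(query_emb, store_embs, store_labels, k=5):
    """Return the labels and similarities of the k most similar stored graphs."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), store_embs, dim=-1)
    topk = sims.topk(k).indices
    return store_labels[topk], sims[topk]

def retrieval_enhanced_probs(base_logits, query_emb, store_embs,
                             store_labels, num_classes, alpha=0.5, k=5):
    labels, sims = retrieve(query_emb, store_embs, store_labels, k)
    weights = torch.softmax(sims, dim=0)          # similarity-weighted vote
    neighbor_dist = torch.zeros(num_classes).index_add_(0, labels, weights)
    return (1 - alpha) * base_logits.softmax(-1) + alpha * neighbor_dist

# Toy usage: 100 stored graph embeddings with labels in {0, 1, 2}.
store = torch.randn(100, 16)
labels = torch.randint(0, 3, (100,))
probs = retrieval_enhanced_probs(torch.randn(3), torch.randn(16), store, labels, 3)
print(probs)  # mixture of the model's prediction and retrieved evidence
```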
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Bag of Tricks of Semi-Supervised Classification with Graph Neural Networks [0.0]
In this paper, we first summarize a collection of existing refinements, and then propose several novel techniques regarding these model designs and label usage.
We empirically evaluate their impact on the final model accuracy through ablation studies, and show that these techniques significantly improve various GNN models, to the extent that the gains outweigh those from model architecture improvements.
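One representative label-usage trick from this line of work, sketched under assumptions: concatenate randomly masked one-hot training labels to the node features, so the GNN can propagate known labels to unlabeled neighbors without trivially copying its input.

```python
import torch

def augment_features_with_labels(x, y, train_mask, num_classes, keep_prob=0.5):
    """x: [N, F] features, y: [N] labels, train_mask: [N] bool.
    Reveal each training label only with probability keep_prob."""
    onehot = torch.zeros(x.size(0), num_classes)
    revealed = train_mask & (torch.rand(x.size(0)) < keep_prob)
    onehot[revealed] = torch.eye(num_classes)[y[revealed]]
    return torch.cat([x, onehot], dim=1)          # [N, F + num_classes]

x = torch.randn(6, 4)
y = torch.tensor([0, 1, 2, 0, 1, 2])
mask = torch.tensor([True, True, True, False, False, False])
print(augment_features_with_labels(x, y, mask, num_classes=3).shape)
```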
arXiv Detail & Related papers (2021-03-24T17:24:26Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
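The diversity-boosting idea can be sketched as a pairwise penalty that discourages different scales from producing redundant embeddings; the exact form of DBR in the paper may differ from this illustration.

```python
import torch
import torch.nn.functional as F

def diversity_regularizer(scale_embs):
    """scale_embs: list of [N, D] embeddings, one per scale; penalize pairwise
    cosine similarity so each scale captures complementary structure."""
    penalty, count = 0.0, 0
    for i in range(len(scale_embs)):
        for j in range(i + 1, len(scale_embs)):
            cos = F.cosine_similarity(scale_embs[i], scale_embs[j], dim=-1)
            penalty, count = penalty + cos.abs().mean(), count + 1
    return penalty / max(count, 1)

reg = diversity_regularizer([torch.randn(10, 16) for _ in range(3)])
print(float(reg))  # add lambda * reg to the task loss
```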
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
- SUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism [33.135006052347194]
This paper presents a novel hierarchical subgraph-level selection and embedding based graph neural network for graph classification, namely SUGAR.
SUGAR reconstructs a sketched graph by extracting striking subgraphs as the representative part of the original graph to reveal subgraph-level patterns.
To differentiate subgraph representations among graphs, we present a self-supervised mutual information mechanism that encourages subgraph embeddings to capture global graph structural properties by maximizing their mutual information.
arXiv Detail & Related papers (2021-01-20T15:06:16Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Stacking many of them, however, tends to degrade performance; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
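A condensed sketch of adaptive deep aggregation in this spirit: transform features once, propagate them over many hops, and let a learned per-node retention score weight each hop's representation. The score head and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveDeepGNN(nn.Module):
    def __init__(self, in_dim, hidden, num_classes, hops=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))
        self.score = nn.Linear(num_classes, 1)    # retention score per hop
        self.hops = hops

    def forward(self, x, adj_norm):               # adj_norm: normalized [N, N]
        h = self.mlp(x)                           # transform once
        hs = [h]
        for _ in range(self.hops):                # propagate K times, keep all
            hs.append(adj_norm @ hs[-1])
        H = torch.stack(hs, dim=1)                # [N, K+1, C]
        s = torch.sigmoid(self.score(H))          # [N, K+1, 1] adaptive weights
        return (s * H).sum(dim=1)                 # weighted fusion of all hops

model = AdaptiveDeepGNN(in_dim=8, hidden=16, num_classes=3, hops=5)
A = torch.eye(7)                                  # trivial graph for a shape check
print(model(torch.randn(7, 8), A).shape)          # torch.Size([7, 3])
```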
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Subgraph Neural Networks [14.222887950206662]
We introduce SubGNN, a subgraph neural network to learn disentangled subgraph representations.
SubGNN performs exceptionally well on challenging biomedical datasets.
arXiv Detail & Related papers (2020-06-18T13:54:30Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
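The contrastive core of such pre-training schemes can be sketched with a standard InfoNCE loss, where two augmented views of the same (sub)graph are positives and the rest of the batch are negatives; the temperature and the upstream subgraph encoder are assumed, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    """z1, z2: [B, D] embeddings of two views of the same B subgraphs."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # [B, B] similarity matrix
    targets = torch.arange(z1.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random "encoder outputs" standing in for subgraph embeddings.
loss = info_nce(torch.randn(32, 64), torch.randn(32, 64))
print(float(loss))
```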
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.