An Analysis of Attentive Walk-Aggregating Graph Neural Networks
- URL: http://arxiv.org/abs/2110.02667v1
- Date: Wed, 6 Oct 2021 11:41:12 GMT
- Title: An Analysis of Attentive Walk-Aggregating Graph Neural Networks
- Authors: Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Yingyu Liang
- Abstract summary: Graph neural networks (GNNs) have been shown to possess strong representation power.
We propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes.
- Score: 34.866935881726256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been shown to possess strong representation
power, which can be exploited for downstream prediction tasks on
graph-structured data, such as molecules and social networks. They typically
learn representations by aggregating information from the K-hop neighborhood of
individual vertices or from the enumerated walks in the graph. Prior studies
have demonstrated the effectiveness of incorporating weighting schemes into
GNNs; however, this has been primarily limited to K-hop neighborhood GNNs so
far. In this paper, we aim to extensively analyze the effect of incorporating
weighting schemes into walk-aggregating GNNs. Towards this objective, we
propose a novel GNN model, called AWARE, that aggregates information about the
walks in the graph using attention schemes in a principled way to obtain an
end-to-end supervised learning method for graph-level prediction tasks. We
perform theoretical, empirical, and interpretability analyses of AWARE. Our
theoretical analysis provides the first provable guarantees for weighted GNNs,
demonstrating how the graph information is encoded in the representation, and
how the weighting schemes in AWARE affect the representation and learning
performance. We empirically demonstrate the superiority of AWARE over prior
baselines in the domains of molecular property prediction (61 tasks) and social
networks (4 tasks). Our interpretation study illustrates that AWARE can
successfully learn to capture the important substructures of the input graph.
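To make the walk-attention idea concrete, here is a minimal NumPy sketch. It is a simplified, hypothetical rendering, not the paper's exact parameterization: it enumerates the walks of a small graph, embeds each walk from its vertex features, scores the walks with a single learned attention vector, and aggregates them into a graph-level representation.

    # Minimal, hypothetical sketch of attention-weighted walk aggregation.
    # The walk enumeration, embedding, and attention here are simplified
    # stand-ins for AWARE's learned components.
    import numpy as np

    def enumerate_walks(adj, length):
        """All walks with `length` edges, as tuples of vertex indices."""
        walks = [(v,) for v in range(adj.shape[0])]
        for _ in range(length):
            walks = [w + (u,) for w in walks
                     for u in range(adj.shape[0]) if adj[w[-1], u]]
        return walks

    def graph_embedding(adj, X, w_att, length=2):
        """Softmax-attention-weighted sum of per-walk embeddings."""
        walks = enumerate_walks(adj, length)
        embs = np.stack([X[list(w)].mean(axis=0) for w in walks])  # one row per walk
        scores = embs @ w_att                                      # attention logits
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                                       # softmax over walks
        return alpha @ embs                                        # graph-level vector

    # Toy usage: a 3-vertex path graph with 4-dimensional vertex features.
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    X = np.random.randn(3, 4)
    print(graph_embedding(adj, X, w_att=np.random.randn(4)))

In the full model the attention weights are trained end to end with the downstream supervised objective, which is what the theoretical analysis in the paper studies.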
Related papers
- Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that previous methods overlook.
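The decomposition idea is easiest to see on a purely linear layer, where contributions separate exactly. The sketch below is a hypothetical toy analogue under that assumption, not DEGREE's algorithm (which handles nonlinear GNNs):

    # Toy analogue of decomposition-based attribution (not DEGREE itself):
    # with a linear mean-aggregation layer, hidden states split exactly into
    # a part caused by a chosen vertex subset and a part caused by the rest.
    import numpy as np

    def propagate(adj, H, W):
        """One linear GNN layer: mean over neighbors, then a linear map."""
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
        return (adj @ H) / deg @ W

    def contributions(adj, X, W, target):
        """Propagate target/background feature parts separately; linearity
        guarantees the two parts sum to the full forward pass."""
        mask = np.zeros((X.shape[0], 1))
        mask[target] = 1.0
        h_t = propagate(adj, X * mask, W)        # contribution of `target`
        h_b = propagate(adj, X * (1 - mask), W)  # contribution of the rest
        return h_t.sum(), h_b.sum()              # additive shares of a sum readout

    adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
    X, W = np.random.randn(3, 4), np.random.randn(4, 2)
    print(contributions(adj, X, W, target=[0]))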
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments on 13 datasets and observe that GRAPHRETRIEVAL achieves substantial improvements over existing GNNs.
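A minimal sketch of how such a model-agnostic retrieval step can work follows; the function names and the blending rule here are illustrative assumptions, not the paper's exact method:

    # Illustrative sketch of a retrieval-enhanced prediction step (names and
    # the blending rule are assumptions, not GRAPHRETRIEVAL's exact method).
    import numpy as np

    def retrieval_enhanced_predict(query_emb, base_pred, train_embs,
                                   train_labels, k=3, alpha=0.5):
        """Blend the base GNN prediction with a k-NN vote over training graphs."""
        sims = train_embs @ query_emb / (
            np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9)
        topk = np.argsort(-sims)[:k]                  # most similar training graphs
        retrieved = train_labels[topk].mean(axis=0)   # label vote from neighbors
        return alpha * base_pred + (1 - alpha) * retrieved

    rng = np.random.default_rng(0)
    train_embs = rng.normal(size=(100, 16))           # precomputed graph embeddings
    train_labels = rng.integers(0, 2, size=(100, 1)).astype(float)
    print(retrieval_enhanced_predict(rng.normal(size=16), np.array([0.7]),
                                     train_embs, train_labels))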
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization [41.867290324754094]
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs.
In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs.
arXiv Detail & Related papers (2020-09-11T02:31:18Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. However, stacking many graph convolution layers tends to degrade performance, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
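The underlying decoupling idea (transform features once, propagate over many hops, then combine the hops adaptively) can be sketched as follows. This is a simplified illustration with fixed hop scores; DAGNN learns the combination weights end to end.

    # Simplified sketch of decoupled transformation and propagation with an
    # adaptive (here softmax-weighted) combination over hops. DAGNN learns
    # these weights; fixed `hop_scores` are an assumption for illustration.
    import numpy as np

    def dagnn_style_forward(adj, X, W, hop_scores, K=10):
        """Transform once, propagate K hops, then weight and sum the hops."""
        A_hat = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1)
        H = X @ W                                  # single feature transform
        hops, cur = [H], H                         # hop 0: no propagation
        for _ in range(K):
            cur = A_hat @ cur                      # one more propagation step
            hops.append(cur)
        w = np.exp(hop_scores - hop_scores.max())
        w /= w.sum()                               # softmax over K+1 hop weights
        return sum(wk * Hk for wk, Hk in zip(w, hops))

    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    X, W = np.random.randn(3, 4), np.random.randn(4, 2)
    print(dagnn_style_forward(adj, X, W, hop_scores=np.zeros(11)))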
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
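Pre-training frameworks of this kind typically optimize a contrastive (InfoNCE-style) objective over two views of the same subgraph instance. The sketch below shows such a loss in isolation; the subgraph sampling and the GNN encoder are abstracted away, which is an assumption for illustration.

    # Sketch of an InfoNCE-style contrastive pre-training loss, as used by
    # frameworks like GCC; subgraph sampling and the GNN encoder are
    # abstracted away here (illustrative assumption).
    import numpy as np

    def info_nce(z1, z2, tau=0.07):
        """z1[i], z2[i]: embeddings of two views of the same subgraph i."""
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
        logits = z1 @ z2.T / tau                        # pairwise similarities
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))              # diagonal = positive pairs

    rng = np.random.default_rng(0)
    z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
    print(info_nce(z1, z2))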
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.