Personalized PageRank Graph Attention Networks
- URL: http://arxiv.org/abs/2205.14259v1
- Date: Fri, 27 May 2022 22:36:47 GMT
- Title: Personalized PageRank Graph Attention Networks
- Authors: Julie Choi
- Abstract summary: A graph neural network (GNN) is a framework to learn from graph-structured data.
GNNs typically only use the information of a very limited neighborhood for each node to avoid over-smoothing.
In this work, we incorporate the limit distribution of Personalized PageRank (PPR) into graph attention networks (GATs) to capture information from a larger neighborhood without introducing over-smoothing.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a rising interest in graph neural networks (GNNs) for
representation learning over the past few years. GNNs provide a general and
efficient framework to learn from graph-structured data. However, GNNs
typically only use the information of a very limited neighborhood for each node
to avoid over-smoothing. A larger neighborhood would be desirable to provide
the model with more information. In this work, we incorporate the limit
distribution of Personalized PageRank (PPR) into graph attention networks
(GATs) to capture information from a larger neighborhood without introducing
over-smoothing. Intuitively, message aggregation based on Personalized PageRank
corresponds to infinitely many neighborhood aggregation layers. We show that
our models outperform a variety of baseline models for four widely used
benchmark datasets. Our implementation is publicly available online.
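As a concrete illustration (not the authors' released implementation), the PPR limit distribution referenced above has the standard closed form $\pi = \alpha (I - (1-\alpha)\hat{A})^{-1}$, and aggregating node features with this dense matrix corresponds to applying infinitely many neighborhood-aggregation layers at once. A minimal NumPy sketch with illustrative names and a toy graph:

    import numpy as np

    def ppr_limit_matrix(A, alpha=0.15):
        """Closed-form Personalized PageRank limit distribution.
        A: (n, n) adjacency; alpha: teleport (restart) probability.
        Row i approximates the PPR vector of node i (symmetric-normalized
        variant, as in PPNP-style models)."""
        n = A.shape[0]
        A_hat = A + np.eye(n)                     # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # symmetric normalization
        return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_norm)

    # Aggregating features H with this matrix acts like infinitely many
    # neighborhood-aggregation layers applied in one step.
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    H = np.random.randn(3, 4)                     # toy node features
    H_agg = ppr_limit_matrix(A) @ H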
Related papers
- Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs either by deepening or widening suffers from the prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
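The abstract does not spell out the recipe, but a "model soup" conventionally means averaging the weights of several independently trained copies of one architecture into a single model. A hedged sketch of that general idea (the helper below is hypothetical, not the paper's code):

    import numpy as np

    def model_soup(weight_sets):
        """Uniform soup: average the parameters of independently trained
        copies of one architecture into a single set of weights.
        weight_sets: list of dicts mapping parameter name -> array."""
        return {k: np.mean([w[k] for w in weight_sets], axis=0)
                for k in weight_sets[0]}

    # Toy example: three "trained" parameter sets for one GNN layer.
    candidates = [{"W": np.random.randn(8, 8), "b": np.random.randn(8)}
                  for _ in range(3)]
    souped = model_soup(candidates)  # one model; no inference-time ensemble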
arXiv Detail & Related papers (2023-06-18T03:33:46Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to enrich training graph structures with techniques like graph augmentations.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
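As a hedged sketch of what a mixture of experts looks like in a message-passing setting (illustrative only; the paper's gating and expert design may differ), a per-node softmax gate can softly select among several aggregation experts:

    import numpy as np

    def moe_gnn_layer(A, H, experts, gate_W):
        """One mixture-of-experts aggregation step.
        A: (n, n) adjacency; H: (n, d) features;
        experts: list of (d, d) weights; gate_W: (d, E) gating weights."""
        logits = H @ gate_W
        gates = np.exp(logits - logits.max(axis=1, keepdims=True))
        gates /= gates.sum(axis=1, keepdims=True)        # (n, E) softmax
        outs = np.stack([A @ H @ W for W in experts], axis=-1)  # (n, d, E)
        return (outs * gates[:, None, :]).sum(axis=-1)   # gated mixture

    A = (np.random.rand(4, 4) > 0.5).astype(float)       # toy graph
    H = np.random.randn(4, 3)
    out = moe_gnn_layer(A, H, [np.random.randn(3, 3) for _ in range(2)],
                        np.random.randn(3, 2))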
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- Improving Graph Neural Networks at Scale: Combining Approximate PageRank and CoreRank [20.948992161528466]
We propose a scalable solution to propagate information on Graph Neural Networks (GNNs).
The CorePPR model uses a learnable convex combination of the approximate personalised PageRank and the CoreRank to diffuse multi-hop neighbourhood information in GNNs.
We demonstrate that CorePPR outperforms PPRGo on large graphs where selecting the most influential nodes is particularly relevant for scalability.
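The stated core of CorePPR is a learnable convex combination of two diffusion operators. A minimal sketch under that description (the parameterization below is illustrative, not necessarily the authors' exact one):

    import numpy as np

    def coreppr_diffusion(ppr, corerank, theta):
        """Mix two (n, n) diffusion matrices with a single learnable
        scalar theta; the sigmoid keeps the combination convex."""
        lam = 1.0 / (1.0 + np.exp(-theta))
        return lam * ppr + (1.0 - lam) * corerank

    n = 5
    ppr = np.random.rand(n, n); ppr /= ppr.sum(1, keepdims=True)
    core = np.random.rand(n, n); core /= core.sum(1, keepdims=True)
    D = coreppr_diffusion(ppr, core, theta=0.3)   # row-stochastic mixture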
arXiv Detail & Related papers (2022-11-08T13:51:49Z)
- Transforming PageRank into an Infinite-Depth Graph Neural Network [1.0965065178451106]
Popular graph neural networks are shallow models, despite the success of very deep architectures in other application domains of deep learning.
We build on the close connection between GNNs and PageRank, where personalized PageRank adds a personalization vector.
Adopting this idea, we propose the Personalized PageRank Graph Neural Network (PPRGNN), which extends the graph convolutional network to an infinite-depth model.
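The precise PPRGNN recurrence is not reproduced in this summary; as a hedged illustration, the standard PPR-style propagation that an "infinite-depth" model converges to is the fixed point of $Z \leftarrow (1-\alpha)\hat{A}Z + \alpha H$:

    import numpy as np

    def ppr_fixed_point(A_norm, H, alpha=0.1, iters=100):
        """Iterate Z <- (1 - alpha) * A_norm @ Z + alpha * H. As iters
        grows this converges to alpha * (I - (1 - alpha) * A_norm)^-1 @ H,
        i.e. infinitely deep propagation reached in finite time."""
        Z = H.copy()
        for _ in range(iters):
            Z = (1.0 - alpha) * A_norm @ Z + alpha * H
        return Z

    A_norm = np.full((3, 3), 1.0 / 3.0)    # toy row-stochastic matrix
    Z = ppr_fixed_point(A_norm, np.random.randn(3, 2))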
arXiv Detail & Related papers (2022-07-01T23:17:40Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such features in standard supervised learning settings with IID (non-graph) data, however, are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
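One simple way to read "fuses graph-aware propagation with arbitrary models intended for IID data" (a sketch of the general pattern, not the paper's full stacking procedure): take the class probabilities of any tabular model trained on node features and smooth them over the graph:

    import numpy as np

    def stack_with_propagation(A_norm, base_preds, hops=2):
        """Smooth the (n, c) predictions of any IID tabular model over
        the graph with a few steps of graph-aware propagation."""
        Z = base_preds
        for _ in range(hops):
            Z = A_norm @ Z
        return Z

    # base_preds could come from, e.g., gradient-boosted trees trained
    # on the raw node features alone.
    A_norm = np.full((4, 4), 0.25)
    preds = stack_with_propagation(A_norm, np.random.rand(4, 3))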
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Deep Ensembles for Graphs with Higher-order Dependencies [13.164412455321907]
Graph neural networks (GNNs) continue to achieve state-of-the-art performance on many graph learning tasks.
We show that the tendency of traditional graph representations to underfit each node's neighborhood causes existing GNNs to generalize poorly.
We propose a novel Deep Graph Ensemble (DGE) which captures neighborhood variance by training an ensemble of GNNs on different neighborhood subspaces of the same node.
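A hedged sketch of the ensemble idea as described (the member models and masks below are toy stand-ins): train each member on a different subset of every node's neighborhood and average the predictions:

    import numpy as np

    def dge_predict(A, H, models, neighbor_masks):
        """Each ensemble member sees a different subset of each node's
        neighborhood (an (n, n) 0/1 mask on the adjacency) and the
        ensemble averages the members' predictions."""
        preds = [m(A * mask, H) for m, mask in zip(models, neighbor_masks)]
        return np.mean(preds, axis=0)

    # Toy usage: two "GNNs" that are simple one-hop aggregators.
    n, d = 4, 3
    A, H = np.ones((n, n)) - np.eye(n), np.random.randn(n, d)
    models = [lambda A, H: A @ H, lambda A, H: A @ H / n]
    masks = [(np.random.rand(n, n) > 0.5).astype(float) for _ in range(2)]
    out = dge_predict(A, H, models, masks)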
arXiv Detail & Related papers (2022-05-27T14:01:08Z)
- IV-GNN: Interval Valued Data Handling Using Graph Neural Network [12.651341660194534]
A Graph Neural Network (GNN) is a powerful tool to perform standard machine learning on graphs.
This article proposes an Interval-Valued Graph Neural Network, a novel GNN model where, for the first time, we relax the restriction of the feature space being countable.
Our model is much more general than existing models as any countable set is always a subset of the universal set $\mathbb{R}^n$, which is uncountable.
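As an illustrative fragment (not the IV-GNN architecture itself), an interval-valued feature can be carried as a pair of lower and upper bounds, and neighborhood aggregation with nonnegative weights preserves the interval ordering:

    import numpy as np

    def aggregate_intervals(A_norm, lo, hi):
        """Endpoint-wise aggregation of interval-valued features.
        lo, hi: (n, d) lower/upper bounds; nonnegative weights in
        A_norm guarantee the output still satisfies lo <= hi."""
        return A_norm @ lo, A_norm @ hi

    lo = np.array([[0.0, 1.0], [2.0, 3.0]])
    hi = lo + 0.5                               # valid intervals
    A_norm = np.array([[0.5, 0.5], [0.5, 0.5]])
    lo_agg, hi_agg = aggregate_intervals(A_norm, lo, hi)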
arXiv Detail & Related papers (2021-11-17T15:37:09Z)
- Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN) to overcome these limitations.
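A hedged sketch of long-short distance aggregation as described (the attention weights here are fixed scalars for illustration; LSDAN learns them): aggregate over increasing powers of the adjacency and mix the results:

    import numpy as np

    def lsdan_layer(A_norm, H, Ws, att):
        """Aggregate over A, A^2, ..., A^K (short to long distances)
        and mix the K results with attention weights."""
        outs, Ak = [], np.eye(A_norm.shape[0])
        for W in Ws:
            Ak = Ak @ A_norm                  # next adjacency power
            outs.append(Ak @ H @ W)
        att = np.asarray(att, dtype=float)
        att = att / att.sum()                 # normalize the mixture
        return sum(a * o for a, o in zip(att, outs))

    A_norm = np.full((3, 3), 1.0 / 3.0)
    H = np.random.randn(3, 2)
    out = lsdan_layer(A_norm, H, [np.eye(2)] * 3, att=[0.5, 0.3, 0.2])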
arXiv Detail & Related papers (2021-03-08T11:43:37Z)
- From Local Structures to Size Generalization in Graph Neural Networks [53.3202754533658]
Graph neural networks (GNNs) can process graphs of different sizes.
Their ability to generalize across sizes, specifically from small to large graphs, is still not well understood.
arXiv Detail & Related papers (2020-10-17T19:36:54Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in a large graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
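The key trick PPRGo builds on, sketched under assumptions (a dense power iteration here; the paper uses a faster push-based approximation): approximate one node's PPR vector and keep only its top-k entries, so each node aggregates from a small set of influential nodes:

    import numpy as np

    def topk_ppr_row(A_rw, node, alpha=0.25, iters=50, k=2):
        """Approximate one node's PPR vector by power iteration, then
        keep only the top-k entries (sparsification in the spirit of
        PPRGo; illustrative, not the paper's algorithm)."""
        n = A_rw.shape[0]
        e = np.zeros(n)
        e[node] = 1.0
        p = e.copy()
        for _ in range(iters):
            p = (1.0 - alpha) * A_rw.T @ p + alpha * e
        keep = np.argsort(p)[-k:]             # indices of top-k mass
        sparse = np.zeros(n)
        sparse[keep] = p[keep]
        return sparse / sparse.sum()

    A_rw = np.array([[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]])  # row-stochastic
    ppr_vec = topk_ppr_row(A_rw, node=0)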
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
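In code, that unifying formulation amounts to a matrix of per-edge coefficients (a minimal sketch; the name Phi is an assumption, not the paper's notation):

    import numpy as np

    def edgenet_layer(Phi, H, W):
        """Phi: (n, n) per-edge coefficients (zero where there is no
        edge). Fixing Phi to the normalized adjacency recovers a GCN;
        computing it with attention recovers a GAT; the EdgeNet view
        lets every entry vary freely."""
        return Phi @ H @ W   # one linear, local aggregation step

    Phi = np.array([[0.0, 0.7], [0.3, 0.0]])  # toy learned edge weights
    out = edgenet_layer(Phi, np.random.randn(2, 3), np.random.randn(3, 3))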
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.