Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
- URL: http://arxiv.org/abs/2206.04216v1
- Date: Thu, 9 Jun 2022 01:43:49 GMT
- Title: Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
- Authors: Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, Hyunwoo J. Kim
- Abstract summary: Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data.
We propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from an adjacency matrix and estimate overlapped neighborhoods for link prediction.
- Score: 23.545059901853815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have been widely applied to various fields for
learning over graph-structured data. They have shown significant improvements
over traditional heuristic methods in various tasks such as node classification
and graph classification. However, since GNNs heavily rely on smoothed node
features rather than graph structure, they often perform worse than simple
heuristic methods in link prediction, where structural information,
e.g., overlapped neighborhoods, degrees, and shortest paths, is crucial. To
address this limitation, we propose Neighborhood Overlap-aware Graph Neural
Networks (Neo-GNNs) that learn useful structural features from an adjacency
matrix and estimate overlapped neighborhoods for link prediction. Our Neo-GNNs
generalize neighborhood overlap-based heuristic methods and handle overlapped
multi-hop neighborhoods. Our extensive experiments on Open Graph Benchmark
datasets (OGB) demonstrate that Neo-GNNs consistently achieve state-of-the-art
performance in link prediction. Our code is publicly available at
https://github.com/seongjunyun/Neo_GNNs.
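For readers who want the overlap heuristics concrete, here is a minimal Python sketch of the family of scores that Neo-GNNs generalize: Common Neighbors, Adamic-Adar, and a weighted multi-hop overlap. The toy graph, function names, and the hand-set degree weighting (a stand-in for the structural features that Neo-GNNs learn from the adjacency matrix) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (assumed for illustration).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

def common_neighbors(A, u, v):
    """Classic overlap heuristic: |N(u) ∩ N(v)| = (A @ A)[u, v]."""
    return (A @ A)[u, v]

def adamic_adar(A, u, v):
    """Overlap heuristic that down-weights high-degree common neighbors.
    A common neighbor of u and v always has degree >= 2, so log(deg) > 0."""
    deg = A.sum(axis=1)
    shared = np.flatnonzero(A[u] * A[v])
    return sum(1.0 / np.log(deg[w]) for w in shared)

def weighted_multihop_overlap(A, u, v, hops=2, beta=0.5):
    """Generalized overlap in the spirit of Neo-GNNs (illustrative, not the
    paper's exact formulation): each intermediate node w contributes a
    weight g(w) derived from the adjacency matrix, and overlaps of up to
    `hops` hops are mixed with a decay factor beta. Neo-GNNs learn g;
    here an Adamic-Adar-like weight is hand-set as a stand-in."""
    deg = A.sum(axis=1)
    g = np.where(deg > 1, 1.0 / np.log(np.maximum(deg, 2)), 0.0)
    Ag = A * g[None, :]            # edge (i, w) carries the weight of node w
    score, P = 0.0, np.eye(A.shape[0])
    for ell in range(1, hops + 1):
        P = P @ Ag                 # P[u, w]: weighted ell-hop paths u -> w
        score += beta ** (ell - 1) * (P @ A)[u, v]
    return score

print(common_neighbors(A, 0, 3))           # 2.0 (nodes 1 and 2)
print(adamic_adar(A, 0, 3))                # ~1.82 = 2 / log(3)
print(weighted_multihop_overlap(A, 0, 3))  # the 1-hop term equals Adamic-Adar
```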
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S²GNNs).
S²GNNs combine spatially and spectrally parametrized graph filters.
We show that S²GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Learning Scalable Structural Representations for Link Prediction with Bloom Signatures [39.63963077346406]
Graph neural networks (GNNs) are known to perform sub-optimally on link prediction tasks.
We propose to learn structural link representations by augmenting the message-passing framework of GNNs with Bloom signatures.
Our proposed model achieves comparable or better performance than existing edge-wise GNN models (a sketch of the Bloom-signature idea follows this entry).
arXiv Detail & Related papers (2023-12-28T02:21:40Z)
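As a side note on the Bloom-signature entry above: the core idea can be sketched by hashing each node's neighbor set into a fixed-width bit array and reading neighborhood overlap off bitwise operations. The signature width, hash count, and the crude estimator below are assumptions for illustration, not the paper's construction.

```python
import hashlib

def bloom_signature(neighbors, num_bits=256, num_hashes=3):
    """Hash a node's neighbor set into a fixed-width bit signature."""
    sig = 0
    for w in neighbors:
        for k in range(num_hashes):
            h = hashlib.blake2b(f"{w}:{k}".encode(), digest_size=8)
            sig |= 1 << (int.from_bytes(h.digest(), "big") % num_bits)
    return sig

def approx_common_neighbors(sig_u, sig_v, num_hashes=3):
    """Crude overlap proxy: shared set bits / hashes per element.
    (Biased by hash collisions; real estimators correct for this.)"""
    return bin(sig_u & sig_v).count("1") / num_hashes

# Toy usage: neighbor sets sharing two nodes (3 and 4).
su, sv = bloom_signature({1, 2, 3, 4}), bloom_signature({3, 4, 5})
print(approx_common_neighbors(su, sv))   # ~2.0 for wide enough signatures
```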
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Geodesic Graph Neural Network for Efficient Graph Representation Learning [34.047527874184134]
We propose an efficient GNN framework called Geodesic GNN (GDGNN).
It injects conditional relationships between nodes into the model without labeling.
Conditioned on the geodesic representations, GDGNN is able to generate node, link, and graph representations that carry much richer structural information than plain GNNs.
arXiv Detail & Related papers (2022-10-06T02:02:35Z)
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Customizing Graph Neural Networks using Path Reweighting [23.698877985105312]
We propose a novel GNN solution, namely Customized Graph Neural Network with Path Reweighting (CustomGNN for short).
Specifically, CustomGNN can automatically learn the high-level semantics of specific downstream tasks to highlight semantically relevant paths and to filter out task-irrelevant noise in a graph.
In experiments with the node classification task, CustomGNN achieves state-of-the-art accuracies on three standard graph datasets and four large graph datasets.
arXiv Detail & Related papers (2021-06-21T05:38:26Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Higher-Order Explanations of Graph Neural Networks via Relevant Walks [3.1510406584101776]
Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data.
In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions.
We extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
arXiv Detail & Related papers (2020-06-05T17:59:14Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs). A minimal sketch of the edge-varying idea follows below.
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
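The edge-varying idea above is compact enough to sketch: every node weighs each neighbor with its own coefficient, and fixing those coefficients recovers familiar layers. The shapes, the random toy graph, and the GCN-style choice of coefficients are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def edge_varying_layer(A, X, Phi, W):
    """One aggregation step in the spirit of EdgeNets: node i weighs
    neighbor j by its own coefficient Phi[i, j], then applies a shared
    feature transform W. Phi is masked to the graph support A."""
    return (Phi * A) @ X @ W

# Toy graph and features (assumed for illustration).
rng = np.random.default_rng(0)
n, d = 5, 4
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                                   # undirected support
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))

# GCN-like special case: fixed symmetric degree normalization.
deg = np.maximum(A.sum(axis=1), 1.0)
Phi_gcn = 1.0 / np.sqrt(np.outer(deg, deg))
print(edge_varying_layer(A, X, Phi_gcn, W).shape)   # (5, 4)

# A GAT-like layer would instead learn Phi[i, j] per edge via attention
# over (x_i, x_j); an unconstrained Phi gives the general EdgeNet case.
```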
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.