Link Prediction without Graph Neural Networks
- URL: http://arxiv.org/abs/2305.13656v1
- Date: Tue, 23 May 2023 03:59:21 GMT
- Title: Link Prediction without Graph Neural Networks
- Authors: Zexi Huang, Mert Kosan, Arlei Silva, Ambuj Singh
- Abstract summary: Link prediction is a fundamental task in many graph applications.
Graph Neural Networks (GNNs) have become the predominant framework for link prediction.
We propose Gelato, a novel topology-centric framework that applies a topological heuristic to a graph enhanced by attribute information via graph learning.
- Score: 7.436429318051601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction, which consists of predicting edges based on graph features,
is a fundamental task in many graph applications. As with several related
problems, Graph Neural Networks (GNNs), which are based on an attribute-centric
message-passing paradigm, have become the predominant framework for link
prediction. GNNs have consistently outperformed traditional topology-based
heuristics, but what contributes to their performance? Are there simpler
approaches that achieve comparable or better results? To answer these
questions, we first identify important limitations in how GNN-based link
prediction methods handle the intrinsic class imbalance of the problem -- due
to the graph sparsity -- in their training and evaluation. Moreover, we propose
Gelato, a novel topology-centric framework that applies a topological heuristic
to a graph enhanced by attribute information via graph learning. Our model is
trained end-to-end with an N-pair loss on an unbiased training set to address
class imbalance. Experiments show that Gelato is 145% more accurate, trains 11
times faster, infers 6,000 times faster, and has less than half of the
trainable parameters compared to state-of-the-art GNNs for link prediction.
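The abstract names two concrete ingredients: a topological heuristic applied to a graph whose edges are enhanced by attribute information, and end-to-end training with an N-pair loss that pits each positive edge against many non-edges. Below is a minimal sketch of what those pieces could look like; it is not the authors' implementation. The class name `GelatoSketch`, the `edge_mlp` attribute-based reweighting, and the weighted common-neighbors stand-in heuristic are illustrative assumptions; only the N-pair loss follows its standard formulation (Sohn, 2016).

```python
# Illustrative sketch only (not the paper's released code); assumed names:
# GelatoSketch, edge_mlp, and the weighted common-neighbors heuristic.
import torch
import torch.nn as nn


class GelatoSketch(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        # Graph learning step: score each node pair from its endpoints' attributes.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def enhanced_adjacency(self, adj, x):
        """Blend the observed adjacency with attribute-derived weights (assumed rule)."""
        n = adj.size(0)
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        learned = torch.sigmoid(self.edge_mlp(pair)).squeeze(-1)
        return adj + learned  # the actual combination rule is a design choice

    def score(self, adj, x, pairs):
        """Weighted common-neighbors as a stand-in topological heuristic."""
        w = self.enhanced_adjacency(adj, x)
        cn = w @ w  # entry (i, j) sums edge weights over shared neighbors
        return cn[pairs[:, 0], pairs[:, 1]]


def n_pair_loss(pos_scores, neg_scores):
    """N-pair loss: one positive edge vs. N non-edges per example.

    pos_scores: shape (B,); neg_scores: shape (B, N).
    L = mean_b log(1 + sum_i exp(s_neg[b, i] - s_pos[b]))
    """
    gaps = neg_scores - pos_scores.unsqueeze(1)
    return torch.log1p(torch.exp(gaps).sum(dim=1)).mean()


# Toy usage: 6 nodes, 4 features, one positive pair vs. three non-edges.
model = GelatoSketch(num_features=4)
adj = (torch.rand(6, 6) > 0.7).float()
adj = torch.triu(adj, 1); adj = adj + adj.T   # symmetric, no self-loops
x = torch.randn(6, 4)
pos = model.score(adj, x, torch.tensor([[0, 1]]))
neg = model.score(adj, x, torch.tensor([[0, 2], [0, 3], [0, 4]])).unsqueeze(0)
n_pair_loss(pos, neg).backward()
```

The paper's actual heuristic and its exact attribute-enhancement rule may differ; the sketch only conveys the topology-centric, end-to-end-trainable shape of the pipeline described in the abstract.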
Related papers
- Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Robust Graph Neural Network based on Graph Denoising [10.564653734218755]
Graph Neural Networks (GNNs) have emerged as a prominent alternative for addressing learning problems on non-Euclidean datasets.
This work proposes a robust implementation of GNNs that explicitly accounts for the presence of perturbations in the observed topology.
arXiv Detail & Related papers (2023-12-11T17:43:57Z)
- Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks [10.794305560114903]
Self-Prompt is a prompting framework for graphs based on the model and data themselves.
We introduce asymmetric graph contrastive learning as a pretext task to address heterophily and align the objectives of the pretext and downstream tasks.
We conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority.
arXiv Detail & Related papers (2023-10-16T12:58:04Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- How Neural Processes Improve Graph Link Prediction [35.652234989200956]
We propose a meta-learning approach with graph neural networks for link prediction: Neural Processes for Graph Neural Networks (NPGNN).
NPGNN can perform both transductive and inductive learning tasks and adapt to patterns in a large new graph after training with a small subgraph.
arXiv Detail & Related papers (2021-09-30T07:35:13Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.