Linkless Link Prediction via Relational Distillation
- URL: http://arxiv.org/abs/2210.05801v3
- Date: Mon, 5 Jun 2023 14:52:42 GMT
- Title: Linkless Link Prediction via Relational Distillation
- Authors: Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V.
Chawla, Neil Shah, Tong Zhao
- Abstract summary: Graph Neural Networks (GNNs) have shown exceptional performance in the task of link prediction.
Despite their effectiveness, the high latency brought by non-trivial neighborhood data dependency limits GNNs in practical deployments.
- Score: 24.928349760334413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have shown exceptional performance in the task
of link prediction. Despite their effectiveness, the high latency brought by
non-trivial neighborhood data dependency limits GNNs in practical deployments.
Conversely, the known efficient MLPs are much less effective than GNNs due to
the lack of relational knowledge. In this work, to combine the advantages of
GNNs and MLPs, we start with exploring direct knowledge distillation (KD)
methods for link prediction, i.e., predicted logit-based matching and node
representation-based matching. Upon observing direct KD analogs do not perform
well for link prediction, we propose a relational KD framework, Linkless Link
Prediction (LLP), to distill knowledge for link prediction with MLPs. Unlike
simple KD methods that match independent link logits or node representations,
LLP distills relational knowledge that is centered around each (anchor) node to
the student MLP. Specifically, we propose rank-based matching and
distribution-based matching strategies that complement each other. Extensive
experiments demonstrate that LLP boosts the link prediction performance of MLPs
with significant margins, and even outperforms the teacher GNNs on 7 out of 8
benchmarks. LLP also achieves a 70.68x speedup in link prediction inference
compared to GNNs on the large-scale OGB dataset.
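To make the two matching strategies concrete, below is a minimal PyTorch sketch (not the authors' code) of LLP-style relational distillation centered on anchor nodes. It assumes dot-product link scoring, approximates rank-based matching with a margin ranking loss over pairs of context nodes, and distribution-based matching with a temperature-scaled KL divergence over each anchor's context scores; the function and parameter names (link_scores, llp_distill_loss, margin, temperature, alpha, beta) are illustrative and not taken from the paper.

```python
# Hedged sketch of LLP-style relational distillation (assumptions noted above).
import torch
import torch.nn.functional as F

def link_scores(emb, anchors, contexts):
    """Score each (anchor, context) pair as a dot product of node embeddings."""
    return (emb[anchors].unsqueeze(1) * emb[contexts]).sum(-1)  # [B, C]

def llp_distill_loss(t_emb, s_emb, anchors, contexts,
                     margin=0.1, temperature=1.0, alpha=1.0, beta=1.0):
    """Relational distillation around each anchor node.

    t_emb: teacher GNN embeddings [N, d] (frozen)
    s_emb: student MLP embeddings  [N, d] (computed from node features only)
    anchors:  [B] anchor node ids
    contexts: [B, C] sampled context node ids per anchor
    """
    with torch.no_grad():
        t_scores = link_scores(t_emb, anchors, contexts)  # teacher link scores
    s_scores = link_scores(s_emb, anchors, contexts)      # student link scores

    # Rank-based matching: for random pairs of context nodes (i, j), push the
    # student to preserve the teacher's ordering via a margin ranking loss.
    B, C = s_scores.shape
    i = torch.randint(C, (B, C))
    j = torch.randint(C, (B, C))
    target = torch.sign(t_scores.gather(1, i) - t_scores.gather(1, j))
    rank_loss = F.margin_ranking_loss(
        s_scores.gather(1, i), s_scores.gather(1, j), target, margin=margin)

    # Distribution-based matching: match the teacher's softmax distribution
    # over each anchor's context scores with a KL divergence.
    dist_loss = F.kl_div(
        F.log_softmax(s_scores / temperature, dim=-1),
        F.softmax(t_scores / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2

    return alpha * rank_loss + beta * dist_loss
```

In practice this distillation loss would be combined with the ordinary supervised link prediction loss when training the student MLP; at inference time the MLP scores links from node features alone, which is what removes the neighborhood data dependency.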
Related papers
- Heuristic Methods are Good Teachers to Distill MLPs for Graph Link Prediction [61.70012924088756]
Distilling Graph Neural Network (GNN) teachers into Multi-Layer Perceptron (MLP) students has emerged as an effective approach to achieving strong performance.
However, existing distillation methods only use standard GNNs and overlook alternative teachers such as specialized models for link prediction (GNN4LP) and heuristic methods (e.g., common neighbors).
This paper first explores the impact of different teachers in GNN-to-MLP distillation and finds that stronger teachers do not always produce stronger students, while weaker heuristic methods can teach MLPs to near-GNN performance with drastically reduced training costs.
arXiv Detail & Related papers (2025-04-08T16:35:11Z) - Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods [16.428742189544955]
This paper explores the ability of Graph Neural Networks (GNNs) in learning various forms of information for link prediction.
Our analysis reveals that GNNs cannot effectively learn structural information related to the number of common neighbors between two nodes.
Also, our extensive experiments indicate that trainable node embeddings can improve the performance of GNN-based link prediction models.
arXiv Detail & Related papers (2024-11-22T03:38:20Z) - Teaching MLPs to Master Heterogeneous Graph-Structured Knowledge for Efficient and Accurate Inference [53.38082028252104]
We introduce HG2M and HG2M+ to combine both HGNNs' superior performance and MLPs' efficient inference.
HG2M directly trains students with node features as input and soft labels from teacher HGNNs as targets.
HG2Ms demonstrate a 379.24x speedup in inference over HGNNs on the large-scale IGB-3M-19 dataset.
arXiv Detail & Related papers (2024-11-21T11:39:09Z) - A Teacher-Free Graph Knowledge Distillation Framework with Dual
Self-Distillation [58.813991312803246]
We propose a Teacher-Free Graph Self-Distillation (TGS) framework that does not require any teacher model or GNNs during both training and inference.
TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference.
arXiv Detail & Related papers (2024-03-06T05:52:13Z) - Mixture of Link Predictors [40.32089688353189]
Link prediction aims to forecast unseen connections in graphs.
Heuristic methods, leveraging a range of different pairwise measures, often rival the performance of vanilla Graph Neural Networks (GNNs)
arXiv Detail & Related papers (2024-02-13T16:36:50Z) - Pure Message Passing Can Estimate Common Neighbor for Link Prediction [25.044734252779975]
We study the proficiency of MPNNs in approximating Common Neighbor (CN)
We introduce the Message Passing Link Predictor (MPLP), a novel link prediction model.
arXiv Detail & Related papers (2023-09-02T16:20:41Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by
Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic capability, by introducing an intermediate model class dubbed as P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - Teaching Yourself: Graph Self-Distillation on Neighborhood for Node
Classification [42.840122801915996]
We propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs.
GSDN infers 75X faster than existing GNNs and 16X-25X faster than other inference acceleration methods.
arXiv Detail & Related papers (2022-10-05T08:35:34Z) - MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP
Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time consuming.
We propose an embarrassingly simple, yet hugely effective method for GNN training acceleration, called MLPInit.
arXiv Detail & Related papers (2022-09-30T21:33:51Z) - Graph-less Neural Networks: Teaching Old MLPs New Tricks via
Distillation [34.676755383361005]
Graph-less Neural Networks (GLNNs) have no inference graph dependency.
We show that GLNNs with competitive performance infer 146X-273X faster than GNNs and 14X-27X faster than other acceleration methods.
A comprehensive analysis of GLNN shows when and why GLNN can achieve competitive results to GNNs and suggests GLNN as a handy choice for latency-constrained applications.
arXiv Detail & Related papers (2021-10-17T05:16:58Z) - Optimization of Graph Neural Networks: Implicit Acceleration by Skip
Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on skip connections and increased depth.
Our results provide first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs)
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)