Graph Inference Acceleration by Learning MLPs on Graphs without Supervision
- URL: http://arxiv.org/abs/2402.08918v1
- Date: Wed, 14 Feb 2024 03:16:13 GMT
- Title: Graph Inference Acceleration by Learning MLPs on Graphs without Supervision
- Authors: Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye
- Abstract summary: We present SimMLP, an effective framework for learning MLPs on graphs without supervision.
SimMLP outperforms state-of-the-art baselines, especially in settings with unseen nodes.
- Score: 42.20656109231714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph
learning tasks, yet their reliance on message passing constrains their
deployment in latency-sensitive applications such as financial fraud detection.
Recent works have explored distilling knowledge from GNNs to Multi-Layer
Perceptrons (MLPs) to accelerate inference. However, this task-specific
supervised distillation limits generalization to unseen nodes, which are
prevalent in latency-sensitive applications. To this end, we present SimMLP,
a Simple yet effective framework for learning MLPs on graphs without
supervision, to enhance generalization. SimMLP employs self-supervised
alignment between GNNs and MLPs to capture the fine-grained and generalizable
correlation between node features and graph structures, and proposes two
strategies to alleviate the risk of trivial solutions. Theoretically, we
comprehensively analyze SimMLP to demonstrate its equivalence to GNNs in the
optimal case and its generalization capability. Empirically, SimMLP
outperforms state-of-the-art baselines, especially in settings with unseen
nodes. In particular, it obtains significant performance gains (7-26%) over
MLPs and inference acceleration over GNNs (90-126x) on large-scale graph
datasets. Our codes are available at: https://github.com/Zehong-Wang/SimMLP.
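The core mechanism admits a compact illustration. Below is a minimal PyTorch sketch of self-supervised alignment between a GNN encoder and an MLP encoder over the same nodes; the module names, the one-layer mean-aggregation GNN, and the cosine loss with a stop-gradient are illustrative assumptions, not the paper's exact objective or its two anti-collapse strategies.

```python
# Hedged sketch of self-supervised GNN-MLP alignment (the real SimMLP
# objective and its trivial-solution safeguards differ in detail).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class GNNEncoder(nn.Module):
    """One-layer mean-aggregation GNN (a stand-in for the paper's encoder)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return F.relu(self.lin(adj @ x / deg))

def alignment_loss(z_mlp, z_gnn):
    # Cosine alignment with stop-gradient on the GNN branch: one common
    # anti-collapse trick (an assumption; the paper proposes its own two).
    return 1 - F.cosine_similarity(z_mlp, z_gnn.detach(), dim=-1).mean()

# Toy usage: 5 nodes, 8 features, a random symmetric adjacency.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()
mlp, gnn = MLPEncoder(8, 16), GNNEncoder(8, 16)
alignment_loss(mlp(x), gnn(x, adj)).backward()
# At inference only `mlp` runs, so latency no longer depends on the graph --
# the source of the reported speedup over GNN inference.
```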
Related papers
- Training MLPs on Graphs without Supervision [38.63554842214315]
We introduce SimMLP, a self-supervised framework for learning MLPs on graphs.
SimMLP is the first MLP-learning method that can achieve equivalence to GNNs in the optimal case.
We provide a comprehensive theoretical analysis, demonstrating the equivalence between SimMLP and GNNs based on mutual information and inductive bias.
arXiv Detail & Related papers (2024-12-05T04:20:54Z)
- AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation [15.505402580010104]
A new wave of methods, collectively known as GNN-to-MLP Knowledge Distillation, has emerged.
They aim to transfer GNN-learned knowledge to a more efficient student MLP.
These methods face challenges in situations with insufficient training data and incomplete test data.
We propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework.
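As a rough picture of what "AdaBoosting" a distillation pipeline can mean, here is a hedged PyTorch sketch in which successive student MLPs fit the teacher's soft labels and per-node weights are boosted where earlier students matched the teacher poorly; every name and the exponential reweighting rule are assumptions rather than AdaGMLP's published procedure.

```python
# Hedged sketch of AdaBoost-style GNN-to-MLP distillation: each student MLP
# fits the teacher's soft labels under per-node weights that grow on nodes
# that earlier students distilled poorly.
import torch
import torch.nn.functional as F

def boosted_distill(students, teacher_logits, x, epochs=50):
    w = torch.full((x.size(0),), 1.0 / x.size(0))  # per-node weights
    for student in students:
        opt = torch.optim.Adam(student.parameters(), lr=1e-2)
        for _ in range(epochs):
            kl = F.kl_div(F.log_softmax(student(x), dim=-1),
                          F.softmax(teacher_logits, dim=-1),
                          reduction="none").sum(dim=-1)  # per-node KL
            loss = (w * kl).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():  # boost weight on poorly distilled nodes
            err = kl.detach()
            w = w * torch.exp(err / (err.mean() + 1e-8))
            w = w / w.sum()
    return students

# Usage: three small student MLPs mimic a (frozen) teacher's logits; a final
# prediction could average the students' softmax outputs.
x, teacher_logits = torch.randn(20, 8), torch.randn(20, 3)
boosted_distill([torch.nn.Linear(8, 3) for _ in range(3)], teacher_logits, x)
```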
arXiv Detail & Related papers (2024-05-23T08:28:44Z)
- VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs [97.63412451659826]
VQGraph learns a structure-aware tokenizer on graph data that can encode each node's local substructure as a discrete code.
VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings.
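A minimal sketch of the tokenizer idea, assuming a VQ-VAE-style codebook with a straight-through gradient estimator; VQGraph's actual tokenizer is trained with graph-specific reconstruction objectives not shown here.

```python
# Hedged sketch of a VQ-style node tokenizer: each node embedding is snapped
# to its nearest codebook entry, yielding a discrete "structure token" that
# a student MLP could be taught to predict.
import torch
import torch.nn as nn

class NodeTokenizer(nn.Module):
    def __init__(self, dim, num_codes):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        # Distance of every node embedding to every code; argmin = token id.
        d = torch.cdist(z, self.codebook.weight)      # (n_nodes, num_codes)
        ids = d.argmin(dim=-1)
        z_q = self.codebook(ids)
        # Straight-through estimator lets gradients flow past the argmin.
        z_q = z + (z_q - z).detach()
        return ids, z_q

tok = NodeTokenizer(dim=16, num_codes=32)
ids, z_q = tok(torch.randn(10, 16))   # ids: one discrete code per node
```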
arXiv Detail & Related papers (2023-08-04T02:58:08Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
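In the PMLP paper the trick is that the same weights are trained as a plain MLP, with message passing inserted only at test time; below is a minimal sketch of that switch, where the two-layer architecture and mean aggregation are illustrative choices.

```python
# Hedged sketch of a PMLP-style model: trained as a plain MLP, evaluated as
# a GNN by switching on parameter-free neighbor averaging at inference.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def propagate(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return adj @ h / deg              # parameter-free mean aggregation

    def forward(self, x, adj=None):
        h = self.lin1(x)
        if adj is not None:               # message passing only when given
            h = self.propagate(h, adj)
        h = self.lin2(F.relu(h))
        if adj is not None:
            h = self.propagate(h, adj)
        return h

model = PMLP(8, 16, 3)
x, adj = torch.randn(6, 8), torch.eye(6)  # toy graph: self-loops only
logits_train = model(x)                   # training: ordinary MLP forward
logits_test = model(x, adj)               # inference: propagation enabled
```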
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP [46.52398427166938]
One promising direction for inference acceleration is to distill GNNs into message-passing-free student multi-layer perceptrons (MLPs).
We introduce a novel structure-mixing knowledge distillation strategy to enhance the students' ability to learn structure information.
Our SA-MLP can consistently outperform the teacher GNNs while maintaining faster inference speed.
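The "structure-mixing" phrase suggests a mixup-flavored distillation signal. Below is a hedged sketch in which node inputs and the teacher's soft labels are interpolated in tandem; the Beta-sampled coefficient and the KL objective are assumptions, and SA-MLP's actual student additionally consumes structure inputs not modeled here.

```python
# Hedged sketch of a mixup-style distillation step: pairs of node inputs and
# the teacher's soft labels are interpolated together, so the student must
# stay consistent across blended targets.
import torch
import torch.nn.functional as F

def structure_mix_step(student, x, teacher_logits, alpha=0.5):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))             # random partner per node
    x_mix = lam * x + (1 - lam) * x[perm]
    t_mix = lam * F.softmax(teacher_logits, -1) \
        + (1 - lam) * F.softmax(teacher_logits[perm], -1)
    return F.kl_div(F.log_softmax(student(x_mix), -1), t_mix,
                    reduction="batchmean")

student = torch.nn.Linear(8, 3)
loss = structure_mix_step(student, torch.randn(10, 8), torch.randn(10, 3))
loss.backward()
```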
arXiv Detail & Related papers (2022-10-18T05:55:36Z)
- MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming.
We propose an embarrassingly simple yet hugely effective method for GNN training acceleration, called MLPInit.
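A minimal sketch of the initialization trick: because a GNN layer's learnable weights have the same shapes as an MLP layer's, the MLP can be trained cheaply (the graph is never touched) and its converged weights copied into the GNN. The toy one-layer GCN below is illustrative, not the paper's exact setup.

```python
# Hedged sketch of MLP-based GNN initialization: train a peer MLP whose
# weight shapes match the GNN's, then copy the weights over and fine-tune.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)  # same shape as an MLP layer

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return self.lin(adj @ x / deg)

gnn_layer, peer_mlp = GCNLayer(8, 3), nn.Linear(8, 3)
# ... train `peer_mlp` on node features alone (fast: no neighbor fetching) ...
gnn_layer.lin.load_state_dict(peer_mlp.state_dict())  # MLP weights as init
out = gnn_layer(torch.randn(5, 8), torch.eye(5))  # GNN fine-tunes from here
```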
arXiv Detail & Related papers (2022-09-30T21:33:51Z)
- NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs [41.85649409565574]
Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-Euclidean structural data, yet fetching multi-hop neighbors makes their inference slow to deploy.
Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features.
In this paper, we propose to learn NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome these challenges.
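A hedged sketch of the two ingredients the name advertises, structure awareness and noise robustness: positional features are concatenated to content features, and the student must match the teacher even under input noise. The positional tensor, noise scale, and loss composition are assumptions, not NOSMOG's exact design.

```python
# Hedged sketch of a noise-robust, structure-aware distillation step for an
# MLP student (illustrative only; NOSMOG's losses are richer).
import torch
import torch.nn as nn
import torch.nn.functional as F

def nosmog_style_step(student, x, pos, teacher_logits, noise_std=0.1):
    # Structure awareness: append precomputed positional features (e.g.,
    # random-walk embeddings -- here just a given tensor `pos`).
    inp = torch.cat([x, pos], dim=-1)
    # Noise robustness: perturb the content features and require the same
    # agreement with the teacher's soft labels.
    noisy = torch.cat([x + noise_std * torch.randn_like(x), pos], dim=-1)
    targets = F.softmax(teacher_logits, dim=-1)
    loss = F.kl_div(F.log_softmax(student(inp), -1), targets,
                    reduction="batchmean")
    loss += F.kl_div(F.log_softmax(student(noisy), -1), targets,
                     reduction="batchmean")
    return loss

student = nn.Linear(8 + 4, 3)           # content dim 8 + positional dim 4
loss = nosmog_style_step(student, torch.randn(10, 8), torch.randn(10, 4),
                         torch.randn(10, 3))
loss.backward()
```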
arXiv Detail & Related papers (2022-08-22T01:47:07Z)
- Graph-MLP: Node Classification without Message Passing in Graph [28.604893350871777]
Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message-passing modules; in this paper, however, we show that none of these message-passing modules is necessary.
We propose a pure multilayer-perceptron-based framework, Graph-MLP, whose supervision signal leverages the graph structure.
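A minimal sketch of such a structure-derived supervision signal, in the spirit of Graph-MLP's neighborhood-contrastive loss: nodes within r hops count as positives, all other nodes as negatives, and the model itself never passes messages. The cosine similarity and masking details are illustrative.

```python
# Hedged sketch of a neighborhood-contrastive loss: pull each node's
# embedding toward its r-hop neighbors and away from everything else.
import torch
import torch.nn.functional as F

def neighborhood_contrast(z, adj_r, tau=1.0):
    """z: node embeddings; adj_r: r-hop adjacency (1 if within r hops)."""
    sim = torch.exp(
        F.cosine_similarity(z.unsqueeze(1), z.unsqueeze(0), dim=-1) / tau)
    sim = sim - torch.diag(torch.diag(sim))   # drop self-similarity
    pos = (sim * adj_r).sum(dim=1)            # mass on r-hop neighbors
    mask = pos > 0                            # skip isolated nodes
    return -torch.log(pos[mask] / sim.sum(dim=1)[mask]).mean()

z = torch.randn(6, 16, requires_grad=True)    # e.g., MLP(node_features)
adj_r = (torch.rand(6, 6) > 0.5).float()
adj_r = ((adj_r + adj_r.T) > 0).float().fill_diagonal_(0)
neighborhood_contrast(z, adj_r).backward()
```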
arXiv Detail & Related papers (2021-06-08T02:07:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.