SimMLP: Training MLPs on Graphs without Supervision
- URL: http://arxiv.org/abs/2402.08918v2
- Date: Mon, 28 Oct 2024 04:34:42 GMT
- Title: SimMLP: Training MLPs on Graphs without Supervision
- Authors: Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye
- Abstract summary: We introduce SimMLP, a Self-supervised framework for learning MLPs on graphs.
SimMLP is the first MLP-learning method that can achieve equivalence to GNNs in the optimal case.
We provide a comprehensive theoretical analysis, demonstrating the equivalence between SimMLP and GNNs based on mutual information and inductive bias.
- Score: 38.63554842214315
- Abstract: Graph Neural Networks (GNNs) have demonstrated their effectiveness in various graph learning tasks, yet their reliance on neighborhood aggregation during inference poses challenges for deployment in latency-sensitive applications, such as real-time financial fraud detection. To address this limitation, recent studies have proposed distilling knowledge from teacher GNNs into student Multi-Layer Perceptrons (MLPs) trained on node content, aiming to accelerate inference. However, these approaches often inadequately explore structural information when inferring unseen nodes. To this end, we introduce SimMLP, a Self-supervised framework for learning MLPs on graphs, designed to fully integrate rich structural information into MLPs. Notably, SimMLP is the first MLP-learning method that can achieve equivalence to GNNs in the optimal case. The key idea is to employ self-supervised learning to align the representations encoded by graph context-aware GNNs and neighborhood dependency-free MLPs, thereby fully integrating the structural information into MLPs. We provide a comprehensive theoretical analysis, demonstrating the equivalence between SimMLP and GNNs based on mutual information and inductive bias, highlighting SimMLP's advanced structural learning capabilities. Additionally, we conduct extensive experiments on 20 benchmark datasets, covering node classification, link prediction, and graph classification, to showcase SimMLP's superiority over state-of-the-art baselines, particularly in scenarios involving unseen nodes (e.g., inductive and cold-start node classification) where structural insights are crucial. Our codes are available at: https://github.com/Zehong-Wang/SimMLP.
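To make the alignment idea concrete, here is a minimal, illustrative sketch, not the paper's implementation: a graph context-aware GCN-style encoder and a neighborhood dependency-free MLP encoder are trained to agree on node representations, after which the MLP alone serves inference. The two-layer encoders, the cosine alignment loss, and the toy random graph are assumptions made for the sketch; SimMLP's actual objective and its safeguards against trivial solutions differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNEncoder(nn.Module):
    """Graph context-aware encoder: propagates over a normalized adjacency."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.lin1(x))  # aggregate neighbors, then transform
        return a_hat @ self.lin2(h)

class MLPEncoder(nn.Module):
    """Neighborhood dependency-free encoder: sees node features only."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, out_dim))

    def forward(self, x):
        return self.net(x)

def alignment_loss(z_gnn, z_mlp):
    # Negative cosine similarity between the GNN and MLP views of each node.
    # (Illustrative only: a real objective must also rule out collapsed
    # solutions, which this bare loss does not.)
    return -F.cosine_similarity(z_gnn, z_mlp, dim=-1).mean()

# Toy data: 100 nodes, 16 features, random symmetric graph with self-loops.
n, d = 100, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
a_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

gnn, mlp = GCNEncoder(d, 64, 32), MLPEncoder(d, 64, 32)
opt = torch.optim.Adam(list(gnn.parameters()) + list(mlp.parameters()), lr=1e-3)
for _ in range(100):
    loss = alignment_loss(gnn(x, a_hat), mlp(x))
    opt.zero_grad(); loss.backward(); opt.step()

# At deployment, only the MLP is needed: no neighbor fetching or aggregation,
# which is what makes this attractive for latency-sensitive inference.
z = mlp(x)
```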
Related papers
- AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation [15.505402580010104]
A new wave of methods, collectively known as GNN-to-MLP Knowledge Distillation, has emerged.
They aim to transfer GNN-learned knowledge to a more efficient student MLP.
These methods face challenges in situations with insufficient training data and incomplete test data.
We propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework.
arXiv Detail & Related papers (2024-05-23T08:28:44Z) - VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs [97.63412451659826]
VQGraph learns a structure-aware tokenizer on graph data that can encode each node's local substructure as a discrete code.
VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings.
arXiv Detail & Related papers (2023-08-04T02:58:08Z) - Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training; a minimal sketch of this train-as-MLP, infer-as-GNN idea appears after the list below.
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP [46.52398427166938]
One promising inference acceleration direction is to distill the GNNs into message-passing-free student multi-layer perceptrons.
We introduce a novel structure-mixing knowledge distillation strategy to enhance the students' ability to learn structural information.
Our SA-MLP can consistently outperform the teacher GNNs while maintaining faster inference speed.
arXiv Detail & Related papers (2022-10-18T05:55:36Z) - MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming.
We propose an embarrassingly simple, yet hugely effective method for GNN training acceleration, called MLPInit; a sketch of the underlying weight-copying idea also appears after the list below.
arXiv Detail & Related papers (2022-09-30T21:33:51Z) - NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs [41.85649409565574]
Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-Euclidean structural data, but their dependence on neighborhood aggregation limits how well they scale at inference time.
Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features.
In this paper, we propose to learn NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome these challenges.
arXiv Detail & Related papers (2022-08-22T01:47:07Z) - Graph-MLP: Node Classification without Message Passing in Graph [28.604893350871777]
Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message passing modules; in this paper, however, we show that none of these message passing modules is necessary.
We propose a pure multilayer-perceptron-based framework, Graph-MLP, whose supervision signal leverages the graph structure.
arXiv Detail & Related papers (2021-06-08T02:07:21Z)
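The PMLP idea referenced in the list above, as a minimal illustrative sketch (the dimensions, the GCN-style propagation, and the placeholder adjacency are assumptions, not the paper's exact setup): one set of weights is trained in plain-MLP mode and evaluated with message passing switched on.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PMLPSketch(nn.Module):
    """One set of weights, two forward modes: plain MLP vs. message passing."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat=None):
        h = self.lin1(x)
        if a_hat is not None:          # propagate only when a graph is given
            h = a_hat @ h
        h = self.lin2(F.relu(h))
        if a_hat is not None:
            h = a_hat @ h
        return h

model = PMLPSketch(16, 64, 7)
x = torch.randn(100, 16)
logits_train = model(x)                # training: plain MLP, no graph needed
a_hat = torch.eye(100)                 # placeholder for D^-1/2 (A+I) D^-1/2
logits_test = model(x, a_hat)          # inference: GNN-style propagation
```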
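And a minimal sketch of the MLPInit weight-copying idea (illustrative sizes; the shape-matched MLP is often called a PeerMLP, and both training loops are omitted here): because an MLP and a GCN can share weight shapes, weights trained cheaply in MLP mode can initialize the GNN before fine-tuning.

```python
import torch
import torch.nn as nn

in_dim, hid_dim, out_dim = 16, 64, 7   # illustrative sizes

# An MLP whose weight matrices match the GCN's layer shapes.
mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                    nn.Linear(hid_dim, out_dim))

class GCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        return a_hat @ self.lin2(torch.relu(a_hat @ self.lin1(x)))

# 1) Train `mlp` cheaply on node features alone (fast: no neighbor fetching).
# 2) Copy its trained weights into the GCN as a warm start, then fine-tune
#    the GCN with message passing.
gcn = GCN()
gcn.lin1.load_state_dict(mlp[0].state_dict())
gcn.lin2.load_state_dict(mlp[2].state_dict())
```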