Graph Inference Acceleration by Learning MLPs on Graphs without
Supervision
- URL: http://arxiv.org/abs/2402.08918v1
- Date: Wed, 14 Feb 2024 03:16:13 GMT
- Title: Graph Inference Acceleration by Learning MLPs on Graphs without
Supervision
- Authors: Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye
- Abstract summary: We present SimMLP, an effective framework for learning MLPs on graphs without supervision.
SimMLP outperforms state-of-the-art baselines, especially in settings with unseen nodes.
- Score: 42.20656109231714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph
learning tasks, yet their reliance on message passing constrains their
deployment in latency-sensitive applications such as financial fraud detection.
Recent works have explored distilling knowledge from GNNs to Multi-Layer
Perceptrons (MLPs) to accelerate inference. However, this task-specific
supervised distillation limits generalization to unseen nodes, which are
prevalent in latency-sensitive applications. To this end, we present
SimMLP, a Simple yet effective framework for learning MLPs on graphs
without supervision, to enhance generalization. SimMLP employs
self-supervised alignment between GNNs and MLPs to capture the fine-grained
and generalizable correlation between node features and graph structures,
and proposes two strategies to alleviate the risk of trivial solutions.
Theoretically, we comprehensively analyze SimMLP to demonstrate its
equivalence to GNNs in the optimal case and its generalization capability.
Empirically, SimMLP outperforms state-of-the-art baselines, especially in
settings with unseen nodes. In particular, it obtains significant performance
gains (7-26%) over MLPs and inference acceleration over GNNs (90-126x) on
large-scale graph datasets. Our code is available at:
https://github.com/Zehong-Wang/SimMLP
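The core idea, aligning an MLP's embedding of raw node features with a GNN's embedding of the same nodes so that only the graph-free MLP is needed at inference time, can be sketched as follows. This is a toy NumPy illustration, not the authors' implementation: the one-layer mean-aggregation "GNN" and the cosine alignment loss are simplifying assumptions.

```python
import numpy as np

def gnn_embed(X, A, W):
    """One message-passing layer: mean-aggregate neighbor features, then project."""
    deg = A.sum(axis=1, keepdims=True)           # node degrees
    H = (A @ X) / np.maximum(deg, 1)             # mean aggregation over neighbors
    return np.tanh(H @ W)

def mlp_embed(X, W):
    """Structure-free encoder: the MLP sees only node features, never the graph."""
    return np.tanh(X @ W)

def alignment_loss(Z_gnn, Z_mlp):
    """Self-supervised objective: maximize cosine similarity between the
    GNN and MLP embeddings of each node (no labels involved)."""
    Zg = Z_gnn / np.linalg.norm(Z_gnn, axis=1, keepdims=True)
    Zm = Z_mlp / np.linalg.norm(Z_mlp, axis=1, keepdims=True)
    return float(1.0 - (Zg * Zm).sum(axis=1).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                      # 5 nodes, 8 features
A = (rng.random((5, 5)) < 0.4).astype(float)     # random adjacency
A = np.maximum(A, A.T); np.fill_diagonal(A, 1)   # symmetric, with self-loops
W = rng.normal(size=(8, 4))

loss = alignment_loss(gnn_embed(X, A, W), mlp_embed(X, W))
```

After training on this objective, deployment calls only `mlp_embed`, so no neighbor fetching is needed; that is the source of the inference speedup over message passing.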
Related papers
- VQGraph: Rethinking Graph Representation Space for Bridging GNNs and
MLPs [97.63412451659826]
VQGraph learns a structure-aware tokenizer on graph data that can encode each node's local substructure as a discrete code.
VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings.
arXiv Detail & Related papers (2023-08-04T02:58:08Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization [51.76758674012744]
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming.
We propose an embarrassingly simple, yet hugely effective method for GNN training acceleration, called MLPInit.
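The trick exploits the fact that a GCN layer H = A_hat X W has the same weight shape as an MLP layer H = X W, so weights from cheap, graph-free MLP training can initialize the GNN directly. A toy NumPy sketch under assumed shapes, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_feats, n_hidden = 5, 8, 4
X = rng.normal(size=(n_nodes, n_feats))          # node features
A = np.ones((n_nodes, n_nodes))                  # toy dense adjacency
W_mlp = rng.normal(size=(n_feats, n_hidden))     # weights after cheap, graph-free MLP training

# A GCN layer  H = A_hat @ X @ W  and an MLP layer  H = X @ W  share the
# weight shape (n_feats, n_hidden), so the MLP weights transfer directly:
W_gnn = W_mlp.copy()

deg = A.sum(axis=1, keepdims=True)
A_hat = A / deg                                   # row-normalized adjacency
H = np.tanh(A_hat @ X @ W_gnn)                    # first GNN forward pass from the MLP init
```

GNN training then resumes from this initialization instead of from random weights, which is where the claimed acceleration comes from.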
arXiv Detail & Related papers (2022-09-30T21:33:51Z)
- From Local to Global: Spectral-Inspired Graph Neural Networks [28.858773653743075]
Graph Neural Networks (GNNs) are powerful deep learning methods for Non-Euclidean data.
MPNNs are message-passing algorithms that aggregate and combine signals in a local graph neighborhood.
MPNNs can suffer from issues like over-smoothing or over-squashing.
arXiv Detail & Related papers (2022-09-24T17:19:00Z)
- NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs [41.85649409565574]
Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-Euclidean structural data.
Existing methods attempt to address the scalability issue of GNN inference by training multi-layer perceptrons (MLPs) exclusively on node content features.
In this paper, we propose to learn NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome the challenges.
arXiv Detail & Related papers (2022-08-22T01:47:07Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph-MLP: Node Classification without Message Passing in Graph [28.604893350871777]
Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message-passing modules; in this paper, however, we show that none of the message-passing modules is necessary.
We propose Graph-MLP, a pure multilayer-perceptron-based framework whose supervision signal leverages the graph structure.
arXiv Detail & Related papers (2021-06-08T02:07:21Z)
- On Graph Neural Networks versus Graph-Augmented MLPs [51.23890789522705]
Graph-Augmented Multi-Layer Perceptrons (GA-MLPs) first augment node features with certain multi-hop operators on the graph.
We prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth.
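A GA-MLP-style pipeline can be sketched as: precompute powers of a normalized adjacency applied to the features, concatenate the results, and feed them to a plain MLP. This is a minimal NumPy sketch; the random-walk-normalized operator and plain concatenation are one common instantiation, not the paper's exact construction.

```python
import numpy as np

def ga_mlp_features(X, A, K=2):
    """Augment node features with multi-hop operators: [X, A_hat X, A_hat^2 X, ...].
    A_hat is the row-normalized (random-walk) adjacency."""
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.maximum(deg, 1)
    feats, H = [X], X
    for _ in range(K):
        H = A_hat @ H                        # one more hop of propagation
        feats.append(H)
    return np.concatenate(feats, axis=1)     # shape: (n, (K+1) * d)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
A = (rng.random((6, 6)) < 0.5).astype(float)
A = np.maximum(A, A.T)
Z = ga_mlp_features(X, A, K=2)   # Z can now be fed to any off-the-shelf MLP
```

Because the multi-hop features are fixed after this precomputation, the downstream model needs no message passing, which is exactly the design whose expressive power the paper compares against GNNs.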
arXiv Detail & Related papers (2020-10-28T17:59:59Z)
- SAIL: Self-Augmented Graph Contrastive Learning [40.76236706250037]
This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised scenario.
We derive a theoretical analysis and provide an empirical demonstration of the non-steady performance of GNNs over different graph datasets.
arXiv Detail & Related papers (2020-09-02T10:27:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.