Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation
from GNNs to MLPs
- URL: http://arxiv.org/abs/2303.13763v2
- Date: Mon, 27 Mar 2023 12:06:35 GMT
- Title: Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation
from GNNs to MLPs
- Authors: Taiqiang Wu, Zhe Zhao, Jiahao Wang, Xingyu Bai, Lei Wang, Ngai Wong,
Yujiu Yang
- Abstract summary: Distilling high-accuracy Graph Neural Networks (GNNs) to low-latency multilayer perceptrons (MLPs) on graph tasks has become a hot research topic.
We propose a Prototype-Guided Knowledge Distillation (PGKD) method, which does not require graph edges (edge-free) yet learns structure-aware MLPs.
- Score: 22.541655587228203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distilling high-accuracy Graph Neural Networks (GNNs) to low-latency
multilayer perceptrons (MLPs) on graph tasks has become a hot research topic.
However, MLPs rely exclusively on the node features and fail to capture the
graph structural information. Previous methods address this issue by processing
graph edges into extra inputs for MLPs, but such graph structures may be
unavailable for various scenarios. To this end, we propose a Prototype-Guided
Knowledge Distillation (PGKD) method, which does not require graph
edges (edge-free) yet learns structure-aware MLPs. Specifically, we analyze the
graph structural information in GNN teachers, and distill such information from
GNNs to MLPs via prototypes in an edge-free setting. Experimental results on
popular graph benchmarks demonstrate the effectiveness and robustness of the
proposed PGKD.
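The abstract describes the mechanism only at a high level. As a rough illustration, a prototype-guided distillation step might look like the sketch below. This is a hypothetical reading, not the paper's exact losses: the mean-embedding prototypes, the cosine similarities, the KL alignment, and the temperature `tau` are all our assumptions.

```python
import torch
import torch.nn.functional as F

def class_prototypes(emb, labels, num_classes):
    # One prototype per class: the mean embedding of its labeled nodes.
    # (Assumes every class appears in the batch.)
    return torch.stack([emb[labels == c].mean(dim=0) for c in range(num_classes)])

def prototype_distillation_loss(student_emb, teacher_emb, labels, num_classes, tau=1.0):
    # Node-to-prototype similarity distributions for teacher and student.
    t_protos = class_prototypes(teacher_emb, labels, num_classes)
    s_protos = class_prototypes(student_emb, labels, num_classes)
    t_sim = F.cosine_similarity(teacher_emb.unsqueeze(1), t_protos.unsqueeze(0), dim=-1)
    s_sim = F.cosine_similarity(student_emb.unsqueeze(1), s_protos.unsqueeze(0), dim=-1)
    # Align the student's distribution with the teacher's, so structural
    # information encoded by the GNN transfers through prototypes alone,
    # with no graph edges needed at distillation or inference time.
    t_dist = F.softmax(t_sim / tau, dim=-1)
    s_log_dist = F.log_softmax(s_sim / tau, dim=-1)
    return F.kl_div(s_log_dist, t_dist, reduction="batchmean")
```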
Related papers
- SimMLP: Training MLPs on Graphs without Supervision [38.63554842214315]
We introduce SimMLP, a self-supervised framework for learning MLPs on graphs.
SimMLP is the first MLP-learning method that can achieve equivalence to GNNs in the optimal case.
We provide a comprehensive theoretical analysis, demonstrating the equivalence between SimMLP and GNNs based on mutual information and inductive bias.
arXiv Detail & Related papers (2024-02-14T03:16:13Z) - VQGraph: Rethinking Graph Representation Space for Bridging GNNs and
MLPs [97.63412451659826]
VQGraph learns a structure-aware tokenizer on graph data that can encode each node's local substructure as a discrete code.
VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings.
arXiv Detail & Related papers (2023-08-04T02:58:08Z) - SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP [46.52398427166938]
One promising inference acceleration direction is to distill the GNNs into message-passing-free student multi-layer perceptrons.
We introduce a novel structure-mixing knowledge distillation strategy to enhance the learning ability of student MLPs for structure information.
Our SA-MLP can consistently outperform the teacher GNNs, while maintaining faster inference.
arXiv Detail & Related papers (2022-10-18T05:55:36Z) - NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs [41.85649409565574]
Graph Neural Networks (GNNs) have demonstrated their efficacy in dealing with non-Euclidean structural data.
Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features.
In this paper, we propose to learn NOise-robust Structure-aware MLPs On Graphs (NOSMOG) to overcome the challenges.
arXiv Detail & Related papers (2022-08-22T01:47:07Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT)
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z) - Graph-MLP: Node Classification without Message Passing in Graph [28.604893350871777]
Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message passing modules; however, in this paper, we show that none of the message passing modules is necessary.
We propose a pure multilayer-perceptron-based framework, Graph-MLP, whose supervision signal leverages graph structure (a rough sketch of such a loss appears after this list).
arXiv Detail & Related papers (2021-06-08T02:07:21Z) - On Graph Neural Networks versus Graph-Augmented MLPs [51.23890789522705]
Graph-Augmented Multi-Layer Perceptrons (GA-MLPs) first augment node features with certain multi-hop operators on the graph (a sketch of this recipe also appears after this list).
We prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth.
arXiv Detail & Related papers (2020-10-28T17:59:59Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
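For the Graph-MLP entry above, the structure-leveraging supervision signal can be pictured as a neighborhood-contrastive loss over MLP embeddings. The sketch below is our hypothetical reading of that idea, not the paper's exact formulation; the r-hop mask `adj_r`, the temperature `tau`, and the handling of isolated nodes are assumptions.

```python
import torch
import torch.nn.functional as F

def neighbor_contrastive_loss(z, adj_r, tau=1.0):
    # z:     [N, d] node embeddings produced by a plain MLP
    # adj_r: [N, N] mask with 1 where two nodes are within r hops
    # Nodes near each other on the graph are pulled together in
    # embedding space, all other pairs pushed apart, so structure
    # supervises training without message passing at inference.
    z = F.normalize(z, dim=-1)
    sim = torch.exp(z @ z.t() / tau)           # pairwise similarities
    sim = sim - torch.diag(torch.diag(sim))    # exclude self-pairs
    pos = (sim * adj_r).sum(dim=-1)            # r-hop neighbors only
    denom = sim.sum(dim=-1)
    mask = pos > 0                             # skip isolated nodes
    return -torch.log(pos[mask] / denom[mask]).mean()
```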
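Likewise, the GA-MLP entry's "multi-hop operators" admit a compact illustration: propagate features over the graph a few times and concatenate the results, so a plain MLP sees multi-hop structural context directly in its input. The GCN-style normalization below is one choice from the family of operators GA-MLPs cover, and `ga_mlp_features` is our own name for the helper.

```python
import torch

def ga_mlp_features(adj, x, num_hops=2):
    # adj: [N, N] dense float adjacency matrix, x: [N, d] node features.
    # Returns [X, A_hat X, A_hat^2 X, ...] concatenated along features.
    a = adj + torch.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = a.sum(dim=1).rsqrt()          # D^{-1/2}
    a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
    feats, h = [x], x
    for _ in range(num_hops):
        h = a_hat @ h                          # one more hop of context
        feats.append(h)
    return torch.cat(feats, dim=1)             # [N, (num_hops + 1) * d]
```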
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.