Extracting Low-/High-Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework
- URL: http://arxiv.org/abs/2305.10758v2
- Date: Sun, 4 Jun 2023 14:48:07 GMT
- Title: Extracting Low-/High-Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework
- Authors: Lirong Wu, Haitao Lin, Yufei Huang, Tianyu Fan, and Stan Z. Li
- Abstract summary: We propose an efficient Full-Frequency GNN-to-MLP (FF-G2M) distillation framework.
We factorize the knowledge learned by GNNs into low- and high-frequency components in the spectral domain.
We identify a potential information drowning problem for existing GNN-to-MLP distillation.
- Score: 36.160251860788314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed the great success of Graph Neural Networks (GNNs)
in handling graph-related tasks. However, MLPs remain the primary workhorse for
practical industrial applications due to their desirable inference efficiency
and scalability. To narrow the gap between the two, one can directly distill knowledge
from a well-designed teacher GNN to a student MLP, which is termed GNN-to-MLP
distillation. However, the process of distillation usually entails a loss of
information, and "which knowledge patterns of GNNs are more likely to be left
and distilled into MLPs?" becomes an important question. In this paper, we
first factorize the knowledge learned by GNNs into low- and high-frequency
components in the spectral domain and then derive their correspondence in the
spatial domain. Furthermore, we identify a potential information drowning
problem for existing GNN-to-MLP distillation, i.e., the high-frequency
knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency
knowledge during distillation; we describe in detail what it represents,
how it arises, what impact it has, and how to deal with it. To address this problem, we
propose an efficient Full-Frequency GNN-to-MLP (FF-G2M) distillation framework,
which extracts both low-frequency and high-frequency knowledge from GNNs and
injects it into MLPs. Extensive experiments show that FF-G2M improves over the
vanilla MLPs by 12.6% and outperforms its corresponding teacher GNNs by 2.6%
averaged over six graph datasets and three common GNN architectures.
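In spatial terms, the low-frequency component of the GNN's knowledge roughly corresponds to what a node shares with its neighborhood (aggregated neighbor representations), while the high-frequency component corresponds to how a node differs from its neighbors. The snippet below is a minimal PyTorch-style sketch of distilling both components into an MLP student under that reading; the mean aggregation, the KL/MSE loss choices, and the weight lam are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn.functional as F

    def low_frequency_loss(t_logits, s_logits, edge_index, tau=1.0):
        # Match the student's prediction to the teacher's neighborhood-averaged
        # (low-pass) prediction for each node.
        src, dst = edge_index                       # both of shape [E]
        n = t_logits.size(0)
        deg = torch.zeros(n, device=t_logits.device).index_add_(
            0, dst, torch.ones(dst.numel(), device=t_logits.device))
        agg = torch.zeros_like(t_logits).index_add_(0, dst, t_logits[src])
        agg = agg / deg.clamp(min=1).unsqueeze(-1)  # mean over in-neighbors
        return F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                        F.softmax(agg / tau, dim=-1), reduction="batchmean")

    def high_frequency_loss(t_logits, s_logits, edge_index):
        # Match the student's node-vs-neighbor differences (high-pass)
        # to the teacher's, so local contrast is preserved.
        src, dst = edge_index
        return F.mse_loss(s_logits[dst] - s_logits[src],
                          t_logits[dst] - t_logits[src])

    def full_frequency_loss(t_logits, s_logits, labels, edge_index, lam=0.5):
        # Supervised loss plus both frequency bands of distilled knowledge.
        ce = F.cross_entropy(s_logits, labels)
        return (ce
                + low_frequency_loss(t_logits, s_logits, edge_index)
                + lam * high_frequency_loss(t_logits, s_logits, edge_index))

On this reading, plain logit matching emphasizes the low-frequency term, which is one way to see how the pre-trained GNN's high-frequency knowledge can end up drowned out during distillation.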
Related papers
- Teach Harder, Learn Poorer: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation [56.912354708167534]
To bridge Graph Neural Networks (GNNs) and lightweight Multi-Layer Perceptrons (MLPs),
GNN-to-MLP Knowledge Distillation (KD) proposes to distill knowledge from a well-trained teacher GNN into a student MLP.
This paper proposes a simple yet effective Hardness-aware GNN-to-MLP Distillation (HGMD) framework.
arXiv Detail & Related papers (2024-07-20T06:13:00Z)
- A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation [58.813991312803246]
We propose a Teacher-Free Graph Self-Distillation (TGS) framework that does not require any teacher model or GNNs during both training and inference.
TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference.
arXiv Detail & Related papers (2024-03-06T05:52:13Z)
- VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs [97.63412451659826]
VQGraph learns a structure-aware tokenizer on graph data that can encode each node's local substructure as a discrete code.
VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings.
arXiv Detail & Related papers (2023-08-04T02:58:08Z)
- Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs [42.38007308086495]
To bridge the gap between topology-aware Graph Neural Networks (GNNs) and inference-efficient Multi-Layer Perceptrons (MLPs), GLNN proposes to distill knowledge from a well-trained teacher GNN into a student MLP.
We first quantify the knowledge reliability in GNNs by measuring the invariance of their information entropy to noise perturbations.
We propose an effective approach, namely Knowledge-inspired Reliable Distillation (KRD), that models the probability of each node being an informative and reliable knowledge point.
arXiv Detail & Related papers (2023-06-09T02:23:37Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- Teaching Yourself: Graph Self-Distillation on Neighborhood for Node Classification [42.840122801915996]
We propose a Graph Self-Distillation on Neighborhood (GSDN) framework to reduce the gap between GNNs and MLPs.
GSDN infers 75X faster than existing GNNs and 16X-25X faster than other inference acceleration methods.
arXiv Detail & Related papers (2022-10-05T08:35:34Z)
- Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation [34.676755383361005]
Graph-less Neural Networks (GLNNs) have no inference graph dependency.
We show that GLNNs with competitive performance infer faster than GNNs by 146X-273X and faster than other acceleration methods by 14X-27X.
A comprehensive analysis of GLNN shows when and why GLNN can achieve competitive results to GNNs and suggests GLNN as a handy choice for latency-constrained applications.
arXiv Detail & Related papers (2021-10-17T05:16:58Z)
- On Self-Distilling Graph Neural Network [64.00508355508106]
We propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD).
The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an efficient way.
We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies.
arXiv Detail & Related papers (2020-11-04T12:29:33Z)
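As a rough illustration of the kind of quantity a neighborhood discrepancy rate measures, the sketch below scores each node by how far its embedding sits from the average of its neighbors' embeddings; the cosine-distance formulation and the names used here are assumptions for illustration, not the exact NDR definition from the GNN-SD paper.

    import torch
    import torch.nn.functional as F

    def neighborhood_discrepancy(h, edge_index):
        # h: [N, d] node embeddings; edge_index: [2, E] directed edges (src -> dst).
        # Returns a per-node non-smoothness score in [0, 2]: 0 means the node's
        # embedding points in the same direction as the mean of its neighbors,
        # larger values mean sharper local contrast.
        src, dst = edge_index
        n = h.size(0)
        deg = torch.zeros(n, device=h.device).index_add_(
            0, dst, torch.ones(dst.numel(), device=h.device))
        neigh_mean = torch.zeros_like(h).index_add_(0, dst, h[src])
        neigh_mean = neigh_mean / deg.clamp(min=1).unsqueeze(-1)
        return 1.0 - F.cosine_similarity(h, neigh_mean, dim=-1)

A self-distillation objective in this spirit would then encourage consistency between a node's own prediction and its neighborhood, without requiring any separate teacher model.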
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.