Graph-Free Knowledge Distillation for Graph Neural Networks
- URL: http://arxiv.org/abs/2105.07519v1
- Date: Sun, 16 May 2021 21:38:24 GMT
- Title: Graph-Free Knowledge Distillation for Graph Neural Networks
- Authors: Xiang Deng and Zhongfei Zhang
- Abstract summary: We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution.
We provide strategies for handling different types of prior knowledge in the graph data or the GNNs.
- Score: 30.38128029453977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge distillation (KD) transfers knowledge from a teacher network to a
student by training the student to mimic the outputs of the pretrained teacher
on training data. However, data samples are often inaccessible due to large
data sizes, privacy, or confidentiality. Many efforts have been made to address
this problem for convolutional neural networks (CNNs), whose inputs lie in a
grid domain within a continuous space (such as images and videos), but these
efforts largely overlook graph neural networks (GNNs), which handle non-grid
data with different topology structures within a discrete space. The inherent
differences between their inputs make these CNN-based approaches not applicable
to GNNs. In this paper, we propose, to the best of our knowledge, the first
dedicated approach to distilling knowledge from a GNN without graph data. The
proposed graph-free KD (GFKD) learns graph topology structures for knowledge
transfer by modeling them with a multinomial distribution. We then introduce a gradient
estimator to optimize this framework. Essentially, the gradients w.r.t. graph
structures are obtained by only using GNN forward-propagation without
back-propagation, which means that GFKD is compatible with modern GNN libraries
such as DGL and PyTorch Geometric. Moreover, we provide strategies for handling
different types of prior knowledge in the graph data or the GNNs. Extensive
experiments demonstrate that GFKD achieves state-of-the-art performance for
distilling knowledge from GNNs without training data.
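The abstract outlines the method at a high level; the structure-learning step can be illustrated with a toy sketch. The Python snippet below is not the authors' implementation: it simplifies GFKD's multinomial topology model to independent Bernoulli edges, stands in a one-layer toy teacher for a pretrained GNN, and uses a generic score-function (REINFORCE-style) estimator, which, like the estimator in the paper, needs only forward passes of the teacher. All names (`teacher_forward`, `structure_loss`, `sample_graph`) and shapes are illustrative assumptions, and the student-matching step is omitted.
```python
import numpy as np

# Toy stand-in for a pretrained teacher GNN: maps an adjacency matrix
# (with fixed node features) to class logits via one message-passing
# layer and mean pooling. Only forward calls are used below.
rng = np.random.default_rng(0)
N, C = 6, 3                          # toy graph size / number of classes
X = rng.normal(size=(N, 4))          # node features (assumed fixed here)
W = rng.normal(size=(4, C))

def teacher_forward(A):
    return np.tanh(A @ X @ W).mean(axis=0)

def structure_loss(A, target_class=0):
    # Prefer graphs the teacher classifies confidently as `target_class`.
    logits = teacher_forward(A)
    p = np.exp(logits - logits.max())
    return -np.log(p[target_class] / p.sum())

# Learnable edge logits parameterize a distribution over topologies
# (independent Bernoulli edges here; the paper uses a multinomial model).
theta = np.zeros((N, N))

def sample_graph(theta):
    probs = 1.0 / (1.0 + np.exp(-theta))
    A = (rng.random((N, N)) < probs).astype(float)
    A = np.triu(A, 1)
    return A + A.T, probs            # undirected toy graph

def estimate_grad(theta, n_samples=32):
    # Score-function estimate of d(loss)/d(theta): loss(A) * dlog p(A)/dtheta,
    # averaged over samples. It needs only teacher forward passes, with no
    # back-propagation through the GNN. The elementwise score term (A - probs)
    # is approximate for the symmetrized sample; this is an illustration only,
    # not GFKD's multinomial estimator.
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        A, probs = sample_graph(theta)
        grad += structure_loss(A) * (A - probs)
    return grad / n_samples

for step in range(100):
    theta -= 0.5 * estimate_grad(theta)
```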
Related papers
- Improved Image Classification with Manifold Neural Networks [13.02854405679453]
Graph Neural Networks (GNNs) have gained popularity in various learning tasks.
In this paper, we explore GNNs' potential in general data representations, especially in the image domain.
We train a GNN to predict node labels corresponding to the image labels in the classification task, and leverage the convergence of GNNs to analyze their generalization.
arXiv Detail & Related papers (2024-09-19T19:55:33Z)
- Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity [30.2972965458946]
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification.
When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph or keep the full graph adjacency and node embeddings in memory.
This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size.
arXiv Detail & Related papers (2024-06-21T18:22:11Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a rough sketch of this sampling step follows below).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
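As an illustration of the Bernoulli sampling step mentioned above (a sketch under simple assumptions, not the paper's code): a graphon is a function W: [0,1]^2 -> [0,1], and an n-node graph is drawn by sampling latent coordinates u_i uniformly and connecting nodes i and j with probability W(u_i, u_j); training can then proceed on graphs of growing n. The example graphon `w` is an arbitrary choice.
```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Draw an n-node undirected graph from graphon W by Bernoulli edge sampling."""
    u = rng.random(n)                      # latent node coordinates in [0, 1]
    P = W(u[:, None], u[None, :])          # edge probabilities W(u_i, u_j)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T                         # symmetric, no self-loops

rng = np.random.default_rng(0)
w = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))   # an example graphon
# Growing-graph schedule: train on progressively larger sampled graphs.
for n in (32, 64, 128, 256):
    A = sample_from_graphon(w, n, rng)
    # ... GNN training steps on A would go here ...
```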
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
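For context on the graph convolutional filters referenced in this entry: a filter applies a polynomial of a graph shift operator S (e.g., an adjacency or Laplacian matrix) to a graph signal x, y = sum_k h_k S^k x, and permutation equivariance follows directly from this form. A minimal sketch of this standard definition (not the paper's code):
```python
import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial graph filter y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x)
    Skx = x.copy()                   # S^0 @ x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx                # advance to S^(k+1) @ x
    return y

# Permutation equivariance: relabeling the nodes permutes the output.
rng = np.random.default_rng(0)
S = rng.random((5, 5)); S = (S + S.T) / 2        # a symmetric shift operator
x = rng.normal(size=5)
h = [0.5, 0.3, 0.2]
P = np.eye(5)[rng.permutation(5)]                # permutation matrix
assert np.allclose(graph_filter(P @ S @ P.T, P @ x, h),
                   P @ graph_filter(S, x, h))
```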
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.