Graph-based Knowledge Distillation: A survey and experimental evaluation
- URL: http://arxiv.org/abs/2302.14643v1
- Date: Mon, 27 Feb 2023 11:39:23 GMT
- Title: Graph-based Knowledge Distillation: A survey and experimental evaluation
- Authors: Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao
- Abstract summary: Knowledge Distillation (KD) has been introduced to enhance existing Graph Neural Networks (GNNs).
KD involves transferring the soft-label supervision of the large teacher model to the small student model while maintaining prediction performance.
This paper first introduces the background of graphs and KD. It then provides a comprehensive summary of three types of Graph-based Knowledge Distillation methods.
- Score: 4.713436329217004
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs, such as citation networks, social networks, and transportation
networks, are prevalent in the real world. Graph Neural Networks (GNNs) have
gained widespread attention for their robust expressiveness and exceptional
performance in various graph applications. However, the efficacy of GNNs is
heavily reliant on abundant labeled data and complex network models; the former
is hard to obtain and the latter is costly to compute. To address the labeled
data scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been
introduced to enhance existing GNNs. This technique involves transferring the
soft-label supervision of the large teacher model to the small student model
while maintaining prediction performance. This survey offers a comprehensive
overview of Graph-based Knowledge Distillation methods, systematically
categorizing and summarizing them while discussing their limitations and future
directions. This paper first introduces the background of graphs and KD. It then
provides a comprehensive summary of three types of Graph-based Knowledge
Distillation methods, namely Graph-based Knowledge Distillation for deep neural
networks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and
Self-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).
Each type is further divided into knowledge distillation methods based on the
output layer, middle layer, and constructed graph. Subsequently, the ideas
behind the various algorithms are analyzed and compared, and the advantages and
disadvantages of each algorithm are summarized with supporting experimental results. In addition,
the applications of graph-based knowledge distillation in CV, NLP, RS, and
other fields are listed. Finally, graph-based knowledge distillation is
summarized and its future prospects are discussed. We have also released related resources
at https://github.com/liujing1023/Graph-based-Knowledge-Distillation.
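The soft-label transfer at the heart of the surveyed methods can be made concrete with a short sketch. Below is a minimal PyTorch version of a standard temperature-scaled distillation loss; the temperature, weighting, and toy tensors are illustrative assumptions, not settings taken from the survey.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: hard-label cross-entropy plus soft-label KL."""
    # Soft targets: the teacher's logits softened by temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 to keep gradient magnitudes comparable.
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * hard

# Toy usage: 8 nodes, 3 classes; in practice the logits would come from a
# large pre-trained teacher GNN and a compact student model.
teacher_logits = torch.randn(8, 3)
student_logits = torch.randn(8, 3, requires_grad=True)
labels = torch.randint(0, 3, (8,))
kd_loss(student_logits, teacher_logits, labels).backward()
```

In the survey's taxonomy, output-layer methods distill exactly this kind of logit-level signal; middle-layer and constructed-graph methods typically replace the KL term with losses over intermediate features or graph structures.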
Related papers
- Graph Relation Distillation for Efficient Biomedical Instance Segmentation [80.51124447333493]
We propose a graph relation distillation approach for efficient biomedical instance segmentation.
We introduce two graph distillation schemes deployed at both the intra-image level and the inter-image level.
Experimental results on a number of biomedical datasets validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-01-12T04:41:23Z)
- A Study on Knowledge Graph Embeddings and Graph Neural Networks for Web Of Things [0.0]
Orange's vision for a knowledge graph in the domain of the Web of Things (WoT) is to provide a digital representation of the physical world.
In this paper, we explore state-of-the-art knowledge graph embedding (KGE) methods to learn numerical representations of the graph entities.
We also investigate Graph neural networks (GNN) alongside KGEs and compare their performance on the same downstream tasks.
arXiv Detail & Related papers (2023-10-23T12:36:33Z)
- Knowledge Enhanced Graph Neural Networks for Graph Completion [0.0]
Knowledge Enhanced Graph Neural Networks (KeGNN) is a neuro-symbolic framework for graph completion.
KeGNN consists of a graph neural network as a base upon which knowledge enhancement layers are stacked.
We instantiate KeGNN in conjunction with two state-of-the-art graph neural networks, Graph Convolutional Networks and Graph Attention Networks.
arXiv Detail & Related papers (2023-03-27T07:53:43Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments on 13 datasets and observe that GRAPHRETRIEVAL achieves substantial improvements over existing GNNs.
arXiv Detail & Related papers (2022-06-01T09:59:09Z)
- Graph-level Neural Networks: Current Progress and Future Directions [61.08696673768116]
Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have attracted attention for their superiority in modeling high-dimensional data.
We propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.
arXiv Detail & Related papers (2022-05-31T06:16:55Z)
- Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation [41.00398052556643]
We propose a novel Adversarial Knowledge Distillation framework for graph models named GraphAKD.
The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator.
The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
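As a rough illustration of the adversarial objective this entry describes, the sketch below pits a discriminator against the student; the linear discriminator, representation size, and toy tensors are placeholder assumptions, not GraphAKD's actual architecture.

```python
import torch
import torch.nn.functional as F

dim = 64                     # representation size (illustrative)
D = torch.nn.Linear(dim, 1)  # discriminator over node representations
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def discriminator_step(teacher_h, student_h):
    # D learns to score teacher knowledge as real (1) and student output as fake (0).
    real = F.binary_cross_entropy_with_logits(D(teacher_h), torch.ones(teacher_h.size(0), 1))
    fake = F.binary_cross_entropy_with_logits(D(student_h.detach()), torch.zeros(student_h.size(0), 1))
    loss = real + fake
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()

def student_adversarial_loss(student_h):
    # The student GNN acts as a generator: it is rewarded when D mistakes
    # its representations for the teacher's.
    return F.binary_cross_entropy_with_logits(D(student_h), torch.ones(student_h.size(0), 1))

# Toy usage with random stand-ins for teacher and student node representations.
teacher_h = torch.randn(8, dim)
student_h = torch.randn(8, dim, requires_grad=True)
discriminator_step(teacher_h, student_h)
student_adversarial_loss(student_h).backward()
```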
arXiv Detail & Related papers (2022-05-24T00:04:43Z)
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that, compared to state-of-the-art Graph Neural Networks (GNNs), the proposed model achieves comparable accuracy.
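The entry names hyperdimensional computing (HDC) but not its encoding, so the following is a generic HDC-style graph encoding sketch (random bipolar node hypervectors, edge endpoints bound by element-wise multiplication, then bundled by summation), not necessarily GraphHD's exact scheme.

```python
import numpy as np

D = 10_000  # hypervector dimensionality, a typical HDC choice

def encode_graph(num_nodes, edges, rng):
    # Assign each node a random bipolar hypervector.
    node_hv = rng.choice([-1, 1], size=(num_nodes, D))
    # Bind the endpoints of each edge, then bundle all edges into one vector.
    graph_hv = np.zeros(D)
    for u, v in edges:
        graph_hv += node_hv[u] * node_hv[v]
    return np.sign(graph_hv)

rng = np.random.default_rng(0)
hv = encode_graph(4, [(0, 1), (1, 2), (2, 3)], rng)
# Classification then reduces to comparing hv against per-class prototype
# hypervectors, e.g. by cosine similarity.
```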
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator derives training graphs for distilling knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
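A minimal sketch of the data-free loop outlined above, with flat feature vectors standing in for generated graphs and linear layers standing in for the GNNs; every component and hyperparameter here is an illustrative assumption rather than DFAD-GNN's implementation.

```python
import torch
import torch.nn.functional as F

z_dim, x_dim, n_classes = 16, 32, 5          # illustrative sizes
G = torch.nn.Linear(z_dim, x_dim)            # generator: noise -> pseudo "graph" features
teacher = torch.nn.Linear(x_dim, n_classes)  # stand-in for a pre-trained teacher GNN
student = torch.nn.Linear(x_dim, n_classes)  # compact student being distilled
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    # Student step: minimize disagreement with the teacher on generated data.
    x = G(torch.randn(64, z_dim)).detach()
    s_loss = F.l1_loss(student(x), teacher(x).detach())
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    # Generator step: maximize the same disagreement, steering the generator
    # toward inputs on which the student has not yet matched the teacher.
    x = G(torch.randn(64, z_dim))
    g_loss = -F.l1_loss(student(x), teacher(x).detach())
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```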
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Graph Representation Learning by Ensemble Aggregating Subgraphs via Mutual Information Maximization [5.419711903307341]
We introduce a self-supervised learning method to enhance the graph-level representations learned by Graph Neural Networks.
To get a comprehensive understanding of the graph structure, we propose an ensemble-learning-like subgraph method.
To achieve efficient and effective contrastive learning, a Head-Tail contrastive sample construction method is proposed.
arXiv Detail & Related papers (2021-03-24T12:06:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.