RELIANT: Fair Knowledge Distillation for Graph Neural Networks
- URL: http://arxiv.org/abs/2301.01150v2
- Date: Wed, 4 Jan 2023 05:09:38 GMT
- Title: RELIANT: Fair Knowledge Distillation for Graph Neural Networks
- Authors: Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, Jundong Li
- Abstract summary: Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks.
Knowledge Distillation (KD) is a common solution to compress GNNs.
We propose a principled framework named RELIANT to mitigate the bias exhibited by the student model.
- Score: 39.22568244059485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
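To make the teacher-student compression described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of GNN knowledge distillation with an added fairness penalty. It is not the authors' implementation: the TinyGCN student, the demographic-parity term, and all hyperparameters are illustrative assumptions, and RELIANT's actual debiasing design (which the abstract notes is decoupled from specific teacher/student structures) is not reproduced here.

```python
# Minimal sketch (not the RELIANT implementation) of GNN knowledge distillation
# with a fairness-aware regularizer. The architecture, the demographic-parity
# penalty, and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """A small two-layer GCN used as the lightweight student model."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, adj_norm, x):
        # adj_norm: dense normalized adjacency (N x N); x: node features (N x F)
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)  # node logits (N x C)


def distillation_loss(student_logits, teacher_logits, labels, train_mask,
                      tau=2.0, alpha=0.5):
    """Standard KD objective: cross-entropy on labels plus KL to the teacher's
    temperature-softened predictions (teacher logits are assumed frozen)."""
    ce = F.cross_entropy(student_logits[train_mask], labels[train_mask])
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau ** 2
    return alpha * ce + (1 - alpha) * kd


def demographic_parity_penalty(student_logits, sensitive):
    """Illustrative fairness term for binary classification: gap in mean
    positive-class probability between the two sensitive groups (both groups
    are assumed non-empty)."""
    p = F.softmax(student_logits, dim=-1)[:, 1]
    return (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()
```

In a training loop, the teacher's logits would be precomputed and detached, and the student would minimize distillation_loss(...) plus a weighted demographic_parity_penalty(...); the weight on the fairness term trades prediction utility against bias, which is the tension the paper targets.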
Related papers
- IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks [68.6374698896505]
Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications.
Privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers.
We propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs.
arXiv Detail & Related papers (2024-07-28T04:59:59Z) - A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation [58.813991312803246]
We propose a Teacher-Free Graph Self-Distillation (TGS) framework that does not require any teacher model or GNNs during both training and inference.
TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference.
arXiv Detail & Related papers (2024-03-06T05:52:13Z) - ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - Boosting Graph Neural Networks via Adaptive Knowledge Distillation [18.651451228086643]
Graph neural networks (GNNs) have shown remarkable performance on diverse graph mining tasks.
Knowledge distillation (KD) has been developed to combine the diverse knowledge from multiple models.
We propose a novel adaptive KD framework, called BGNN, which sequentially transfers knowledge from multiple GNNs into a student GNN.
arXiv Detail & Related papers (2022-10-12T04:48:50Z) - EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z) - AKE-GNN: Effective Graph Learning with Adaptive Knowledge Exchange [14.919474099848816]
Graph Neural Networks (GNNs) have already been widely used in various graph mining tasks.
Recent works reveal that the learned weights (channels) in well-trained GNNs are highly redundant, which limits the performance of GNNs.
We introduce a novel GNN learning framework named AKE-GNN, which performs the Adaptive Knowledge Exchange strategy.
arXiv Detail & Related papers (2021-06-10T02:00:26Z) - Graph-Free Knowledge Distillation for Graph Neural Networks [30.38128029453977]
We propose the first dedicated approach to distilling knowledge from a graph neural network without graph data.
The proposed graph-free KD (GFKD) learns graph topology structures for knowledge transfer by modeling them with a multinomial distribution.
We provide the strategies for handling different types of prior knowledge in the graph data or the GNNs.
arXiv Detail & Related papers (2021-05-16T21:38:24Z) - FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks [68.64678614325193]
Graph Neural Network (GNN) research is rapidly growing thanks to the capacity of GNNs to learn representations from graph-structured data.
Centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns.
We introduce FedGraphNN, an open research federated learning system and a benchmark to facilitate GNN-based FL research.
arXiv Detail & Related papers (2021-04-14T22:11:35Z)