The Devil is in the Data: Learning Fair Graph Neural Networks via
Partial Knowledge Distillation
- URL: http://arxiv.org/abs/2311.17373v1
- Date: Wed, 29 Nov 2023 05:54:58 GMT
- Title: The Devil is in the Data: Learning Fair Graph Neural Networks via
Partial Knowledge Distillation
- Authors: Yuchang Zhu, Jintang Li, Liang Chen, Zibin Zheng
- Abstract summary: Graph neural networks (GNNs) are being increasingly used in many high-stakes tasks.
GNNs have been shown to be unfair as they tend to make discriminatory decisions toward certain demographic groups.
We present a demographic-agnostic method to learn fair GNNs via knowledge distillation, namely FairGKD.
- Score: 35.17007613884196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are being increasingly used in many high-stakes
tasks, and as a result, their fairness has recently attracted growing attention.
GNNs have been shown to be unfair as they tend to make discriminatory decisions
toward certain demographic groups, divided by sensitive attributes such as
gender and race. While recent works have been devoted to improving their
fairness performance, they often require accessible demographic information.
This greatly limits their applicability in real-world scenarios due to legal
restrictions. To address this problem, we present a demographic-agnostic method
to learn fair GNNs via knowledge distillation, namely FairGKD. Our work is
motivated by the empirical observation that training GNNs on partial data
(i.e., only node attributes or topology data) can improve their fairness,
albeit at the cost of utility. To make a balanced trade-off between fairness
and utility performance, we employ a set of fairness experts (i.e., GNNs
trained on different partial data) to construct the synthetic teacher, which
distills fairer and more informative knowledge to guide the learning of the GNN
student. Experiments on several benchmark datasets demonstrate that FairGKD,
which does not require access to demographic information, improves the fairness
of GNNs by a large margin while maintaining their utility.
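To make the partial knowledge distillation idea concrete, the following is a minimal sketch in PyTorch / PyTorch Geometric. It is not the authors' released implementation: the expert architectures, the simple averaging used to build the synthetic teacher, and the loss weighting (alpha, temperature) are illustrative assumptions; only the overall scheme (an attribute-only expert and a topology-only expert distilled into a GNN student) follows the abstract. The attribute-only expert is written as an MLP here, which is what a GNN reduces to on a graph with only self-loops.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class AttrExpert(torch.nn.Module):
        """Fairness expert trained on node attributes only (graph structure ignored)."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.lin1 = torch.nn.Linear(in_dim, hid_dim)
            self.lin2 = torch.nn.Linear(hid_dim, out_dim)

        def forward(self, x):
            return self.lin2(F.relu(self.lin1(x)))

    class TopoExpert(torch.nn.Module):
        """Fairness expert trained on topology only (attributes replaced by a constant)."""
        def __init__(self, hid_dim, out_dim):
            super().__init__()
            self.conv1 = GCNConv(1, hid_dim)
            self.conv2 = GCNConv(hid_dim, out_dim)

        def forward(self, edge_index, num_nodes):
            x = torch.ones(num_nodes, 1, device=edge_index.device)  # structure-only input
            return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

    class StudentGNN(torch.nn.Module):
        """The GNN student, trained on the full data under teacher guidance."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid_dim)
            self.conv2 = GCNConv(hid_dim, out_dim)

        def forward(self, x, edge_index):
            return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

    def student_loss(student_logits, teacher_logits, labels, train_mask,
                     alpha=0.5, temperature=2.0):
        """Task loss on labelled nodes plus KL distillation toward the synthetic teacher."""
        task = F.cross_entropy(student_logits[train_mask], labels[train_mask])
        kd = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        return alpha * task + (1.0 - alpha) * kd

    # Synthetic teacher: average the predictions of the pre-trained, frozen experts.
    # with torch.no_grad():
    #     teacher_logits = 0.5 * (attr_expert(data.x)
    #                             + topo_expert(data.edge_index, data.num_nodes))

Freezing the experts and averaging their soft predictions is the simplest way to form a synthetic teacher; how FairGKD actually combines the experts and balances the two loss terms is specified in the paper itself.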
Related papers
- Towards Fair Graph Representation Learning in Social Networks [20.823461673845756]
We introduce constraints for fair representation learning based on three principles: sufficiency, independence, and separation (formalized briefly after this list).
We theoretically demonstrate that our EAGNN method can effectively achieve group fairness.
arXiv Detail & Related papers (2024-10-15T10:57:02Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Migrate Demographic Group For Fair GNNs [23.096452685455134]
Graph neural networks (GNNs) have been applied in many scenarios due to their superior performance in graph learning.
FairMigration is composed of two training stages. In the first stage, the GNNs are initially optimized by personalized self-supervised learning.
In the second stage, the new demographic groups are frozen and supervised learning is carried out under the constraints of new demographic groups and adversarial training.
arXiv Detail & Related papers (2023-06-07T07:37:01Z)
- Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves entity-to-relation proximity information in knowledge graphs.
We develop two instances of Fair-KGNN incorporating two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
arXiv Detail & Related papers (2022-11-26T05:55:34Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information [37.90997236795843]
Graph neural networks (GNNs) have shown great power in modeling graph structured data.
GNNs may make predictions biased with respect to protected sensitive attributes, e.g., skin color and gender.
We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy.
arXiv Detail & Related papers (2020-09-03T05:17:30Z)
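For reference, the three principles named in the EAGNN entry above (sufficiency, independence, and separation) are commonly formalized as conditional-independence statements relating the prediction, the label, and the sensitive attribute. These are the standard group-fairness definitions, not necessarily the exact formulation used in that paper:

    \text{Independence: } \hat{Y} \perp S, \qquad
    \text{Separation: } \hat{Y} \perp S \mid Y, \qquad
    \text{Sufficiency: } Y \perp S \mid \hat{Y},

where \hat{Y} is the model prediction, Y the ground-truth label, and S the sensitive attribute (e.g., gender or race).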