Say No to the Discrimination: Learning Fair Graph Neural Networks with
Limited Sensitive Attribute Information
- URL: http://arxiv.org/abs/2009.01454v5
- Date: Fri, 15 Oct 2021 14:05:53 GMT
- Title: Say No to the Discrimination: Learning Fair Graph Neural Networks with
Limited Sensitive Attribute Information
- Authors: Enyan Dai, Suhang Wang
- Abstract summary: Graph neural networks (GNNs) have shown great power in modeling graph structured data.
GNNs may make predictions biased on protected sensitive attributes, e.g., skin color and gender.
We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy.
- Score: 37.90997236795843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have shown great power in modeling graph
structured data. However, similar to other machine learning models, GNNs may
make predictions biased on protected sensitive attributes, e.g., skin color and
gender. This is because machine learning algorithms, including GNNs, are
trained to reflect the distribution of the training data, which often contains
historical bias towards sensitive attributes. In addition, the discrimination in GNNs can
be magnified by graph structures and the message-passing mechanism. As a
result, the applications of GNNs in sensitive domains such as crime rate
prediction would be largely limited. Though extensive studies of fair
classification have been conducted on i.i.d. data, methods to address the
problem of discrimination on non-i.i.d. data are rather limited. Furthermore,
the practical scenario of sparse annotations in sensitive attributes is rarely
considered in existing works. Therefore, we study the novel and important
problem of learning fair GNNs with limited sensitive attribute information.
FairGNN is proposed to eliminate the bias of GNNs whilst maintaining high node
classification accuracy by leveraging graph structures and limited sensitive
information. Our theoretical analysis shows that FairGNN can ensure the
fairness of GNNs under mild conditions given limited nodes with known sensitive
attributes. Extensive experiments on real-world datasets also demonstrate the
effectiveness of FairGNN in debiasing and keeping high accuracy.
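As a reader's aid, the group-fairness notions that papers like FairGNN typically evaluate, statistical parity and equal opportunity, can be sketched as below. The function names and toy data are illustrative and do not come from the paper itself:

```python
import numpy as np

def statistical_parity_diff(y_pred, s):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|: gap in positive prediction rates between groups."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_diff(y_pred, y_true, s):
    """|P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|: gap in true-positive rates between groups."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    pos = y_true == 1
    return abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())

# toy predictions, labels, and a binary sensitive attribute (hypothetical data)
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_diff(y_pred, s))           # -> 0.5
print(equal_opportunity_diff(y_pred, y_true, s))    # -> 0.5
```

A perfectly fair classifier under these definitions drives both gaps to zero; debiasing methods such as FairGNN aim to minimize them while preserving accuracy, estimating the sensitive attribute `s` for nodes where it is unknown.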
Related papers
- The Devil is in the Data: Learning Fair Graph Neural Networks via
Partial Knowledge Distillation [35.17007613884196]
Graph neural networks (GNNs) are being increasingly used in many high-stakes tasks.
GNNs have been shown to be unfair as they tend to make discriminatory decisions toward certain demographic groups.
We present a demographic-agnostic method to learn fair GNNs via knowledge distillation, namely FairGKD.
arXiv Detail & Related papers (2023-11-29T05:54:58Z) - ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z) - Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z) - Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z) - Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z) - Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves the proximity information of entity-to-relation in knowledge graphs.
We develop two instances of Fair-KGNN incorporating two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
arXiv Detail & Related papers (2022-11-26T05:55:34Z) - On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfying performance in various graph analytical problems.
GNNs could yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
arXiv Detail & Related papers (2022-06-24T06:49:21Z) - Improving Fairness in Graph Neural Networks via Mitigating Sensitive
Attribute Leakage [35.810534649478576]
Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs.
GNNs may inherit historical prejudices from training data, leading to discriminatory bias in predictions.
We propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features.
arXiv Detail & Related papers (2022-06-07T16:25:20Z) - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy,
Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in the recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
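Several entries above attribute bias amplification to graph structure and the message-passing mechanism. A minimal sketch of the underlying effect, that mean aggregation over a homophilous graph keeps node representations strongly correlated with group membership, is shown below; the toy adjacency and feature values are hypothetical and not taken from any of the listed papers:

```python
import numpy as np

# Toy homophilous graph: nodes 0-3 in group A, nodes 4-7 in group B,
# with edges mostly inside each group and a single bridge edge (3, 4).
A = np.zeros((8, 8))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3),
             (4, 5), (5, 6), (6, 7), (4, 7), (3, 4)]:
    A[i, j] = A[j, i] = 1
A += np.eye(8)                       # self-loops, as in GCN-style aggregation
deg = A.sum(axis=1, keepdims=True)

# Scalar node feature perfectly correlated with group membership.
H = np.array([[1.0]] * 4 + [[0.0]] * 4)

for _ in range(2):                   # two rounds of mean aggregation
    H = (A @ H) / deg

print(H.round(2).ravel())
```

Because most neighbors share a node's group, the aggregated representations stay close to 1 for group A and close to 0 for group B, so a downstream classifier can still exploit the sensitive grouping even if the raw attribute is removed; this is the structural leakage that methods like FairGNN and FairVGNN target.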
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.