Better Fair than Sorry: Adversarial Missing Data Imputation for Fair
GNNs
- URL: http://arxiv.org/abs/2311.01591v2
- Date: Thu, 15 Feb 2024 17:48:33 GMT
- Title: Better Fair than Sorry: Adversarial Missing Data Imputation for Fair
GNNs
- Authors: Debolina Halder Lina and Arlei Silva
- Abstract summary: This paper addresses the problem of learning fair Graph Neural Networks (GNNs) under missing protected attributes.
We propose Better Fair than Sorry (BFtS), a fair missing data imputation model for protected attributes used by fair GNNs.
- Score: 6.680930089714339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the problem of learning fair Graph Neural Networks
(GNNs) under missing protected attributes. GNNs have achieved state-of-the-art
results in many relevant tasks where decisions might disproportionately impact
specific communities. However, existing work on fair GNNs assumes that either
protected attributes are fully-observed or that the missing data imputation is
fair. In practice, biases in the imputation will be propagated to the model
outcomes, leading them to overestimate the fairness of their predictions. We
address this challenge by proposing Better Fair than Sorry (BFtS), a fair
missing data imputation model for protected attributes used by fair GNNs. The
key design principle behind BFtS is that imputations should approximate the
worst-case scenario for the fair GNN -- i.e. when optimizing fairness is the
hardest. We implement this idea using a 3-player adversarial scheme where two
adversaries collaborate against the fair GNN. Experiments using synthetic and
real datasets show that BFtS often achieves a better fairness $\times$ accuracy
trade-off than existing alternatives.
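The 3-player idea from the abstract can be pictured with a short training-loop sketch: an imputation network fills in missing protected attributes, a fairness adversary tries to recover the protected attribute from the GNN's predictions, and the two are optimized together so the imputations approximate the worst case for the fair GNN, which in turn fits the task labels while trying to defeat the adversary. The following PyTorch sketch is illustrative only; the one-step feature propagation, module sizes, loss weights, and the demographic-parity readout are toy assumptions, not the authors' BFtS implementation.

```python
# Minimal sketch (not the authors' code): the imputer and the adversary jointly work
# against the fair GNN, so imputed protected attributes approximate the worst case.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 200, 16                                   # nodes, feature dimension (toy sizes)
X = torch.randn(n, d)                            # node features
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T + torch.eye(n)) > 0).float()       # symmetric adjacency with self-loops
A_hat = A / A.sum(dim=1, keepdim=True)           # row-normalized propagation matrix
y = torch.randint(0, 2, (n,)).float()            # task labels
s_true = torch.randint(0, 2, (n,)).float()       # protected attribute
observed = torch.rand(n) < 0.5                   # which protected attributes are observed

gnn = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))      # player 1: fair GNN head
imputer = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))  # player 2: imputation model
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # player 3: fairness adversary
opt_gnn = torch.optim.Adam(gnn.parameters(), lr=1e-2)
opt_imp = torch.optim.Adam(imputer.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
lam = 0.5                                        # fairness weight (assumed)

for step in range(200):
    H = A_hat @ X                                # one propagation step as a stand-in GNN layer
    logits = gnn(H).squeeze(-1)                  # task predictions
    s_hat = torch.sigmoid(imputer(H).squeeze(-1))
    s_full = torch.where(observed, s_true, s_hat)

    # Adversary: recover the protected attribute from the GNN's predictions.
    adv_logits = adversary(logits.detach().unsqueeze(-1)).squeeze(-1)
    loss_adv = F.binary_cross_entropy_with_logits(adv_logits, s_full.detach())
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()

    # Imputer: choose imputations the adversary can exploit (worst case for the fair GNN)
    # while matching the protected attribute where it is actually observed.
    p_adv = torch.sigmoid(adversary(logits.detach().unsqueeze(-1)).squeeze(-1))
    agree = s_hat * torch.log(p_adv + 1e-8) + (1 - s_hat) * torch.log(1 - p_adv + 1e-8)
    loss_imp = -agree[~observed].mean() + F.binary_cross_entropy(s_hat[observed], s_true[observed])
    opt_imp.zero_grad(); loss_imp.backward(); opt_imp.step()

    # Fair GNN: fit the task labels while making the adversary fail on the imputed attribute.
    logits = gnn(A_hat @ X).squeeze(-1)
    adv_logits = adversary(logits.unsqueeze(-1)).squeeze(-1)
    s_full = torch.where(observed, s_true, torch.sigmoid(imputer(A_hat @ X).squeeze(-1))).detach()
    loss_gnn = (F.binary_cross_entropy_with_logits(logits, y)
                - lam * F.binary_cross_entropy_with_logits(adv_logits, s_full))
    opt_gnn.zero_grad(); loss_gnn.backward(); opt_gnn.step()

# Rough fairness x accuracy readout: demographic parity gap vs. accuracy.
with torch.no_grad():
    pred = (torch.sigmoid(gnn(A_hat @ X).squeeze(-1)) > 0.5).float()
    gap = (pred[s_true == 1].mean() - pred[s_true == 0].mean()).abs()
    acc = (pred == y).float().mean()
    print(f"accuracy={acc:.3f}  demographic-parity gap={gap:.3f}")
```

Swapping the linear layers for real graph convolutions and the parity gap for other fairness metrics keeps the same structure; the point is only that the imputer and the adversary are optimized jointly against the fair GNN, mirroring the worst-case principle described in the abstract.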
Related papers
- Learning Counterfactually Fair Models via Improved Generation with Neural Causal Models [0.0]
One of the main concerns while deploying machine learning models in real-world applications is fairness.
Existing methodologies for enforcing counterfactual fairness seem to have two limitations.
We propose employing Neural Causal Models for generating the counterfactual samples.
We also propose a new MMD-based regularizer term that explicitly enforces the counterfactual fairness conditions into the base model while training.
arXiv Detail & Related papers (2025-02-18T11:59:03Z)
- Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections [28.86365261170078]
Research has revealed fairness vulnerabilities in Graph Neural Networks (GNNs) under malicious adversarial attacks.
We introduce a Node Injection-based Fairness Attack (NIFA) that explores the vulnerability of GNN fairness in a more realistic setting.
NIFA can significantly undermine the fairness of mainstream GNNs, including even fairness-aware GNNs, by injecting merely 1% of nodes.
arXiv Detail & Related papers (2024-06-05T08:26:53Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose GraphPAR, a novel adapter-tuning framework that endows pre-trained graph models with provable fairness.
Specifically, we design a sensitive semantic augmenter that extends each node's representation with different sensitive-attribute semantics.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- The Devil is in the Data: Learning Fair Graph Neural Networks via Partial Knowledge Distillation [35.17007613884196]
Graph neural networks (GNNs) are being increasingly used in many high-stakes tasks.
GNNs have been shown to be unfair as they tend to make discriminatory decisions toward certain demographic groups.
We present a demographic-agnostic method to learn fair GNNs via knowledge distillation, namely FairGKD.
arXiv Detail & Related papers (2023-11-29T05:54:58Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers can easily corrupt the fairness of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify bias from training data, raising concerns about their adoption in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in the recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs from the computational perspectives of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, a notion that accounts for biases induced by a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
- Convex Fairness Constrained Model Using Causal Effect Estimators [6.414055487487486]
We devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias.
We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks.
arXiv Detail & Related papers (2020-02-16T03:40:04Z)