Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs
- URL: http://arxiv.org/abs/2311.01591v2
- Date: Thu, 15 Feb 2024 17:48:33 GMT
- Title: Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs
- Authors: Debolina Halder Lina and Arlei Silva
- Abstract summary: This paper addresses the problem of learning fair Graph Neural Networks (GNNs) under missing protected attributes.
We propose Better Fair than Sorry (BFtS), a fair missing data imputation model for protected attributes used by fair GNNs.
- Score: 6.680930089714339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the problem of learning fair Graph Neural Networks
(GNNs) under missing protected attributes. GNNs have achieved state-of-the-art
results in many relevant tasks where decisions might disproportionately impact
specific communities. However, existing work on fair GNNs assumes that either
protected attributes are fully observed or that the missing data imputation is
fair. In practice, biases in the imputation propagate to the model's outcomes,
leading the model to overestimate the fairness of its predictions. We
address this challenge by proposing Better Fair than Sorry (BFtS), a fair
missing data imputation model for protected attributes used by fair GNNs. The
key design principle behind BFtS is that imputations should approximate the
worst-case scenario for the fair GNN -- i.e. when optimizing fairness is the
hardest. We implement this idea using a 3-player adversarial scheme where two
adversaries collaborate against the fair GNN. Experiments using synthetic and
real datasets show that BFtS often achieves a better fairness $\times$ accuracy
trade-off than existing alternatives.
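As a rough illustration of the 3-player scheme, the sketch below pairs an imputation network and a protected-attribute adversary against a fair GNN in alternating updates. This is not the authors' implementation: the toy graph, the module architectures, the exposure/consistency losses, and the 0.5 fairness weight are all assumptions made for the sketch; BFtS's actual objectives are defined in the paper.

```python
# Minimal sketch of a 3-player adversarial scheme for fair GNN training under
# missing protected attributes (NOT the authors' code; all data and losses are toy).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Dense GCN-style layer: H' = act(A_hat @ H @ W)."""
    def __init__(self, d_in, d_out, act=True):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.act = act
    def forward(self, a_hat, h):
        h = self.lin(a_hat @ h)
        return torch.relu(h) if self.act else h

n, d, h_dim = 100, 16, 32
x = torch.randn(n, d)                              # toy node features
a = (torch.rand(n, n) < 0.05).float()              # toy adjacency matrix
a_hat = a + torch.eye(n)
a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)     # row-normalized propagation
y = torch.randint(0, 2, (n,)).float()              # task labels
s_obs = torch.randint(0, 2, (n,)).float()          # protected attribute
mask = torch.arange(n) % 2 == 0                    # True where s is observed

encoder = GCNLayer(d, h_dim)                       # player 1: fair GNN encoder ...
classifier = nn.Linear(h_dim, 1)                   # ... plus its task head
imputer = GCNLayer(d, 1, act=False)                # player 2: imputes missing s
adversary = nn.Linear(h_dim, 1)                    # player 3: predicts s from embeddings

opt_gnn = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(list(imputer.parameters()) + list(adversary.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
eps = 1e-6

for step in range(200):
    # Adversaries' turn: approximate the worst case for fairness. The imputer and
    # the adversary jointly try to make the (imputed) protected attribute easy to
    # recover from the frozen GNN embeddings, while the imputer stays consistent
    # with the observed protected attributes.
    z = encoder(a_hat, x).detach()
    s_logits = imputer(a_hat, x).squeeze(-1)
    s_full = torch.where(mask, s_obs, torch.sigmoid(s_logits))
    p_adv = torch.sigmoid(adversary(z)).squeeze(-1)
    loss_expose = -(s_full * torch.log(p_adv + eps)
                    + (1 - s_full) * torch.log(1 - p_adv + eps)).mean()
    loss_fit = bce(s_logits[mask], s_obs[mask])
    opt_adv.zero_grad(); (loss_expose + loss_fit).backward(); opt_adv.step()

    # Fair GNN's turn: predict labels while hiding the imputed protected attribute
    # from the adversary (0.5 is an arbitrary fairness weight).
    z = encoder(a_hat, x)
    with torch.no_grad():
        s_full = torch.where(mask, s_obs, torch.sigmoid(imputer(a_hat, x).squeeze(-1)))
    loss_task = bce(classifier(z).squeeze(-1), y)
    loss_fair = bce(adversary(z).squeeze(-1), s_full)
    opt_gnn.zero_grad(); (loss_task - 0.5 * loss_fair).backward(); opt_gnn.step()
```

Alternating the adversaries' and the GNN's updates is one standard way to train such minimax objectives; the paper's exact schedule and loss terms may differ.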
Related papers
- Learning Counterfactually Fair Models via Improved Generation with Neural Causal Models [0.0]
One of the main concerns while deploying machine learning models in real-world applications is fairness.
Existing methodologies for enforcing counterfactual fairness seem to have two limitations.
We propose employing Neural Causal Models for generating the counterfactual samples.
We also propose a new MMD-based regularizer term that explicitly enforces the counterfactual fairness conditions on the base model during training (a minimal sketch of such a regularizer follows below).
arXiv Detail & Related papers (2025-02-18T11:59:03Z)
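A minimal sketch of what an MMD-based counterfactual-fairness regularizer can look like (not the paper's code; the RBF kernel, bandwidth, regularization weight, and the way counterfactual samples are obtained are assumptions):

```python
import torch
import torch.nn.functional as F

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2))."""
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def mmd2(p: torch.Tensor, q: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of the squared MMD between two sets of samples."""
    return (rbf_kernel(p, p, sigma).mean()
            - 2 * rbf_kernel(p, q, sigma).mean()
            + rbf_kernel(q, q, sigma).mean())

# Toy usage: penalize the gap between the model's output distributions on
# factual inputs and on their counterfactuals (assumed to be given here).
model = torch.nn.Linear(8, 2)              # stand-in for the base model
x_factual = torch.randn(64, 8)
x_counterfactual = torch.randn(64, 8)      # e.g. produced by a causal generator
y = torch.randint(0, 2, (64,))
task_loss = F.cross_entropy(model(x_factual), y)
fair_reg = mmd2(model(x_factual), model(x_counterfactual))
loss = task_loss + 1.0 * fair_reg          # the regularization weight is arbitrary
loss.backward()
```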
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework, CAF, which selects counterfactuals from the training data to avoid non-realistic counterfactuals (see the sketch below).
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
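One plausible reading of "selecting counterfactuals from training data" is a nearest-neighbor search restricted to examples with the same label but the opposite sensitive attribute. The sketch below illustrates only that idea; CAF's actual selection procedure is defined in the paper.

```python
import torch

def select_counterfactuals(x: torch.Tensor, y: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """For each row of x, return the index of the closest training example that
    has the same label but the opposite sensitive attribute."""
    dist = torch.cdist(x, x)                          # pairwise feature distances
    valid = (y.unsqueeze(0) == y.unsqueeze(1)) & (s.unsqueeze(0) != s.unsqueeze(1))
    dist = dist.masked_fill(~valid, float("inf"))     # rule out invalid candidates
    return dist.argmin(dim=1)                         # nearest valid neighbor

x = torch.randn(50, 8)                                # toy node features
y = torch.randint(0, 2, (50,))                        # labels
s = torch.randint(0, 2, (50,))                        # sensitive attribute
cf_idx = select_counterfactuals(x, y, s)              # x[cf_idx] are real, observed samples
```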
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases induced by both a node's own and its neighbors' sensitive attributes.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes (see the sketch below).
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
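A minimal sketch of the perturbation step described in this entry: flip the sensitive attribute of a node and of its neighbors to obtain a counterfactual input. The column index, the toy data, and the suggested training use are assumptions, not the paper's procedure.

```python
import torch

def flip_sensitive(x: torch.Tensor, adj: torch.Tensor, node: int, s_col: int = 0) -> torch.Tensor:
    """Return a copy of x in which the binary sensitive feature (column s_col)
    is flipped for `node` and for all of its neighbors."""
    x_cf = x.clone()
    neighbors = adj[node].nonzero(as_tuple=True)[0]
    targets = torch.cat([torch.tensor([node]), neighbors])
    x_cf[targets, s_col] = 1.0 - x_cf[targets, s_col]
    return x_cf

n, d = 30, 8
x = torch.rand(n, d).round()                    # toy binary node features
adj = (torch.rand(n, n) < 0.1).float()
adj.fill_diagonal_(0)                           # avoid flipping the node twice via a self-loop
x_cf = flip_sensitive(x, adj, node=3)
# A fairness objective can then penalize the gap between node 3's embedding
# computed from (x, adj) and from (x_cf, adj), e.g. a squared distance.
```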
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
- Convex Fairness Constrained Model Using Causal Effect Estimators [6.414055487487486]
We devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias.
We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks.
arXiv Detail & Related papers (2020-02-16T03:40:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.