MAPPING: Debiasing Graph Neural Networks for Fair Node Classification
with Limited Sensitive Information Leakage
- URL: http://arxiv.org/abs/2401.12824v1
- Date: Tue, 23 Jan 2024 14:59:46 GMT
- Title: MAPPING: Debiasing Graph Neural Networks for Fair Node Classification
with Limited Sensitive Information Leakage
- Authors: Ying Song and Balaji Palanisamy
- Abstract summary: We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING achieves better trade-offs between utility and fairness while mitigating privacy risks of sensitive information leakage.
- Score: 1.8238848494579714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite remarkable success in diverse web-based applications, Graph
Neural Networks (GNNs) inherit and further exacerbate historical discrimination
and social stereotypes, which critically hinders their deployment in
high-stakes domains such as online clinical diagnosis and financial crediting.
However, current fairness research, which primarily targets i.i.d. data, cannot
be trivially transferred to non-i.i.d. graph structures with topological
dependence among samples. Existing fair graph learning typically favors
pairwise constraints to achieve fairness but cannot overcome their dimensional
limitations to generalize to multiple sensitive attributes. Moreover, most
studies focus on in-processing techniques that enforce and calibrate fairness
during training; constructing a model-agnostic debiasing GNN framework at the
pre-processing stage, which would prevent downstream misuse and improve
training reliability, remains largely under-explored. Furthermore, previous
work on GNNs tends to enhance either fairness or privacy individually, and few
studies probe their interplay. In this paper, we propose a novel model-agnostic
debiasing framework named MAPPING (Masking And Pruning and Message-Passing
trainING) for fair node classification, in which we adopt distance covariance
(dCov)-based fairness constraints to simultaneously reduce feature and topology
biases across sensitive attributes of arbitrary dimension, and combine them
with adversarial debiasing to limit the risk of attribute inference attacks.
Experiments on real-world datasets with different GNN variants demonstrate the
effectiveness and flexibility of MAPPING. Our results show that MAPPING
achieves better trade-offs between utility and fairness while mitigating
privacy risks of sensitive information leakage.
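For intuition, here is a minimal PyTorch sketch of the two mechanisms the abstract names: a distance-covariance (dCov) penalty measuring dependence between node embeddings and sensitive attributes of arbitrary dimension, and a gradient-reversal layer of the kind used in adversarial debiasing. All identifiers (dcov2, GradReverse, model, lambda_fair, lambda_adv) are illustrative assumptions, not the authors' MAPPING implementation, which additionally performs feature masking and edge pruning at the pre-processing stage.

```python
# Hedged sketch: dCov-based fairness penalty plus a gradient-reversal layer,
# approximating the two ingredients named in the abstract. Not the authors'
# code; all names here are assumptions for illustration.
import torch


def dcov2(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Squared sample distance covariance between x (n, p) and y (n, q).

    Because dCov is defined for vectors of any dimension, y may stack
    several sensitive attributes at once.
    """
    a = torch.cdist(x, x)  # pairwise Euclidean distances, shape (n, n)
    b = torch.cdist(y, y)
    # Double-center each distance matrix.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    return (A * B).mean()  # (1/n^2) * sum_jk A_jk * B_jk >= 0


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass.

    An adversary trained to predict sensitive attributes from embeddings
    passed through this layer pushes the encoder to make those attributes
    unpredictable, limiting attribute-inference leakage.
    """

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


# Illustrative composite objective (model, data, lambda_* are assumed):
# z = model.embed(data.x, data.edge_index)
# loss = task_loss(z, data.y) \
#        + lambda_fair * dcov2(z, data.sensitive.float()) \
#        + lambda_adv * adversary_loss(adv_head(GradReverse.apply(z)),
#                                      data.sensitive_labels)
```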
Related papers
- Disentangling, Amplifying, and Debiasing: Learning Disentangled Representations for Fair Graph Neural Networks [22.5976413484192]
We propose a novel GNN framework, DAB-GNN, that Disentangles, Amplifies, and deBiases attribute, structure, and potential biases in the GNN mechanism.
DAB-GNN significantly outperforms ten state-of-the-art competitors in achieving an optimal balance between accuracy and fairness.
arXiv Detail & Related papers (2024-08-23T07:14:56Z)
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as limited generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement [33.565252991113766]
Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection.
Current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups.
We devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework for attributed graphs, named DEFEND.
Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-06-03T04:48:45Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yield up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN achieves remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks for Node Classification [2.8282906214258796]
Graph neural networks (GNNs) are increasingly used in critical human applications for predicting node labels in attributed graphs.
We propose two new GNN-agnostic interventions: PFR-AX, which decreases the separability between nodes in protected and non-protected groups, and PostProcess, which updates model predictions based on a black-box policy.
Our results show that no single intervention offers a universally optimal tradeoff, but PFR-AX and PostProcess provide granular control and improve model confidence when correctly predicting positive outcomes for nodes in protected groups.
arXiv Detail & Related papers (2023-08-18T14:45:28Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to a 17.0% AUROC improvement over state-of-the-art methods, and it can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- FairNorm: Fair and Fast Graph Neural Network Training [9.492903649862761]
Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks.
It has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups.
This work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning.
arXiv Detail & Related papers (2022-05-20T06:10:27Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that accepts some degree of bias in order to reduce variance and avoid unstable, possibly unbounded payouts; a toy sketch of this bandit view follows below.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
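As a rough illustration of the bandit view of neighbor sampling described in the entry above, here is a toy UCB-style sampler. The class interface and the reward definition are generic assumptions for illustration and do not reproduce the paper's near-optimal-regret construction.

```python
# Toy sketch: GNN neighbor sampling treated as a multi-armed bandit, where
# each neighbor of a node is an arm and rewards steer sampling toward
# variance-reducing neighbors. The UCB rule and this interface are
# assumptions, not the paper's construction.
import math
from collections import defaultdict


class BanditNeighborSampler:
    def __init__(self, adj: dict, c: float = 1.0):
        self.adj = adj        # node id -> list of neighbor ids
        self.c = c            # exploration strength
        self.counts = defaultdict(lambda: defaultdict(int))    # pulls per arm
        self.values = defaultdict(lambda: defaultdict(float))  # mean rewards
        self.pulls = defaultdict(int)                          # pulls per node

    def sample(self, node: int) -> int:
        """Pick one neighbor of `node` with an upper-confidence-bound rule."""
        self.pulls[node] += 1

        def ucb(nbr: int) -> float:
            n = self.counts[node][nbr]
            if n == 0:
                return float("inf")  # try every neighbor at least once
            bonus = self.c * math.sqrt(math.log(self.pulls[node]) / n)
            return self.values[node][nbr] + bonus

        return max(self.adj[node], key=ucb)

    def update(self, node: int, nbr: int, reward: float) -> None:
        """Fold a bounded reward (e.g., a variance-reduction proxy) into the
        running mean estimate for this (node, neighbor) arm."""
        self.counts[node][nbr] += 1
        n = self.counts[node][nbr]
        self.values[node][nbr] += (reward - self.values[node][nbr]) / n
```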