Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks
for Node Classification
- URL: http://arxiv.org/abs/2308.09596v1
- Date: Fri, 18 Aug 2023 14:45:28 GMT
- Title: Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks
for Node Classification
- Authors: Arpit Merchant, Carlos Castillo
- Abstract summary: Graph neural networks (GNNs) are increasingly used in critical human applications for predicting node labels in attributed graphs.
We propose two new GNN-agnostic interventions: PFR-AX, which decreases the separability between nodes in protected and non-protected groups, and PostProcess, which updates model predictions based on a blackbox policy.
Our results show that no single intervention offers a universally optimal tradeoff, but PFR-AX and PostProcess provide granular control and improve model confidence when correctly predicting positive outcomes for nodes in protected groups.
- Score: 2.8282906214258796
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) are increasingly used in critical human
applications for predicting node labels in attributed graphs. Their ability to
aggregate features from nodes' neighbors for accurate classification also has
the capacity to exacerbate existing biases in data or to introduce new ones
towards members from protected demographic groups. Thus, it is imperative to
quantify how GNNs may be biased and to what extent their harmful effects may be
mitigated. To this end, we propose two new GNN-agnostic interventions: (i)
PFR-AX, which decreases the separability between nodes in protected and
non-protected groups, and (ii) PostProcess, which updates model predictions
based on a blackbox policy to minimize differences between error rates across
demographic groups. Through a large set of experiments on four datasets, we
frame the efficacies of our approaches (and three variants) in terms of their
algorithmic fairness-accuracy tradeoff and benchmark our results against three
strong baseline interventions on three state-of-the-art GNN models. Our results
show that no single intervention offers a universally optimal tradeoff, but
PFR-AX and PostProcess provide granular control and improve model confidence
when correctly predicting positive outcomes for nodes in protected groups.
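As a rough illustration of the post-processing idea (not the authors' implementation; the flipping policy, function names, and `budget` parameter below are assumptions), a minimal sketch might promote the most confident negative predictions of protected-group nodes until the gap in positive-prediction rates between groups closes:

```python
import numpy as np

def postprocess(scores, preds, protected, budget=0.1):
    """Illustrative post-hoc intervention (not the authors' code).

    Flips the highest-scoring negative predictions of protected-group
    nodes to positive until the positive-prediction rates of the two
    groups match or the flip budget is exhausted.
    """
    preds = preds.copy()
    prot, nonprot = protected == 1, protected == 0
    max_flips = int(budget * prot.sum())
    # Protected-group nodes currently predicted negative, most confident first.
    candidates = np.where(prot & (preds == 0))[0]
    candidates = candidates[np.argsort(-scores[candidates])]
    for idx in candidates[:max_flips]:
        if preds[prot].mean() >= preds[nonprot].mean():
            break  # positive-rate gap closed
        preds[idx] = 1
    return preds

# Toy usage: 8 nodes with model scores, thresholded predictions, group labels.
rng = np.random.default_rng(0)
scores = rng.random(8)
preds = (scores > 0.7).astype(int)
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(postprocess(scores, preds, protected, budget=0.5))
```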
Related papers
- Conditional Shift-Robust Conformal Prediction for Graph Neural Network [0.0]
Graph Neural Networks (GNNs) have emerged as potent tools for predicting outcomes in graph-structured data.
Despite their efficacy, GNNs have limited ability to provide robust uncertainty estimates.
We propose Conditional Shift Robust (CondSR) conformal prediction for GNNs.
arXiv Detail & Related papers (2024-05-20T11:47:31Z) - MAPPING: Debiasing Graph Neural Networks for Fair Node Classification
with Limited Sensitive Information Leakage [1.8238848494579714]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING can achieve better trade-offs between utility and fairness while lowering the privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z) - Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z) - Uncertainty Quantification over Graph with Conformalized Graph Neural
Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
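CF-GNN's exact construction is in the paper; as a generic illustration of the underlying split conformal recipe (all names below are illustrative, and CF-GNN's graph-aware refinements are omitted), one can calibrate a score threshold on held-out nodes and emit per-node prediction sets:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Generic split conformal prediction over node-level softmax scores.

    A simplified stand-in for CF-GNN: nonconformity is 1 - p(true class)
    on a held-out calibration set; each test node gets the set of classes
    whose score clears the calibrated quantile.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true label.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, q_level)
    # Prediction set: all classes with nonconformity at or below the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage with random "GNN" outputs for 100 calibration and 5 test nodes.
rng = np.random.default_rng(1)
cal_probs = rng.dirichlet(np.ones(3), size=100)
cal_labels = rng.integers(0, 3, size=100)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```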
arXiv Detail & Related papers (2023-05-23T21:38:23Z) - Pushing the Accuracy-Group Robustness Frontier with Introspective
Self-play [16.262574174989698]
Introspective Self-play (ISP) is a simple approach to improve the uncertainty estimation of a deep neural network under dataset bias.
We show that ISP provably improves the bias-awareness of the model representation and the resulting uncertainty estimates.
arXiv Detail & Related papers (2023-02-11T22:59:08Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
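The energy score at the core of such detectors is standard in energy-based OOD detection: a node's energy is the negative log-sum-exp of its logits, and in-distribution nodes tend toward lower energy. A minimal sketch (omitting GNNSafe's graph-based propagation of energy scores):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score from classifier logits.

    E(x) = -T * logsumexp(logits / T); in-distribution inputs tend to
    have lower energy, so thresholding on energy flags OOD nodes.
    """
    z = logits / temperature
    # Numerically stable log-sum-exp over the class dimension.
    m = z.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1))
    return -temperature * lse

# Toy usage: confident (peaked) logits score lower energy than flat ones.
logits = np.array([[8.0, 0.1, 0.2],   # confident -> low energy (ID-like)
                   [0.3, 0.2, 0.1]])  # flat -> higher energy (OOD-like)
print(energy_score(logits))
```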
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Heterogeneous Randomized Response for Differential Privacy in Graph
Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges.
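Classic binary randomized response conveys the flavor of the mechanism: each bit is kept with probability e^ε/(1+e^ε) and flipped otherwise, which satisfies ε-local differential privacy per bit. The paper's contribution is heterogeneous, per-feature and per-edge probabilities with tighter error bounds; this sketch uses a single keep probability:

```python
import numpy as np

def randomized_response(bits, epsilon, rng=None):
    """Classic binary randomized response (epsilon-local DP per bit).

    Each bit is kept with probability p = e^eps / (1 + e^eps) and flipped
    with probability 1 - p. The paper derives heterogeneous probabilities;
    this homogeneous version is for illustration only.
    """
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = rng.random(bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)

# Toy usage: privatize a node's binary feature vector.
features = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(randomized_response(features, epsilon=1.0, rng=np.random.default_rng(7)))
```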
arXiv Detail & Related papers (2022-11-10T18:52:46Z) - Label-Only Membership Inference Attack against Node-Level Graph Neural
Networks [30.137860266059004]
Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
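Label-only attacks typically infer membership from how robust a hard-label prediction is to input perturbations, since training points tend to sit farther from decision boundaries. The sketch below shows that generic recipe; it is an assumption-laden stand-in, not the paper's specific attack:

```python
import numpy as np

def label_only_membership_score(predict, x, n_perturb=32, sigma=0.1, rng=None):
    """Generic label-only membership signal (not the paper's exact attack).

    `predict` is a black box returning hard labels only. The score is the
    fraction of Gaussian-perturbed copies of x whose label matches the
    unperturbed prediction; higher stability suggests membership.
    """
    rng = rng or np.random.default_rng()
    base = predict(x[None, :])[0]
    noisy = x + sigma * rng.standard_normal((n_perturb, x.size))
    return float(np.mean(predict(noisy) == base))

# Toy black box: threshold classifier over the feature sum.
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
x = np.array([0.5, 0.4, 0.3])  # far from the boundary -> stable labels
print(label_only_membership_score(predict, x, rng=np.random.default_rng(3)))
```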
arXiv Detail & Related papers (2022-07-27T19:46:26Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.