BeMap: Balanced Message Passing for Fair Graph Neural Network
- URL: http://arxiv.org/abs/2306.04107v2
- Date: Fri, 8 Mar 2024 22:40:45 GMT
- Title: BeMap: Balanced Message Passing for Fair Graph Neural Network
- Authors: Xiao Lin, Jian Kang, Weilin Cong, Hanghang Tong
- Abstract summary: We show that message passing could amplify the bias when the 1-hop neighbors from different demographic groups are unbalanced.
We propose BeMap, a fair message passing method that balances the number of 1-hop neighbors of each node among different demographic groups.
- Score: 50.910842893257275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in graph neural networks has been actively studied recently.
However, existing works often do not explicitly consider the role of message
passing in introducing or amplifying the bias. In this paper, we first
investigate the problem of bias amplification in message passing. We
empirically and theoretically demonstrate that message passing could amplify
the bias when the 1-hop neighbors from different demographic groups are
unbalanced. Guided by such analyses, we propose BeMap, a fair message passing
method that leverages a balance-aware sampling strategy to balance the number
of 1-hop neighbors of each node among different demographic groups.
Extensive experiments on node classification demonstrate the efficacy of BeMap
in mitigating bias while maintaining classification accuracy. The code is
available at https://github.com/xiaolin-cs/BeMap.
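Below is a minimal, hypothetical sketch of the balance-aware sampling idea described in the abstract: before message passing, each node's 1-hop neighbors are downsampled so that every demographic group contributes the same number of neighbors. The data structures and the function name are illustrative assumptions, not the released BeMap implementation (see the repository above).

```python
import random
from collections import defaultdict

def balance_aware_sample(neighbors, group, seed=0):
    """Sketch of balanced 1-hop neighbor sampling.

    neighbors: dict mapping node -> list of 1-hop neighbor ids
    group:     dict mapping node -> demographic group label
    Returns a dict with each node's neighbor list downsampled so that
    every group present among its neighbors contributes equally.
    """
    rng = random.Random(seed)
    balanced = {}
    for node, nbrs in neighbors.items():
        by_group = defaultdict(list)
        for n in nbrs:
            by_group[group[n]].append(n)
        if not by_group:
            balanced[node] = []
            continue
        # Keep as many neighbors per group as the smallest group provides.
        k = min(len(members) for members in by_group.values())
        sampled = []
        for members in by_group.values():
            sampled.extend(rng.sample(members, k))
        balanced[node] = sampled
    return balanced

# Toy usage: node 0 has three group-A neighbors and one group-B neighbor;
# after balancing, one neighbor is kept from each group.
neighbors = {0: [1, 2, 3, 4]}
group = {1: "A", 2: "A", 3: "A", 4: "B"}
print(balance_aware_sample(neighbors, group))  # e.g. {0: [2, 4]}
```

Nodes whose neighbors all belong to a single group are left untouched by this sketch (the minimum over a single group keeps everything); how the actual method handles such nodes, and when resampling happens during training, should be taken from the paper and the linked code.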
Related papers
- Rethinking Fair Graph Neural Networks from Re-balancing [26.70771023446706]
We find that simple re-balancing methods can easily match or surpass existing fair GNN methods.
We propose FairGB, Fair Graph Neural Network via re-Balancing, which mitigates the unfairness of GNNs by group balancing.
arXiv Detail & Related papers (2024-07-16T11:39:27Z)
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Interpreting Unfairness in Graph Neural Networks via Training Node Attribution [46.384034587689136]
We study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs.
We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets.
arXiv Detail & Related papers (2022-11-25T21:52:30Z)
- Deconfounded Training for Graph Neural Networks [98.06386851685645]
We present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information.
Specifically, we adopt the attention modules to disentangle the critical subgraph and trivial subgraph.
It allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions.
arXiv Detail & Related papers (2021-12-30T15:22:35Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Towards Measuring Bias in Image Classification [61.802949761385]
Convolutional Neural Networks (CNN) have become state-of-the-art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncover data bias by means of attribution maps.
arXiv Detail & Related papers (2021-07-01T10:50:39Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously (a simplified sketch of the resulting sample reweighting appears after this list).
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
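As referenced in the last entry above, here is a minimal sketch of the relative-difficulty reweighting behind that failure-based debiasing idea: a sample is weighted by how much larger the bias-amplified network's loss is compared to the debiased network's loss, so that samples conflicting with the bias are up-weighted. The per-sample scalar formulation and the function names are simplifications for illustration, not the authors' training code.

```python
import math

def cross_entropy(p_true_class):
    """Cross-entropy of one sample given the probability assigned to its true class."""
    return -math.log(max(p_true_class, 1e-12))

def relative_difficulty(p_biased, p_debiased):
    """Weight W = L_B / (L_B + L_D): close to 1 when the bias-amplified model
    struggles on a sample that the debiased model handles, i.e. a bias-conflicting
    sample that should be emphasized when training the debiased model."""
    l_b = cross_entropy(p_biased)
    l_d = cross_entropy(p_debiased)
    return l_b / (l_b + l_d)

# Toy usage: the biased model assigns low probability (0.1) to the true class
# while the debiased model assigns 0.6, so the sample is strongly up-weighted.
print(round(relative_difficulty(p_biased=0.1, p_debiased=0.6), 3))  # 0.818
```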