Migrate Demographic Group For Fair GNNs
- URL: http://arxiv.org/abs/2306.04212v2
- Date: Sat, 23 Mar 2024 11:37:21 GMT
- Title: Migrate Demographic Group For Fair GNNs
- Authors: YanMing Hu, TianChi Liao, JiaLong Chen, Jing Bian, ZiBin Zheng, Chuan Chen
- Abstract summary: Graph neural networks (GNNs) have been applied in many scenarios because of their superior performance in graph learning.
FairMigration is composed of two training stages. In the first stage, the GNNs are initially optimized by personalized self-supervised learning.
In the second stage, the new demographic groups are frozen and supervised learning is carried out under the constraints of new demographic groups and adversarial training.
- Score: 23.096452685455134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been applied in many scenarios because of their superior performance in graph learning. However, fairness is often ignored when designing GNNs. As a consequence, biased information in the training data can easily affect vanilla GNNs, causing biased results toward particular demographic groups (divided by sensitive attributes such as race and age). There have been efforts to address this fairness issue, but existing fair techniques generally divide the demographic groups by the raw sensitive attributes and assume that these groups are fixed. The biased information correlated with the raw sensitive attributes therefore runs through the training process regardless of the fairness technique applied. Resolving this problem is essential for training fair GNNs. To tackle it, we propose a new framework, FairMigration, which dynamically migrates the demographic groups instead of keeping them fixed by the raw sensitive attributes. FairMigration consists of two training stages. In the first stage, the GNN is initially optimized by personalized self-supervised learning, and the demographic groups are adjusted dynamically. In the second stage, the new demographic groups are frozen, and supervised learning is carried out under the constraints of the new demographic groups and adversarial training. Extensive experiments show that FairMigration balances model performance and fairness well.
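The two-stage pipeline described in the abstract can be pictured roughly as follows. This is a minimal sketch under stated assumptions, not the authors' released code: the encoder, the link-reconstruction objective standing in for personalized self-supervision, the centroid-based group-migration rule, and all names (SimpleGNN, stage_one, stage_two, lam) are hypothetical choices made for illustration.

```python
# Minimal sketch of a two-stage pipeline in the spirit of FairMigration.
# Everything here (SimpleGNN, the reconstruction objective, the centroid-based
# migration rule, lam) is an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """Tiny dense-adjacency GNN encoder used only for illustration."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin1(x))  # one propagation step
        return adj @ self.lin2(h)       # node embeddings

def stage_one(encoder, x, adj, sens, epochs=100, lr=1e-2):
    """Stage 1: self-supervised warm-up, then migrate demographic groups by
    reassigning each node to the nearer of the two group centroids."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        z = encoder(x, adj)
        # toy link-reconstruction loss standing in for personalized self-supervision
        loss = F.mse_loss(torch.sigmoid(z @ z.t()), adj)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        z = encoder(x, adj)
        centroids = torch.stack([z[sens == 0].mean(0), z[sens == 1].mean(0)])
        new_groups = torch.cdist(z, centroids).argmin(dim=1)  # migrated groups
    return new_groups  # frozen for stage 2

def stage_two(encoder, clf, adv, x, adj, y, new_groups,
              epochs=100, lr=1e-2, lam=0.5):
    """Stage 2: supervised training under the frozen migrated groups, with an
    adversary trying to recover group membership from the embeddings."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=lr)
    opt_adv = torch.optim.Adam(adv.parameters(), lr=lr)
    for _ in range(epochs):
        z = encoder(x, adj)
        # adversary step: predict the migrated group from detached embeddings
        adv_loss = F.cross_entropy(adv(z.detach()), new_groups)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()
        # main step: task loss minus adversary loss, so the encoder hides group info
        loss = F.cross_entropy(clf(z), y) - lam * F.cross_entropy(adv(z), new_groups)
        opt.zero_grad()
        loss.backward()
        opt.step()

# toy usage on random data: 30 nodes, 2 classes, 2 sensitive groups
N, D, H = 30, 8, 16
x = torch.randn(N, D)
adj = (torch.rand(N, N) < 0.1).float()
adj = torch.clamp(adj + adj.t() + torch.eye(N), 0, 1)  # symmetric, with self-loops
sens = torch.randint(0, 2, (N,))
y = torch.randint(0, 2, (N,))
enc = SimpleGNN(D, H)
migrated = stage_one(enc, x, adj, sens)
stage_two(enc, nn.Linear(H, 2), nn.Linear(H, 2), x, adj, y, migrated)
```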
Related papers
- The Devil is in the Data: Learning Fair Graph Neural Networks via Partial Knowledge Distillation [35.17007613884196]
Graph neural networks (GNNs) are being increasingly used in many high-stakes tasks.
GNNs have been shown to be unfair as they tend to make discriminatory decisions toward certain demographic groups.
We present a demographic-agnostic method to learn fair GNNs via knowledge distillation, namely FairGKD.
arXiv Detail & Related papers (2023-11-29T05:54:58Z) - Equipping Federated Graph Neural Networks with Structure-aware Group Fairness [9.60194163484604]
Graph Neural Networks (GNNs) have been widely used for various types of graph data processing and analytical tasks.
$F^2$GNN is a Fair Federated Graph Neural Network that enhances the group fairness of federated GNNs.
arXiv Detail & Related papers (2023-10-18T21:51:42Z) - Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework, CAF, which selects counterfactuals from the training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z) - Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z) - Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves entity-to-relation proximity information in knowledge graphs.
We develop two instances of Fair-KGNN incorporating two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
arXiv Detail & Related papers (2022-11-26T05:55:34Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - FairNorm: Fair and Fast Graph Neural Network Training [9.492903649862761]
Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks.
It has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups.
This work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning.
arXiv Detail & Related papers (2022-05-20T06:10:27Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results; a minimal per-group threshold sketch follows this list.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
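The single-threshold observation in the Balanced Faces in the Wild entry above lends itself to a small worked example: choosing a separate verification threshold per demographic subgroup at a fixed false-match rate instead of one global threshold. This is a hedged illustration only; the function per_group_thresholds, the quantile-based rule, and the toy data are assumptions, not the paper's method.

```python
# Hedged illustration of per-subgroup verification thresholds (not the paper's code):
# for each group, pick the threshold whose impostor acceptance rate stays at a
# target false-match rate, instead of one global threshold for everyone.
import numpy as np

def per_group_thresholds(scores, is_genuine, groups, target_fmr=1e-3):
    """Return {group: threshold}, where each threshold is the (1 - target_fmr)
    quantile of that group's impostor scores."""
    thresholds = {}
    for g in np.unique(groups):
        impostor = scores[(groups == g) & (~is_genuine)]
        thresholds[g] = np.quantile(impostor, 1.0 - target_fmr)
    return thresholds

# toy usage: synthetic similarity scores for genuine and impostor pairs
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.7, 0.1, 1000), rng.normal(0.3, 0.1, 1000)])
is_genuine = np.array([True] * 1000 + [False] * 1000)
groups = rng.choice(["A", "B"], size=2000)
print(per_group_thresholds(scores, is_genuine, groups))
```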