Graph Fairness Learning under Distribution Shifts
- URL: http://arxiv.org/abs/2401.16784v1
- Date: Tue, 30 Jan 2024 06:51:24 GMT
- Title: Graph Fairness Learning under Distribution Shifts
- Authors: Yibo Li, Xiao Wang, Yujie Xing, Shaohua Fan, Ruijia Wang, Yaoqi Liu,
and Chuan Shi
- Abstract summary: Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator to produce numerous graphs with significant bias under different distributions.
- Score: 33.9878682279549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have achieved remarkable performance on
graph-structured data. However, GNNs may inherit prejudice from the training
data and make discriminatory predictions based on sensitive attributes, such as
gender and race. Recently, there has been increasing interest in ensuring
fairness on GNNs, but existing methods all assume that the training and
testing data follow the same distribution, i.e., that they come from the
same graph. Will graph fairness performance degrade under distribution
shifts? How do distribution shifts affect graph fairness learning? These
open questions remain largely unexplored from a theoretical
perspective. To answer these questions, we first theoretically identify the
factors that determine bias on a graph. Subsequently, we explore the factors
influencing fairness on testing graphs, with a noteworthy factor being the
representation distances of certain groups between the training and testing
graphs. Motivated by our theoretical analysis, we propose our framework
FatraGNN. Specifically, to guarantee fairness performance on unknown testing
graphs, we propose a graph generator to produce numerous graphs with
significant bias under different distributions. Then we minimize the
representation distance for each group between the training graph and
generated graphs. This empowers our model to achieve high classification and
fairness performance even on generated graphs with significant bias, thereby
effectively handling unknown testing graphs. Experiments on real-world and
semi-synthetic datasets demonstrate the effectiveness of our model in terms of
both accuracy and fairness.
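To make the alignment idea concrete, here is a minimal sketch in PyTorch: for each sensitive group, it penalizes the distance between that group's representations on the training graph and on a generated (biased) graph. The RBF-kernel MMD used here is one common choice of distance; the paper's exact distance measure, encoder, and generator are not given in this summary, so every name and shape below is illustrative only.

```python
# Sketch of group-wise representation alignment between a training graph
# and a generated graph, using a simple RBF-kernel MMD as the distance.
# Assumed shapes: z_* are (num_nodes, dim) embeddings, s_* are (num_nodes,)
# binary sensitive attributes. Not the paper's actual implementation.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two sample sets (RBF kernel)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def groupwise_alignment_loss(z_train, s_train, z_gen, s_gen):
    """Sum of per-group MMDs between training and generated representations."""
    loss = z_train.new_zeros(())
    for g in (0, 1):                            # one term per sensitive group
        zt, zg = z_train[s_train == g], z_gen[s_gen == g]
        if len(zt) and len(zg):
            loss = loss + rbf_mmd(zt, zg)
    return loss
```

In training, a loss like this would be added to the classification objective so the encoder produces group representations that stay stable across the generated distribution shifts.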
Related papers
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
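As a rough intuition for gradient-guided fairness poisoning (FATE itself is built on meta learning and is considerably more involved), one can score candidate edge flips by the gradient of a statistical-parity surrogate with respect to the adjacency matrix; `model(x, adj)` and all shapes below are hypothetical.

```python
# Hypothetical simplification: score edge flips by how strongly they move a
# demographic-parity gap. FATE's actual attack meta-learns through the
# victim model's training; this is only the one-step gradient intuition.
import torch

def edge_flip_scores(model, adj, x, sens):
    """Gradient of a fairness-gap surrogate w.r.t. a dense adjacency.

    adj: (n, n) float tensor; x: (n, d) features; sens: (n,) binary tensor.
    """
    adj = adj.clone().requires_grad_(True)
    probs = torch.sigmoid(model(x, adj)).squeeze(-1)    # positive-class probs
    gap = probs[sens == 1].mean() - probs[sens == 0].mean()
    gap.abs().backward()                                 # parity-gap surrogate
    return adj.grad   # large-magnitude entries mark the most impactful flips
```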
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to selection bias in training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z) - Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various downstream tasks.
Recent works show that GNNs tend to inherit and amplify bias from training data, raising concerns about their adoption in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
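A toy version of "counterfactuals selected from training data" might look like the following: for each node, pick the nearest training node that shares its label but differs in the (binary) sensitive attribute. CAF's real selection criteria are richer; the shapes and the nearest-neighbor rule here are assumptions of this sketch.

```python
# Hypothetical sketch: select, for each node, its nearest counterfactual
# neighbor from the training data (same label, opposite sensitive attribute).
import torch

def select_counterfactuals(z, labels, sens):
    """z: (n, d) embeddings; labels, sens: (n,) integer tensors.

    Returns per-node indices of the nearest valid counterfactual. Rows with
    no valid candidate are all-inf and would need masking in practice.
    """
    dist = torch.cdist(z, z)                            # pairwise distances
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    diff_sens = sens.unsqueeze(0) != sens.unsqueeze(1)
    valid = same_label & diff_sens                      # counterfactual pairs
    dist = dist.masked_fill(~valid, float("inf"))       # rule out invalid pairs
    return dist.argmin(dim=1)                           # nearest valid neighbor
```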
arXiv Detail & Related papers (2023-07-10T23:28:03Z) - OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from identical distribution.
Existing GNNs lack out-of-distribution generalization ability, so their performance degrades substantially when distribution shifts exist between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Stable Prediction on Graphs with Agnostic Distribution Shift [105.12836224149633]
Graph neural networks (GNNs) have been shown to be effective on various graph tasks with randomly separated training and testing data.
In real applications, however, the distribution of the training graph may differ from that of the test graph.
We propose a novel stable prediction framework for GNNs, which permits both locally and globally stable learning and prediction on graphs.
arXiv Detail & Related papers (2021-10-08T02:45:47Z) - Graph Classification by Mixture of Diverse Experts [67.33716357951235]
We present GraphDIVE, a framework leveraging mixture of diverse experts for imbalanced graph classification.
With a divide-and-conquer principle, GraphDIVE employs a gating network to partition an imbalanced graph dataset into several subsets.
Experiments on real-world imbalanced graph datasets demonstrate the effectiveness of GraphDIVE.
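The gating-plus-experts idea can be sketched as a mixture of expert classifiers over pooled graph embeddings; GraphDIVE's actual gate partitions the dataset into subsets, whereas the soft-gating variant below is a simplification, and the dimensions are assumptions of this sketch.

```python
# Minimal mixture-of-experts sketch in the spirit of GraphDIVE: a gating
# network softly weights several expert classifiers per graph embedding.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, dim: int, num_classes: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Linear(dim, num_classes) for _ in range(num_experts)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, dim) pooled graph embeddings
        weights = torch.softmax(self.gate(h), dim=-1)          # (batch, E)
        logits = torch.stack([e(h) for e in self.experts], 1)  # (batch, E, C)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)     # (batch, C)
```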
arXiv Detail & Related papers (2021-03-29T14:03:03Z)