EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks
- URL: http://arxiv.org/abs/2108.05233v1
- Date: Wed, 11 Aug 2021 14:07:01 GMT
- Title: EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks
- Authors: Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li
- Abstract summary: We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
- Score: 29.974829042502375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have recently demonstrated superior capability
in tackling graph analytical problems across various applications. Nevertheless,
with the widespread adoption of GNNs in high-stakes decision-making processes,
there is increasing societal concern that GNNs could make discriminatory
decisions toward certain demographic groups that may even be illegal. Although
some explorations have been made towards developing fair GNNs, existing
approaches are tailored to a specific GNN model. In practical scenarios,
however, myriad GNN variants have been proposed for different tasks, and it is
costly to train and fine-tune existing debiasing models for each of them.
Moreover, bias in a trained model often originates from the training data, yet
mitigating bias in the graph data itself is usually overlooked. In this work,
different from existing efforts, we first propose novel definitions and metrics
to measure the bias in an attributed network, which lead to an optimization
objective for bias mitigation. Based on this objective, we develop a framework
named EDITS that mitigates bias in attributed networks while preserving useful
information. EDITS works in a model-agnostic manner: it is independent of the
specific GNN applied for downstream tasks. Extensive experiments on both
synthetic and real-world datasets demonstrate the validity of the proposed bias
metrics and the superiority of EDITS in both bias mitigation and utility
maintenance. Open-source implementation: https://github.com/yushundong/EDITS.
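To make the abstract's notion of "bias in an attributed network" concrete, below is a minimal sketch of a distribution-distance attribute-bias measure, assuming a per-dimension Wasserstein-1 distance between two demographic groups. The paper's formal definitions (which also cover structural bias over propagated attributes) may differ in detail, and all names here are illustrative.

```python
# Minimal sketch of an attribute-bias measure in the spirit of EDITS:
# compare each feature dimension's distribution across the two demographic
# groups with the 1-D Wasserstein distance. Inputs are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

def attribute_bias(features: np.ndarray, sensitive: np.ndarray) -> float:
    """features: (num_nodes, num_dims) node attributes;
    sensitive: (num_nodes,) binary group labels (0/1)."""
    g0, g1 = features[sensitive == 0], features[sensitive == 1]
    # Average the per-dimension distributional gap between the two groups.
    return float(np.mean([wasserstein_distance(g0[:, d], g1[:, d])
                          for d in range(features.shape[1])]))
```

Minimizing such a distance over debiased attributes (and, analogously, over the graph structure) while staying close to the original data is the flavor of optimization objective the abstract refers to.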
Related papers
- ComFairGNN: Community Fair Graph Neural Network [6.946292440025013]
We introduce a novel framework designed to mitigate community-level bias in Graph Neural Networks (GNNs).
Our approach employs a learnable coreset-based debiasing function that addresses bias arising from diverse local neighborhood distributions during GNN neighborhood aggregation.
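As a rough illustration of what "community-level bias" can mean in practice (not the paper's coreset-based method), the sketch below measures the statistical-parity gap inside each community, assuming binary predictions and a binary sensitive attribute; all inputs are hypothetical placeholders.

```python
# Illustrative diagnostic: quantify community-level bias as the
# per-community gap in positive-prediction rates between two groups.
import numpy as np

def community_parity_gaps(preds, sensitive, communities):
    """preds: (N,) binary predictions; sensitive: (N,) 0/1 group labels;
    communities: (N,) community id per node. Returns {community: gap}."""
    gaps = {}
    for c in np.unique(communities):
        mask = communities == c
        p, s = preds[mask], sensitive[mask]
        if len(np.unique(s)) < 2:   # skip communities with one group only
            continue
        gaps[int(c)] = abs(p[s == 0].mean() - p[s == 1].mean())
    return gaps
```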
arXiv Detail & Related papers (2024-11-07T02:04:34Z) - Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework, CAF, which selects counterfactuals from the training data so as to avoid non-realistic counterfactuals.
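A minimal sketch of what "selecting counterfactuals from training data" could look like: for each node, pick the nearest training node that shares its label but flips its sensitive attribute, so counterfactuals are real observed nodes rather than synthesized ones. CAF's actual selection criteria are richer; the names below are illustrative.

```python
# Hedged sketch: data-backed counterfactual selection by nearest neighbor
# under a label-preserving, sensitive-attribute-flipping constraint.
import numpy as np

def select_counterfactuals(feats, labels, sensitive):
    """Return, for each node i, the index of a counterfactual node
    (same label, opposite sensitive group, closest features), or -1."""
    cf = np.full(len(feats), -1)
    for i in range(len(feats)):
        cand = np.where((labels == labels[i]) & (sensitive != sensitive[i]))[0]
        if cand.size:
            dists = np.linalg.norm(feats[cand] - feats[i], axis=1)
            cf[i] = cand[np.argmin(dists)]
    return cf
```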
arXiv Detail & Related papers (2023-07-10T23:28:03Z) - Interpreting Unfairness in Graph Neural Networks via Training Node
Attribution [46.384034587689136]
We study a novel problem of interpreting GNN unfairness by attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs.
We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets.
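The sketch below conveys the flavor of a distribution-disparity bias measure, assuming total variation distance between group-averaged predicted class distributions; PDD's exact formulation is given in the paper, and the inputs here are hypothetical.

```python
# Illustrative stand-in for a probabilistic-distribution-disparity measure:
# total variation distance between group-averaged class distributions.
import numpy as np

def distribution_disparity(probs, sensitive):
    """probs: (N, C) softmax outputs; sensitive: (N,) 0/1 group labels."""
    p0 = probs[sensitive == 0].mean(axis=0)   # avg prediction, group 0
    p1 = probs[sensitive == 1].mean(axis=0)   # avg prediction, group 1
    return 0.5 * np.abs(p0 - p1).sum()        # total variation distance
```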
arXiv Detail & Related papers (2022-11-25T21:52:30Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models trained on graphs that have been successfully applied in various domains.
Despite their effectiveness, it is still challenging for GNNs to scale efficiently to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfactory performance in various graph analytical problems.
However, GNNs can yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
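One brute-force way to read "structural explanation of bias" is to score each edge by how much a bias metric drops when that edge is removed and a fixed GNN is re-run; the paper's approach is more principled, and `gnn_predict` and `bias_metric` below are hypothetical callables.

```python
# Hedged sketch of structural bias attribution via edge-removal probing.
import numpy as np

def edge_bias_scores(adj, edges, gnn_predict, bias_metric):
    """adj: dense (N, N) adjacency; edges: list of (u, v) edges to probe.
    gnn_predict maps an adjacency to predictions; bias_metric scores them."""
    base = bias_metric(gnn_predict(adj))
    scores = {}
    for u, v in edges:
        pruned = adj.copy()
        pruned[u, v] = pruned[v, u] = 0.0      # drop one undirected edge
        scores[(u, v)] = base - bias_metric(gnn_predict(pruned))
    return scores  # larger score = edge contributes more to the bias
```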
arXiv Detail & Related papers (2022-06-24T06:49:21Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that drives the prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
Our proposed model outperforms state-of-the-art methods, and DGNN is a flexible framework for enhancing existing GNNs.
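A minimal sketch of the core of a decorrelation regularizer: penalize off-diagonal entries of a sample-weighted covariance over embedding dimensions. DGNN's "differentiated" weighting scheme is more involved; inputs below are hypothetical, and in practice the penalty would be computed on differentiable tensors.

```python
# Sketch of a decorrelation penalty over embedding dimensions.
import numpy as np

def decorrelation_penalty(emb, weights):
    """emb: (N, D) node embeddings; weights: (N,) learnable sample weights."""
    w = weights / weights.sum()
    mean = (w[:, None] * emb).sum(axis=0)
    centered = emb - mean
    cov = (w[:, None] * centered).T @ centered      # weighted covariance
    off_diag = cov - np.diag(np.diag(cov))
    return float((off_diag ** 2).sum())             # drive cross-dim cov to 0
```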
arXiv Detail & Related papers (2022-01-19T16:50:29Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
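A sketch of a shift regularizer in this spirit: a central moment discrepancy (CMD) between embeddings of biased training nodes and of an IID reference sample, to be added (scaled) to the task loss. Constants and names are illustrative, not SR-GNN's exact formulation.

```python
# Sketch of a central moment discrepancy (CMD) regularizer.
import numpy as np

def cmd(train_emb, ref_emb, K=3):
    """Compare means plus central moments up to order K, per dimension."""
    d = np.linalg.norm(train_emb.mean(0) - ref_emb.mean(0))
    ct, cr = train_emb - train_emb.mean(0), ref_emb - ref_emb.mean(0)
    for k in range(2, K + 1):
        d += np.linalg.norm((ct ** k).mean(0) - (cr ** k).mean(0))
    return d  # add (scaled) to the task loss to shrink the train/test shift
```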
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.