Mitigating Relational Bias on Knowledge Graphs
- URL: http://arxiv.org/abs/2211.14489v2
- Date: Sat, 3 Dec 2022 09:03:11 GMT
- Title: Mitigating Relational Bias on Knowledge Graphs
- Authors: Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou and Xia Hu
- Abstract summary: We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves entity-to-relation proximity information in knowledge graphs.
We develop two instances of Fair-KGNN by incorporating it into two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
- Score: 51.346018842327865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph data are prevalent in real-world applications, and knowledge
graph neural networks (KGNNs) are essential techniques for knowledge graph
representation learning. Although KGNNs effectively model the structural information
of knowledge graphs, these frameworks amplify the underlying data bias, leading to
discrimination against certain groups or individuals in downstream applications.
Moreover, since existing debiasing approaches mainly focus on entity-wise bias,
eliminating the multi-hop relational bias that pervades knowledge graphs remains an
open question. Relational bias is particularly challenging to eliminate because the
paths that generate it are sparse and the proximity structure of knowledge graphs is
non-linear. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that
simultaneously alleviates multi-hop bias and preserves entity-to-relation proximity
information in knowledge graphs. The framework is general and can mitigate relational
bias for any type of KGNN. We develop two instances of Fair-KGNN by incorporating it
into two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation
and nationality-salary bias. Experiments on three benchmark knowledge graph datasets
demonstrate that Fair-KGNN effectively mitigates unfairness during representation
learning while preserving the predictive performance of the KGNN models.
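The abstract gives the high-level recipe (a relational GNN encoder trained to preserve entity-to-relation proximity while suppressing relational bias) but not the concrete objective. The sketch below is only an illustration of that general recipe, not Fair-KGNN's actual method: a toy RGCN-style layer, a DistMult link-prediction loss standing in for proximity preservation, and a simple penalty on the alignment of entity embeddings with an assumed sensitive direction standing in for relational debiasing. All class and function names, the sensitive direction, and the 0.1 regularization weight are assumptions made for illustration.

```python
# Minimal sketch, NOT the Fair-KGNN objective from the paper: it pairs an
# RGCN-style relational encoder with (a) a DistMult link-prediction loss that
# preserves entity-to-relation proximity and (b) a crude fairness penalty that
# discourages entity embeddings from aligning with an assumed sensitive
# direction (e.g., a "gender" axis). Names and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyRGCNLayer(nn.Module):
    """Relation-specific message passing (RGCN-style, no basis decomposition)."""

    def __init__(self, in_dim: int, out_dim: int, num_rels: int):
        super().__init__()
        self.rel_weights = nn.Parameter(0.1 * torch.randn(num_rels, in_dim, out_dim))
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, src, rel, dst):
        # Transform each source entity with the weight matrix of its edge's
        # relation, then aggregate messages at the destination entities.
        msg = torch.bmm(x[src].unsqueeze(1), self.rel_weights[rel]).squeeze(1)
        return F.relu(self.self_loop(x).index_add(0, dst, msg))


def distmult_score(h_src, rel_emb, h_dst):
    """DistMult triple score; the link-prediction loss built on it stands in
    for preserving entity-to-relation proximity."""
    return (h_src * rel_emb * h_dst).sum(-1)


def relational_bias_penalty(h, sensitive_dir):
    """Penalize the projection of entity embeddings onto a hypothetical
    sensitive direction, a stand-in for multi-hop relational debiasing."""
    return (h @ F.normalize(sensitive_dir, dim=0)).pow(2).mean()


# --- toy usage on a random graph (no real data) ---
num_ent, num_rel, dim, num_edges = 50, 4, 16, 200
x = torch.randn(num_ent, dim)                          # initial entity features
src = torch.randint(0, num_ent, (num_edges,))
rel = torch.randint(0, num_rel, (num_edges,))
dst = torch.randint(0, num_ent, (num_edges,))
sensitive_dir = torch.randn(dim)                       # assumed sensitive axis

layer = ToyRGCNLayer(dim, dim, num_rel)
rel_emb = nn.Parameter(torch.randn(num_rel, dim))

h = layer(x, src, rel, dst)
scores = distmult_score(h[src], rel_emb[rel], h[dst])  # observed triples as positives
link_loss = F.binary_cross_entropy_with_logits(scores, torch.ones_like(scores))
loss = link_loss + 0.1 * relational_bias_penalty(h, sensitive_dir)  # assumed weight
loss.backward()
print(f"total loss: {loss.item():.4f}")
```

A real implementation would also draw negative samples for the link-prediction loss, and the paper's debiasing acts on multi-hop relation paths rather than a single fixed direction; the point here is only the overall structure of a joint utility-plus-fairness objective around a relational GNN encoder.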
Related papers
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model effectively enhances generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Equipping Federated Graph Neural Networks with Structure-aware Group Fairness [9.60194163484604]
Graph Neural Networks (GNNs) have been widely used for various types of graph data processing and analytical tasks.
F$^2$GNN is a Fair Federated Graph Neural Network that enhances group fairness of federated GNNs.
arXiv Detail & Related papers (2023-10-18T21:51:42Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure [46.86463923605841]
We investigate graph classification on training graphs with severe bias.
We find that GNNs tend to exploit spurious correlations when making decisions.
We propose a general disentangled GNN framework to learn the causal substructure and bias substructure.
arXiv Detail & Related papers (2022-09-28T13:55:52Z)
- KGNN: Distributed Framework for Graph Neural Knowledge Representation [38.080926752998586]
We develop a novel framework called KGNN to take full advantage of knowledge data for representation learning in the distributed learning system.
KGNN is equipped with a GNN-based encoder and a knowledge-aware decoder, which jointly explore high-order structure and attribute information.
arXiv Detail & Related papers (2022-05-17T12:32:02Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are typically proposed without considering distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graphs (KGs) play an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- Adversarial Learning for Debiasing Knowledge Graph Embeddings [9.53284633479507]
Social and cultural biases can have detrimental consequences for different population and minority groups.
This paper aims at identifying and mitigating such biases in Knowledge Graph (KG) embeddings.
We introduce a novel framework, FAN (Filtering Adversarial Network), to filter out sensitive attribute information from KG embeddings.
arXiv Detail & Related papers (2020-06-29T18:36:15Z)