FairMod: Fair Link Prediction and Recommendation via Graph Modification
- URL: http://arxiv.org/abs/2201.11596v1
- Date: Thu, 27 Jan 2022 15:49:33 GMT
- Title: FairMod: Fair Link Prediction and Recommendation via Graph Modification
- Authors: Sean Current, Yuntian He, Saket Gurukar, Srinivasan Parthasarathy
- Abstract summary: We propose FairMod to mitigate the bias learned by GNNs through modifying the input graph.
Our proposed models perform either microscopic or macroscopic edits to the input graph while training GNNs and learn node embeddings that are both accurate and fair under the context of link recommendations.
We demonstrate the effectiveness of our approach on four real-world datasets and show that we can improve the recommendation fairness by several factors at negligible cost to link prediction accuracy.
- Score: 7.239011273682701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning becomes more widely adopted across domains, it is
critical that researchers and ML engineers think about the inherent biases in
the data that may be perpetuated by the model. Recently, many studies have
shown that such biases are also imbibed in Graph Neural Network (GNN) models if
the input graph is biased. In this work, we aim to mitigate the bias learned by
GNNs through modifying the input graph. To that end, we propose FairMod, a Fair
Graph Modification methodology with three formulations: the Global Fairness
Optimization (GFO), Community Fairness Optimization (CFO), and Fair Edge
Weighting (FEW) models. Our proposed models perform either microscopic or
macroscopic edits to the input graph while training GNNs and learn node
embeddings that are both accurate and fair under the context of link
recommendations. We demonstrate the effectiveness of our approach on four real-world
datasets and show that we can improve the recommendation fairness by
several factors at negligible cost to link prediction accuracy.
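The abstract above describes microscopic edits to the input graph, including a Fair Edge Weighting (FEW) formulation. A minimal, hypothetical sketch of the general idea follows: each edge carries a learnable weight trained against a combined utility and fairness objective. This is not FairMod's actual objective or optimizer; the toy losses, the balance notion (intra-group vs. inter-group edge mass), the weight λ, and the finite-difference gradients are all illustrative assumptions.

```python
import numpy as np

# Toy graph: 6 nodes with a binary sensitive attribute, and a few edges.
sensitive = np.array([0, 0, 0, 1, 1, 1])
edges = np.array([(0, 1), (1, 2), (0, 3), (3, 4), (4, 5), (2, 5)])

# Learnable edge weights, initialized to 1 (i.e., the unmodified graph).
w = np.ones(len(edges))

def fairness_penalty(w):
    """Penalize imbalance between intra-group and inter-group edge mass."""
    inter = np.array([sensitive[u] != sensitive[v] for u, v in edges])
    return (w[~inter].sum() - w[inter].sum()) ** 2

def utility_penalty(w):
    """Keep weights close to the original graph (stand-in for a task loss)."""
    return ((w - 1.0) ** 2).sum()

lam, lr, eps = 0.5, 0.05, 1e-5
objective = lambda x: utility_penalty(x) + lam * fairness_penalty(x)

for _ in range(200):
    # Finite-difference gradient of the combined objective (simple, not fast).
    grad = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        grad[i] = (objective(wp) - objective(wm)) / (2 * eps)
    w -= lr * grad
```

After training, the weights shift mass from intra-group toward inter-group edges while staying close to the original graph, which is the trade-off the FEW formulation is said to navigate.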
Related papers
- Unbiased GNN Learning via Fairness-Aware Subgraph Diffusion [23.615250207134004]
We propose a novel generative Neural Fairness-Aware Subgraph (FASD) method for unbiased GNN learning.
We show that FASD induces fair node predictions on the input graph by performing standard GNN learning on the debiased subgraphs.
Experimental results demonstrate the superior performance of the proposed method over state-of-the-art Fair GNN baselines.
arXiv Detail & Related papers (2024-12-31T18:48:30Z) - FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z) - GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Equipping Federated Graph Neural Networks with Structure-aware Group Fairness [9.60194163484604]
Graph Neural Networks (GNNs) have been widely used for various types of graph data processing and analytical tasks.
F²GNN is a Fair Federated Graph Neural Network that enhances group fairness of federated GNNs.
arXiv Detail & Related papers (2023-10-18T21:51:42Z) - Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z) - Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
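Several of the related papers above (e.g., FairSample and FMP) evaluate demographic parity. A minimal, hypothetical sketch of one way this notion can be instantiated for link recommendation follows; the thresholding and the intra-group vs. inter-group split are illustrative assumptions, not definitions taken from any of the listed papers.

```python
import numpy as np

def demographic_parity_gap(scores, pairs, sensitive, threshold=0.5):
    """|P(recommend | inter-group pair) - P(recommend | intra-group pair)|.

    A gap of 0 means candidate links are recommended at equal rates
    regardless of whether their endpoints share the sensitive attribute.
    """
    recommended = scores >= threshold
    inter = np.array([sensitive[u] != sensitive[v] for u, v in pairs])
    rate_inter = recommended[inter].mean()
    rate_intra = recommended[~inter].mean()
    return abs(rate_inter - rate_intra)

# Toy example: 4 nodes, two intra-group and two inter-group candidate links.
sensitive = np.array([0, 0, 1, 1])
pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]
scores = np.array([0.9, 0.8, 0.4, 0.6])  # hypothetical model link scores
gap = demographic_parity_gap(scores, pairs, sensitive)
```

Here both intra-group links are recommended but only one of the two inter-group links is, so the gap is 0.5; a fairness-aware method would aim to shrink this gap without degrading link prediction accuracy.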
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.