Analyzing the Effect of Sampling in GNNs on Individual Fairness
- URL: http://arxiv.org/abs/2209.03904v2
- Date: Fri, 9 Sep 2022 14:01:57 GMT
- Title: Analyzing the Effect of Sampling in GNNs on Individual Fairness
- Authors: Rebecca Salganik, Fernando Diaz, Golnoosh Farnadi
- Abstract summary: Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
- Score: 79.28449844690566
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Graph neural network (GNN) based methods have saturated the field of
recommender systems. The gains of these systems have been significant, showing
the advantages of interpreting data through a network structure. However,
despite the noticeable benefits of using graph structures in recommendation
tasks, this representational form has also bred new challenges which exacerbate
the complexity of mitigating algorithmic bias. When GNNs are integrated into
downstream tasks, such as recommendation, bias mitigation can become even more
difficult. Furthermore, the intractability of applying existing methods of
fairness promotion to large, real-world datasets places even more serious
constraints on mitigation attempts. Our work sets out to fill this gap by
taking an existing method for promoting individual fairness on graphs and
extending it to support mini-batch, or sub-sample based, training of a GNN,
thus laying the groundwork for applying this method to a downstream
recommendation task. We evaluate two popular GNN methods: Graph Convolutional
Network (GCN), which trains on the entire graph, and GraphSAGE, which uses
probabilistic random walks to create subgraphs for mini-batch training, and
assess the effects of sub-sampling on individual fairness. We implement an
individual fairness notion called REDRESS, proposed by Dong et al., which
uses rank optimization to learn individually fair node (or item) embeddings.
We empirically show on two real-world datasets that GraphSAGE achieves not
just comparable accuracy but also improved fairness compared with the GCN
model. These findings have consequential ramifications for individual
fairness promotion, for GNNs, and, downstream, for recommender systems,
showing that mini-batch training facilitates individual fairness promotion
by allowing local nuance to guide the process of fairness promotion in
representation learning.
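The two mechanisms the abstract combines, GraphSAGE-style random-walk mini-batch sampling and REDRESS-style rank agreement between input-space and embedding-space similarity, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the walk scheme, the function names, and the top-k overlap measure are simplifications, not the authors' implementation (REDRESS itself uses rank optimization on the full similarity rankings).

```python
import random
import numpy as np

def random_walk_batch(adj, seeds, walk_len=3, rng=random):
    """Gather a mini-batch of nodes via short random walks from seed
    nodes -- a simplified stand-in for GraphSAGE-style subgraph sampling."""
    batch = set(seeds)
    for s in seeds:
        node = s
        for _ in range(walk_len):
            nbrs = adj.get(node, [])
            if not nbrs:
                break
            node = rng.choice(nbrs)
            batch.add(node)
    return sorted(batch)

def topk_rank_overlap(features, embeddings, k=2):
    """REDRESS-style individual-fairness check: for each node, do its k most
    similar peers under raw-feature similarity match its k most similar peers
    under embedding similarity? Returns the mean fractional overlap
    (1.0 means the two rankings agree perfectly at depth k)."""
    def topk_sets(X):
        sim = X @ X.T
        np.fill_diagonal(sim, -np.inf)   # exclude self-similarity
        idx = np.argsort(-sim, axis=1)[:, :k]
        return [set(row) for row in idx]
    pairs = zip(topk_sets(features), topk_sets(embeddings))
    return float(np.mean([len(a & b) / k for a, b in pairs]))

# Toy usage: sample a node batch by random walk, then check rank agreement.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
batch = random_walk_batch(adj, seeds=[0], walk_len=2)
feats = np.random.default_rng(0).normal(size=(4, 5))  # toy node features
embs = feats.copy()                                   # stand-in embeddings
agreement = topk_rank_overlap(feats[batch], embs[batch], k=min(2, len(batch) - 1))
```

In this toy setting the embeddings are a copy of the features, so the rankings agree by construction; the interesting case in the paper is how much agreement survives when embeddings are learned from sub-sampled graphs.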
Related papers
- GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning [17.85404473268992]
Graph Neural Networks (GNNs) have demonstrated remarkable proficiency in handling a range of graph analytical tasks.
Despite their versatility, GNNs face significant challenges in transferability, limiting their utility in real-world applications.
We propose GraphLoRA, an effective and parameter-efficient method for transferring well-trained GNNs to diverse graph domains.
arXiv Detail & Related papers (2024-09-25T06:57:42Z)
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first applied to utilize neighbors' information, and then the bias-mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
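The mitigation step described above, pulling demographic group representation centers together, can be sketched as a simple penalty. This is a hedged illustration only: the function below is an assumption, not the paper's actual FMP objective.

```python
import numpy as np

def group_center_gap(embeddings, groups):
    """Mean pairwise Euclidean distance between per-group embedding centers.
    An FMP-style mitigation step would add a penalty like this to the training
    loss so that group centers are pushed together during optimization.
    (Illustrative sketch, not the paper's exact objective.)"""
    centers = {g: embeddings[groups == g].mean(axis=0) for g in np.unique(groups)}
    keys = sorted(centers)
    gaps = [np.linalg.norm(centers[a] - centers[b])
            for i, a in enumerate(keys) for b in keys[i + 1:]]
    return float(np.mean(gaps))
```

Minimizing such a gap alongside the task loss is one plausible way to trade off accuracy against demographic parity, which is the balance the FMP experiments evaluate.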
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint [15.828830496326885]
Algorithmic fairness in Graph Neural Networks (GNNs) has attracted significant attention.
We propose a novel method, GFairHint, which promotes individual fairness in GNNs.
GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models.
arXiv Detail & Related papers (2023-05-25T00:03:22Z)
- FairNorm: Fair and Fast Graph Neural Network Training [9.492903649862761]
Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks.
It has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups.
This work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning.
arXiv Detail & Related papers (2022-05-20T06:10:27Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering [79.17319280791237]
We propose an adaptive graph pre-training framework for localized collaborative filtering (ADAPT).
ADAPT captures both the common knowledge across different graphs and the uniqueness of each graph, without requiring user/item embeddings to be transferred.
arXiv Detail & Related papers (2021-12-14T06:53:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.