GFairHint: Improving Individual Fairness for Graph Neural Networks via
Fairness Hint
- URL: http://arxiv.org/abs/2305.15622v1
- Date: Thu, 25 May 2023 00:03:22 GMT
- Title: GFairHint: Improving Individual Fairness for Graph Neural Networks via
Fairness Hint
- Authors: Paiheng Xu, Yuhang Zhou, Bang An, Wei Ai, Furong Huang
- Abstract summary: Algorithmic fairness in Graph Neural Networks (GNNs) has attracted significant attention.
We propose a novel method, GFairHint, which promotes individual fairness in GNNs.
GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models.
- Score: 15.828830496326885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the growing concerns about fairness in machine learning and the
impressive performance of Graph Neural Networks (GNNs) on graph data learning,
algorithmic fairness in GNNs has attracted significant attention. While many
existing studies improve fairness at the group level, only a few works promote
individual fairness, which renders similar outcomes for similar individuals. A
desirable framework that promotes individual fairness should (1) balance
between fairness and performance, (2) accommodate two commonly-used individual
similarity measures (externally annotated and computed from input features),
(3) generalize across various GNN models, and (4) be computationally efficient.
Unfortunately, none of the prior work achieves all the desirables. In this
work, we propose a novel method, GFairHint, which promotes individual fairness
in GNNs and achieves all aforementioned desirables. GFairHint learns fairness
representations through an auxiliary link prediction task, and then
concatenates the representations with the learned node embeddings in original
GNNs as a "fairness hint". Through extensive experimental investigations on
five real-world graph datasets under three prevalent GNN models covering both
individual similarity measures above, GFairHint achieves the best fairness
results in almost all combinations of datasets with various backbone models,
while generating comparable utility results, with much less computational cost
compared to the previous state-of-the-art (SoTA) method.
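The abstract's core mechanism — learning fairness representations via an auxiliary link-prediction task, then concatenating them with the backbone GNN's node embeddings as a "fairness hint" — can be sketched in plain Python. This is a minimal illustration of the data flow only; all function names, dimensions, and the placeholder encoder are assumptions for exposition, not the authors' implementation.

```python
# Hedged sketch of the "fairness hint" data flow described in the abstract.
# Plain Python lists stand in for tensors; the real method trains an encoder
# with a link-prediction loss on a fairness graph whose edges connect
# similar individuals.

def fairness_hint_embeddings(node_ids, similarity_edges, dim=2):
    """Stand-in for the auxiliary link-prediction encoder (hypothetical):
    returns a fixed-size fairness embedding per node."""
    return {n: [float(n % (d + 2)) for d in range(dim)] for n in node_ids}

def concat_hint(gnn_embeddings, hint_embeddings):
    """Append each node's fairness hint to its backbone GNN embedding,
    so the downstream prediction layer sees both signals."""
    return {n: gnn_embeddings[n] + hint_embeddings[n] for n in gnn_embeddings}

# Toy usage: two nodes with 2-dim backbone embeddings and 2-dim hints.
gnn_emb = {0: [0.1, 0.2], 1: [0.3, 0.4]}
hints = fairness_hint_embeddings([0, 1], similarity_edges=[(0, 1)], dim=2)
combined = concat_hint(gnn_emb, hints)
```

Because the hint is concatenated rather than mixed into the backbone's layers, the same construction plugs into different GNN architectures — one plausible reading of how the method generalizes across backbone models.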
Related papers
- Towards Fair Graph Representation Learning in Social Networks [20.823461673845756]
We introduce constraints for fair representation learning based on three principles: sufficiency, independence, and separation.
We theoretically demonstrate that our EAGNN method can effectively achieve group fairness.
arXiv Detail & Related papers (2024-10-15T10:57:02Z) - Rethinking Fair Graph Neural Networks from Re-balancing [26.70771023446706]
We find that simple re-balancing methods can easily match or surpass existing fair GNN methods.
We propose FairGB, Fair Graph Neural Network via re-Balancing, which mitigates the unfairness of GNNs by group balancing.
arXiv Detail & Related papers (2024-07-16T11:39:27Z) - GRAPHGINI: Fostering Individual and Group Fairness in Graph Neural
Networks [17.539327573240488]
We introduce for the first time a method for incorporating the Gini coefficient as a measure of fairness to be used within the GNN framework.
Our proposal, GRAPHGINI, works with the two different goals of individual and group fairness in a single system.
arXiv Detail & Related papers (2024-02-20T11:38:52Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimize for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)