GRAPHGINI: Fostering Individual and Group Fairness in Graph Neural Networks
- URL: http://arxiv.org/abs/2402.12937v1
- Date: Tue, 20 Feb 2024 11:38:52 GMT
- Title: GRAPHGINI: Fostering Individual and Group Fairness in Graph Neural Networks
- Authors: Anuj Kumar Sirohi, Anjali Gupta, Sayan Ranu, Sandeep Kumar, Amitabha Bagchi
- Abstract summary: We introduce for the first time a method for incorporating the Gini coefficient as a measure of fairness to be used within the GNN framework.
Our proposal, GRAPHGINI, works with the two different goals of individual and group fairness in a single system.
- Score: 17.539327573240488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the growing apprehension that GNNs, in the absence of fairness
constraints, might produce biased decisions that disproportionately affect
underprivileged groups or individuals. Departing from previous work, we
introduce for the first time a method for incorporating the Gini coefficient as
a measure of fairness to be used within the GNN framework. Our proposal,
GRAPHGINI, works with the two different goals of individual and group fairness
in a single system, while maintaining high prediction accuracy. GRAPHGINI
enforces individual fairness through learnable attention scores that help in
aggregating more information through similar nodes. A heuristic-based maximum
Nash social welfare constraint ensures the maximum possible group fairness.
Both the individual fairness constraint and the group fairness constraint are
stated in terms of a differentiable approximation of the Gini coefficient. This
approximation is a contribution that is likely to be of interest even beyond
the scope of the problem studied in this paper. Unlike other state-of-the-art
methods, GRAPHGINI automatically balances all three optimization objectives
(utility, individual, and group fairness) of the GNN and is free from any manual tuning
of weight parameters. Extensive experimentation on real-world datasets
showcases the efficacy of GRAPHGINI in making significant improvements in
individual fairness compared to all currently available state-of-the-art
methods while maintaining utility and group equality.
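The abstract does not reproduce the approximation itself, but the standard route to a differentiable Gini coefficient is to smooth the absolute value in its pairwise-difference form. The sketch below is illustrative only: the name `smooth_gini` and the `sqrt(t^2 + delta)` surrogate are assumptions, not GRAPHGINI's actual construction.

```python
import torch

def smooth_gini(x: torch.Tensor, delta: float = 1e-6, eps: float = 1e-8) -> torch.Tensor:
    """Differentiable surrogate for the Gini coefficient of a 1-D tensor.

    Exact Gini: G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)), which is
    non-differentiable because of |.|; smoothing it as sqrt(t^2 + delta)
    lets autograd propagate gradients through the fairness term.
    Assumes nonnegative scores x (e.g., per-node utilities).
    """
    n = x.numel()
    diff = x.unsqueeze(0) - x.unsqueeze(1)        # (n, n) pairwise differences
    smooth_abs = torch.sqrt(diff * diff + delta)  # smooth stand-in for |t|
    return smooth_abs.sum() / (2 * n * n * x.mean().clamp_min(eps))
```

A surrogate like this can enter the training objective directly, e.g. `loss = task_loss + smooth_gini(per_node_utility)`, which is one way fairness constraints of this kind can stay end-to-end trainable.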
Related papers
- Rethinking Fair Graph Neural Networks from Re-balancing [26.70771023446706]
We find that simple re-balancing methods can easily match or surpass existing fair GNN methods.
We propose FairGB, Fair Graph Neural Network via re-Balancing, which mitigates the unfairness of GNNs by group balancing.
arXiv Detail & Related papers (2024-07-16T11:39:27Z)
- Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects of our method across various fairness objectives on six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z)
- Individual Fairness Through Reweighting and Tuning [0.23395944472515745]
Inherent bias within society can be amplified and perpetuated by artificial intelligence (AI) systems.
Recently, the Graph Laplacian Regularizer (GLR) has been used as a substitute for the common Lipschitz condition to enhance individual fairness.
In this work, we investigated whether defining a GLR independently on the train and target data could maintain similar accuracy.
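For orientation, a common formulation of the Graph Laplacian Regularizer (a standard one, not necessarily the exact variant used in that paper) penalizes output differences between similar individuals. A minimal sketch, assuming a precomputed similarity matrix `S`:

```python
import torch

def laplacian_fairness_penalty(Y: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Graph Laplacian Regularizer over model outputs.

    Y: (n, k) predictions; S: (n, n) symmetric similarity matrix.
    tr(Y^T L Y) = 0.5 * sum_ij S_ij * ||y_i - y_j||^2 with L = D - S,
    so similar individuals (large S_ij) are pushed toward similar outputs.
    """
    L = torch.diag(S.sum(dim=1)) - S  # combinatorial Laplacian D - S
    return torch.trace(Y.T @ L @ Y)
```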
arXiv Detail & Related papers (2024-05-02T20:15:25Z)
- Bridging the Fairness Divide: Achieving Group and Individual Fairness in Graph Neural Networks [9.806215623623684]
We propose a new concept of individual fairness within groups and a novel framework named Fairness for Group and Individual (FairGI).
Our approach not only outperforms other state-of-the-art models in terms of group fairness and individual fairness within groups, but also exhibits excellent performance in population-level individual fairness.
arXiv Detail & Related papers (2024-04-26T16:26:11Z)
- No prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation [5.183572923833202]
This paper addresses the pervasive issue of inherent bias within Recommendation Systems (RSs) for different demographic groups.
We propose F2PGNN, a novel framework that leverages personalized Graph Neural Networks (GNNs) together with fairness considerations.
We show that F2PGNN mitigates group unfairness by 47% to 99% compared to the state of the art while preserving privacy and maintaining utility.
arXiv Detail & Related papers (2023-12-10T18:33:45Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint [15.828830496326885]
Algorithmic fairness in Graph Neural Networks (GNNs) has attracted significant attention.
We propose a novel method, GFairHint, which promotes individual fairness in GNNs.
GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models.
arXiv Detail & Related papers (2023-05-25T00:03:22Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the fairness promotion process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases propagated through both node attributes and graph structure.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
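As a toy illustration of such perturbations (assuming a binary sensitive attribute stored as a feature column and a GNN callable `model(X, A)`; the paper's counterfactual generation is learned, so this flip is only a sketch):

```python
import torch

def counterfactual_gap(model, X: torch.Tensor, A: torch.Tensor, sens_idx: int) -> torch.Tensor:
    """Mean embedding shift when every node's binary sensitive attribute flips.

    Flipping the column for all nodes perturbs both each node's own
    attribute and its neighbors', matching the spirit of the summary above.
    """
    X_cf = X.clone()
    X_cf[:, sens_idx] = 1.0 - X_cf[:, sens_idx]  # flip the binary sensitive column
    with torch.no_grad():
        return (model(X, A) - model(X_cf, A)).norm(dim=1).mean()
```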
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Group Whitening: Balancing Learning Efficiency and Representational Capacity [98.52552448012598]
Group whitening (GW) exploits the advantages of the whitening operation and avoids the disadvantages of normalization within mini-batches.
We show that GW consistently improves the performance of different architectures, with absolute gains of 1.02% to 1.49% in top-1 accuracy on ImageNet and 1.82% to 3.21% in bounding box AP on COCO.
arXiv Detail & Related papers (2020-09-28T14:00:07Z)
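For context on the whitening operation above, here is a minimal sketch of ZCA-style whitening applied within channel groups of a single sample, so the statistics do not depend on the mini-batch; it follows the general idea of group whitening rather than the paper's reference implementation:

```python
import torch

def group_whitening(x: torch.Tensor, groups: int, eps: float = 1e-5) -> torch.Tensor:
    """ZCA-whiten each channel group of one sample.

    x: (C, M) features of a single sample with M flattened spatial
    positions; statistics are computed per group, not per mini-batch.
    Assumes C is divisible by `groups`.
    """
    C, M = x.shape
    d = C // groups
    out = torch.empty_like(x)
    for g in range(groups):
        xg = x[g * d:(g + 1) * d]
        xg = xg - xg.mean(dim=1, keepdim=True)    # center within the group
        cov = xg @ xg.T / M + eps * torch.eye(d)  # (d, d) regularized covariance
        w, V = torch.linalg.eigh(cov)             # eigendecomposition of cov
        out[g * d:(g + 1) * d] = V @ torch.diag(w.rsqrt()) @ V.T @ xg  # Sigma^(-1/2) @ xg
    return out
```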
This list is automatically generated from the titles and abstracts of the papers on this site.