Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks
- URL: http://arxiv.org/abs/2302.11479v1
- Date: Wed, 22 Feb 2023 16:28:08 GMT
- Title: Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks
- Authors: Indro Spinelli, Riccardo Bianchini, Simone Scardapane
- Abstract summary: Link prediction algorithms tend to disfavor the links between individuals in specific demographic groups.
This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy.
One novelty of DEA is that we can use a discrete yet learnable adjacency matrix in our fine-tuning.
- Score: 9.362130313618797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of graph representation learning as the primary solution for many
different network science tasks led to a surge of interest in the fairness of
this family of methods. Link prediction, in particular, has a substantial
social impact. However, link prediction algorithms tend to increase the
segregation in social networks by disfavoring the links between individuals in
specific demographic groups. This paper proposes a novel way to enforce
fairness on graph neural networks with a fine-tuning strategy. We Drop the
unfair Edges and, simultaneously, we Adapt the model's parameters to those
modifications, DEA in short. We introduce two covariance-based constraints
designed explicitly for the link prediction task. We use these constraints to
guide the optimization process responsible for learning the new "fair"
adjacency matrix. One novelty of DEA is that we can use a discrete yet
learnable adjacency matrix in our fine-tuning. We demonstrate the effectiveness
of our approach on five real-world datasets and show that we can improve both
the accuracy and the fairness of the link prediction tasks. In addition, we
present an in-depth ablation study demonstrating that our training algorithm
for the adjacency matrix can be used to improve link prediction performance
during training. Finally, we compute the relevance of each component of our
framework to show that the combination of the constraints and the training of
the adjacency matrix leads to optimal performance.
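The covariance-based constraints described in the abstract can be illustrated, in spirit, by the following sketch. This is a hypothetical formulation (function and variable names are invented here, and the paper's exact constraint is not reproduced): it penalises the covariance between an edge's predicted link score and an indicator of whether its endpoints belong to different demographic groups.

```python
# Illustrative sketch only, not the authors' exact formulation: a
# covariance-based fairness penalty for link prediction. A large penalty
# means predicted scores correlate with whether an edge crosses groups,
# i.e. the model systematically favors or disfavors inter-group links.

def covariance_penalty(scores, sensitive_pairs):
    """|cov(d, s)| where d[e] = 1 if the edge's endpoints differ in the
    sensitive attribute and s[e] is the predicted link score."""
    n = len(scores)
    d = [1.0 if a != b else 0.0 for a, b in sensitive_pairs]
    mean_s = sum(scores) / n
    mean_d = sum(d) / n
    cov = sum((di - mean_d) * (si - mean_s) for di, si in zip(d, scores)) / n
    return abs(cov)

# Toy check: scores that systematically disfavor inter-group edges
# yield a larger penalty than group-independent scores.
pairs = [("a", "a"), ("a", "b"), ("b", "b"), ("b", "a")]
biased = [0.9, 0.1, 0.8, 0.2]   # low scores on the two mixed-group edges
neutral = [0.5, 0.5, 0.5, 0.5]  # scores independent of group membership
print(round(covariance_penalty(biased, pairs), 3))  # → 0.175
print(covariance_penalty(neutral, pairs))           # → 0.0
```

Minimising such a penalty during fine-tuning, alongside the link prediction loss, is one way a "fair" adjacency matrix and adapted parameters could be optimised jointly.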
Related papers
- Probabilistic Self-supervised Learning via Scoring Rules Minimization [19.347097627898876]
We propose a novel probabilistic self-supervised learning method via Scoring Rule Minimization (ProSMIN) to enhance representation quality and mitigate collapsing representations.
Our method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on large-scale datasets.
arXiv Detail & Related papers (2023-09-05T08:48:25Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing for local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Fairness-Aware Node Representation Learning [9.850791193881651]
This study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs.
Different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations.
Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity.
arXiv Detail & Related papers (2021-06-09T21:12:14Z) - Biased Edge Dropout for Enhancing Fairness in Graph Representation
Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models up to a small or negligible drop in accuracy.
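The biased edge dropout idea can be sketched as follows. This is a minimal illustration under an assumed parameterisation (the function, its arguments, and the probability scheme are invented here, not FairDrop's exact ones): edges connecting same-group nodes are dropped more often than inter-group edges, counteracting homophily with respect to the sensitive attribute.

```python
# Minimal sketch of biased edge dropout (hypothetical parameterisation):
# raise the drop probability for homophilous (same-group) edges and lower
# it for inter-group edges, so surviving edges are less segregated.
import random

def biased_edge_dropout(edges, group, base_p=0.5, bias=0.25, rng=None):
    """Keep each edge with probability 1 - p, where p = base_p + bias for
    same-group edges and base_p - bias for inter-group edges."""
    rng = rng or random.Random(0)
    kept = []
    for u, v in edges:
        p = base_p + bias if group[u] == group[v] else base_p - bias
        if rng.random() >= p:
            kept.append((u, v))
    return kept

group = {0: "x", 1: "x", 2: "y", 3: "y"}
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]  # two homophilous, two inter-group
print(biased_edge_dropout(edges, group))
```

Because the dropout acts only on the edge list, a scheme like this can be plugged in front of many existing GNN training loops, which matches the plug-and-play property claimed for FairDrop.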
arXiv Detail & Related papers (2021-04-29T08:59:36Z) - GCN-ALP: Addressing Matching Collisions in Anchor Link Prediction [40.811988657941946]
The problem of anchor link prediction is formalized to link user data with common ground on user profiles, content, and network structure across social networks.
We propose graph convolution networks with mini-batch strategy, efficiently solving anchor link prediction on matching graph.
arXiv Detail & Related papers (2021-03-19T02:41:55Z) - Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z) - Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partitioning, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.