Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning
- URL: http://arxiv.org/abs/2104.14210v1
- Date: Thu, 29 Apr 2021 08:59:36 GMT
- Title: Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning
- Authors: Indro Spinelli, Simone Scardapane, Amir Hussain, Aurelio Uncini
- Abstract summary: We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be easily plugged into many existing algorithms, is efficient and adaptable, and can be combined with other fairness-inducing solutions.
We show that the proposed algorithm successfully improves the fairness of all models at the cost of, at most, a small or negligible drop in accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning has become a ubiquitous component in many
scenarios, ranging from social network analysis to energy forecasting in smart
grids. In several applications, ensuring the fairness of the node (or graph)
representations with respect to some protected attributes is crucial for their
correct deployment. Yet, fairness in graph deep learning remains
under-explored, with few solutions available. In particular, the tendency of
similar nodes to cluster on several real-world graphs (i.e., homophily) can
dramatically worsen the fairness of these procedures. In this paper, we
propose a biased edge dropout algorithm (FairDrop) to counteract homophily and
improve fairness in graph representation learning. FairDrop can be easily
plugged into many existing algorithms, is efficient and adaptable, and can be
combined with other fairness-inducing solutions. After describing the general
algorithm, we demonstrate its application on two benchmark tasks: a random
walk model for producing node embeddings, and a graph convolutional network
for link prediction. We show that the proposed algorithm successfully improves
the fairness of all models, at the cost of at most a small or negligible drop
in accuracy, and that it compares favourably with existing state-of-the-art
solutions. In
an ablation study, we demonstrate that our algorithm can flexibly interpolate
between biasing towards fairness and an unbiased edge dropout. Furthermore, to
better evaluate the gains, we propose a new dyadic group definition to measure
the bias of a link prediction task when paired with group-based fairness
metrics. In particular, we extend the metric used to measure the bias in the
node embeddings to take into account the graph structure.
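To make the core mechanism concrete, below is a minimal NumPy sketch of a biased edge dropout of this kind, together with a demographic-parity-style gap computed over the dyadic edge groups (intra-group links, whose endpoints share the sensitive attribute, versus inter-group links). The 0.5 ± delta sampling scheme and all function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fair_drop(edge_index, sensitive, delta=0.25, rng=None):
    """Biased edge dropout sketch (not the paper's exact scheme).

    edge_index: (2, E) int array of edges; sensitive: (N,) array of
    protected-attribute labels. delta in [0, 0.5] interpolates between
    an unbiased 50% dropout (delta=0) and a maximally biased one
    (delta=0.5) that removes every intra-group edge and keeps every
    inter-group edge.
    """
    rng = np.random.default_rng() if rng is None else rng
    src, dst = edge_index
    # Dyadic split: intra-group edges connect same-attribute endpoints.
    intra = sensitive[src] == sensitive[dst]
    # Drop intra-group edges with prob 0.5 + delta, inter-group with 0.5 - delta.
    drop_prob = np.where(intra, 0.5 + delta, 0.5 - delta)
    keep = rng.random(src.shape[0]) >= drop_prob
    return edge_index[:, keep]

def dyadic_parity_gap(scores, edge_index, sensitive):
    """Gap in mean predicted link score between the two dyadic groups,
    a demographic-parity-style fairness measure for link prediction."""
    src, dst = edge_index
    intra = sensitive[src] == sensitive[dst]
    return abs(scores[intra].mean() - scores[~intra].mean())

# Toy usage: a 4-cycle with a binary sensitive attribute.
edges = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 0]])
s = np.array([0, 0, 1, 1])
print(fair_drop(edges, s, delta=0.4))  # intra-group edges rarely survive
```

Setting delta = 0 recovers plain unbiased dropout, which is the interpolation the ablation study refers to.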
Related papers
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently
Societal biases against sensitive groups may exist in many real-world graphs.
We present an in-depth analysis of how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic-group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta-learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Fairness-aware Optimal Graph Filter Design [25.145533328758614]
Graphs are mathematical tools that can be used to represent complex real-world interconnected systems.
Machine learning (ML) over graphs has attracted significant attention recently.
We take a fresh look at the problem of bias mitigation in graph-based learning by borrowing insights from graph signal processing.
arXiv Detail & Related papers (2023-10-22T22:40:40Z) - Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural
Networks [9.362130313618797]
Link prediction algorithms tend to disfavor the links between individuals in specific demographic groups.
This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy.
One novelty of DEA (Drop Edges and Adapt) is the use of a discrete yet learnable adjacency matrix during fine-tuning.
arXiv Detail & Related papers (2023-02-22T16:28:08Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph
Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - Graph Learning with Localized Neighborhood Fairness [32.301270877134]
We introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings.
We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings.
arXiv Detail & Related papers (2022-12-22T21:20:43Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the fairness promotion process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Fairness-Aware Node Representation Learning [9.850791193881651]
This study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs.
Different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations.
Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity.
arXiv Detail & Related papers (2021-06-09T21:12:14Z)