Graph Learning with Localized Neighborhood Fairness
- URL: http://arxiv.org/abs/2212.12040v1
- Date: Thu, 22 Dec 2022 21:20:43 GMT
- Title: Graph Learning with Localized Neighborhood Fairness
- Authors: April Chen, Ryan Rossi, Nedim Lipka, Jane Hoffswell, Gromit Chan,
Shunan Guo, Eunyee Koh, Sungchul Kim, Nesreen K. Ahmed
- Abstract summary: We introduce the notion of neighborhood fairness and develop a computational framework for learning such locally fair embeddings.
We demonstrate the effectiveness of the proposed neighborhood fairness framework for a variety of graph machine learning tasks including fair link prediction, link classification, and learning fair graph embeddings.
- Score: 32.301270877134
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning fair graph representations for downstream applications is becoming
increasingly important, but existing work has mostly focused on improving
fairness at the global level by either modifying the graph structure or
objective function without taking into account the local neighborhood of a
node. In this work, we formally introduce the notion of neighborhood fairness
and develop a computational framework for learning such locally fair
embeddings. We argue that the notion of neighborhood fairness is more
appropriate since GNN-based models operate at the local neighborhood level of a
node. Our neighborhood fairness framework has two main components that are
flexible for learning fair graph representations from arbitrary data: the first
aims to construct fair neighborhoods for any arbitrary node in a graph and the
second enables adaptation of these fair neighborhoods to better capture certain
application- or data-dependent constraints, such as allowing neighborhoods to be
more biased towards certain attributes or neighbors in the graph. Furthermore,
while link prediction has been extensively studied, we are the first to
investigate the graph representation learning task of fair link classification.
We demonstrate the effectiveness of the proposed neighborhood fairness
framework for a variety of graph machine learning tasks including fair link
prediction, link classification, and learning fair graph embeddings. Notably,
our approach achieves not only better fairness but also increases the accuracy
in the majority of cases across a wide variety of graphs, problem settings, and
metrics.
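The abstract describes two components: constructing a fair neighborhood for a node, and adapting it toward application-dependent constraints. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch of the idea, where the function name, the `alpha` knob, and the sampling scheme are all assumptions, not the authors' method.

```python
import random
from collections import defaultdict

def fair_neighborhood(neighbors, attr, k, alpha=0.0, seed=0):
    """Sample a size-k neighborhood that balances sensitive groups.

    alpha in [0, 1] interpolates between a strictly balanced
    neighborhood (alpha=0) and the original group proportions
    (alpha=1), loosely mimicking the framework's second,
    application-dependent adaptation component (illustrative only).
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for v in neighbors:
        groups[attr[v]].append(v)
    g = len(groups)
    sampled = []
    for members in groups.values():
        # target count per group: blend uniform share with observed share
        uniform = k / g
        observed = k * len(members) / len(neighbors)
        target = round((1 - alpha) * uniform + alpha * observed)
        # sample with replacement so small groups can reach their target
        sampled.extend(rng.choice(members) for _ in range(target))
    return sampled[:k]
```

With `alpha=0`, a neighborhood dominated by one demographic group is resampled so each group contributes roughly `k / g` neighbors.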
Related papers
- Reproducibility Study Of Learning Fair Graph Representations Via Automated Data Augmentations [0.0]
We explore the performance of the Graphair framework in link prediction tasks.
Our findings underscore Graphair's potential for wider adoption in graph-based learning.
arXiv Detail & Related papers (2024-08-31T11:28:22Z)
- Federated Graph Semantic and Structural Learning [54.97668931176513]
This paper reveals that local client distortion is brought by both node-level semantics and graph-level structure.
We postulate that a well-structural graph neural network possesses similarity for neighbors due to the inherent adjacency relationships.
We transform the adjacency relationships into the similarity distribution and leverage the global model to distill the relation knowledge into the local model.
arXiv Detail & Related papers (2024-06-27T07:08:28Z)
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
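The bias-mitigation step described above can be illustrated with a toy computation: measure the gap between the mean representations of two demographic groups, then shift each group's rows toward the other group's center. This is a simplified stand-in for FMP's optimization, not the paper's code; all names are hypothetical.

```python
import numpy as np

def group_center_gap(H, s):
    """Fairness penalty: squared distance between the mean
    representations of group 0 and group 1 (illustrative)."""
    c0 = H[s == 0].mean(axis=0)
    c1 = H[s == 1].mean(axis=0)
    return float(((c0 - c1) ** 2).sum())

def debias_step(H, s, lr=0.5):
    """One step that moves each group's rows toward the other
    group's center, shrinking the gap between the two centers."""
    c0 = H[s == 0].mean(axis=0)
    c1 = H[s == 1].mean(axis=0)
    H = H.copy()
    H[s == 0] += lr * (c1 - c0)
    H[s == 1] += lr * (c0 - c1)
    return H
```

With `lr=0.5`, a single step makes the two group centers coincide; FMP folds an analogous pull into the message-passing objective rather than applying it post hoc.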
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for the biases caused by these factors.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counter-act homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models up to a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
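The FairDrop entry above describes a biased edge dropout that counter-acts homophily with respect to a sensitive attribute. A minimal sketch of that idea, assuming a simple keep-probability scheme (the function name, `delta` parameterization, and probabilities are illustrative, not the authors' implementation):

```python
import random

def fair_drop(edges, attr, delta=0.25, seed=0):
    """Biased edge dropout: edges between nodes with the SAME
    sensitive attribute are kept with probability 0.5 - delta,
    cross-group edges with probability 0.5 + delta, so the
    retained graph is less homophilous w.r.t. the attribute.
    Illustrative sketch only, not FairDrop's exact scheme."""
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        p_keep = 0.5 - delta if attr[u] == attr[v] else 0.5 + delta
        if rng.random() < p_keep:
            kept.append((u, v))
    return kept
```

At `delta=0` this reduces to uniform 50% edge dropout; increasing `delta` biases retention toward cross-group edges, which is the property the entry claims can be plugged into existing pipelines.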
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.