TopoFair: Linking Topological Bias to Fairness in Link Prediction Benchmarks
- URL: http://arxiv.org/abs/2602.11802v1
- Date: Thu, 12 Feb 2026 10:29:44 GMT
- Title: TopoFair: Linking Topological Bias to Fairness in Link Prediction Benchmarks
- Authors: Lilian Marey, Mathilde Perez, Tiphaine Viard, Charlotte Laclau,
- Abstract summary: Graph link prediction (LP) plays a critical role in socially impactful applications, such as job recommendation and friendship formation. While many fairness-aware methods manipulate graph structures to mitigate prediction disparities, the topological biases inherent to social graph structures remain poorly understood. We propose a novel benchmarking framework for fair LP, centered on the structural biases of the underlying graphs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph link prediction (LP) plays a critical role in socially impactful applications, such as job recommendation and friendship formation. Ensuring fairness in this task is thus essential. While many fairness-aware methods manipulate graph structures to mitigate prediction disparities, the topological biases inherent to social graph structures remain poorly understood and are often reduced to homophily alone. This undermines the generalization potential of fairness interventions and limits their applicability across diverse network topologies. In this work, we propose a novel benchmarking framework for fair LP, centered on the structural biases of the underlying graphs. We begin by reviewing and formalizing a broad taxonomy of topological bias measures relevant to fairness in graphs. In parallel, we introduce a flexible graph generation method that simultaneously ensures fidelity to real-world graph patterns and enables controlled variation across a wide spectrum of structural biases. We apply this framework to evaluate both classical and fairness-aware LP models across multiple use cases. Our results provide a fine-grained empirical analysis of the interactions between predictive fairness and structural biases. This new perspective reveals the sensitivity of fairness interventions to beyond-homophily biases and underscores the need for structurally grounded fairness evaluations in graph learning.
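The abstract does not spell out the generator, but the core idea, producing graphs whose structural bias is a controllable knob and then measuring the realized bias, can be sketched. The snippet below is a hypothetical illustration in Python/networkx using a two-block stochastic block model; the paper's actual generator additionally preserves real-world graph patterns and covers biases beyond homophily.

```python
import networkx as nx

def biased_graph(n_per_group=100, p_intra=0.05, p_inter=0.01, seed=0):
    # Two-block stochastic block model: the p_intra / p_inter gap is the
    # homophily knob. Illustrative stand-in -- the paper's generator also
    # matches real-world patterns and spans biases beyond homophily.
    probs = [[p_intra, p_inter], [p_inter, p_intra]]
    g = nx.stochastic_block_model([n_per_group, n_per_group], probs, seed=seed)
    groups = {v: 0 if v < n_per_group else 1 for v in g.nodes}
    return g, groups

def edge_homophily(g, groups):
    # Fraction of edges joining same-group nodes: one classic entry in a
    # taxonomy of topological bias measures.
    same = sum(groups[u] == groups[v] for u, v in g.edges)
    return same / g.number_of_edges()

g, groups = biased_graph()
print(f"edge homophily: {edge_homophily(g, groups):.2f}")
```

Sweeping `p_inter` relative to `p_intra` then yields a spectrum of homophily levels on which LP models can be compared.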
Related papers
- k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods [2.4331722417973873]
Link prediction plays a central role in graph-based applications, particularly in social recommendation. Real-world graphs often reflect structural biases, most notably homophily, the tendency of nodes with similar attributes to connect. We propose $k$-hop fairness, a structural notion of fairness for LP that assesses disparities conditioned on the distance between nodes.
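As a rough sketch of distance-conditioned disparity (assuming a hypothetical `scores` dict mapping candidate node pairs to model outputs; this is not the paper's implementation):

```python
import networkx as nx
import numpy as np
from collections import defaultdict

def khop_disparity(g, groups, scores, threshold=0.5, max_hops=3):
    # For each hop distance k, compare predicted-link rates between
    # same-group and cross-group candidate pairs.
    lengths = dict(nx.all_pairs_shortest_path_length(g, cutoff=max_hops))
    buckets = defaultdict(lambda: {"same": [], "cross": []})
    for (u, v), s in scores.items():
        k = lengths.get(u, {}).get(v)
        if not k:  # skip pairs unreachable within max_hops, and u == v
            continue
        side = "same" if groups[u] == groups[v] else "cross"
        buckets[k][side].append(s >= threshold)
    return {k: float(np.mean(b["same"]) - np.mean(b["cross"]))
            for k, b in buckets.items() if b["same"] and b["cross"]}
```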
arXiv Detail & Related papers (2026-03-04T09:20:06Z)
- Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity [5.932575574212546]
We argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment.
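To make the limitation concrete: a thresholded demographic parity gap can be blind to disparities that a rank-based view exposes. The sketch below contrasts the two under an assumed boolean mask `pair_is_same` marking same-group candidate pairs; the toy rank gap is not the paper's proposed assessment.

```python
import numpy as np

def dp_gap(scores, pair_is_same, threshold=0.5):
    # Demographic parity on thresholded predictions: gap in positive rates
    # between same-group and cross-group candidate pairs.
    s, m = np.asarray(scores), np.asarray(pair_is_same)
    return abs((s[m] >= threshold).mean() - (s[~m] >= threshold).mean())

def rank_gap(scores, pair_is_same):
    # Rank-aware alternative: gap in mean normalized rank. Two models can
    # tie on dp_gap yet differ sharply here -- the kind of blind spot the
    # paper formalizes.
    s, m = np.asarray(scores), np.asarray(pair_is_same)
    ranks = np.empty(len(s))
    ranks[np.argsort(-s)] = np.arange(1, len(s) + 1)
    ranks /= len(s)
    return abs(ranks[m].mean() - ranks[~m].mean())
```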
arXiv Detail & Related papers (2025-11-09T22:58:29Z)
- Estimating Fair Graphs from Graph-Stationary Data [58.94389691379349]
We consider group and individual fairness for graphs, corresponding to group- and node-level definitions. To evaluate the fairness of a given graph, we provide multiple bias metrics, including novel measurements in the spectral domain. One variant of the proposed FairSpecTemp method exploits commutativity properties of graph stationarity while directly constraining bias. The other implicitly encourages fair estimates by restricting bias in the graph spectrum and is thus more flexible.
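One plausible reading of a spectral-domain bias measurement: check how much of the group-membership signal lives in the graph's low Laplacian frequencies. This is an illustrative guess at the flavor of such metrics, not FairSpecTemp's actual definition.

```python
import numpy as np
import networkx as nx

def spectral_bias(g, groups, k=5):
    # Energy of the centered group indicator captured by the k lowest
    # Laplacian frequencies; values near 1 mean group membership aligns
    # with the graph's smoothest structure. Assumes both groups occur.
    L = nx.laplacian_matrix(g).toarray().astype(float)
    w, V = np.linalg.eigh(L)                 # eigenvectors = graph frequencies
    z = np.array([groups[v] for v in g.nodes], dtype=float)
    z = (z - z.mean()) / np.linalg.norm(z - z.mean())
    coeffs = V.T @ z                         # graph Fourier transform of z
    return float(np.sum(coeffs[:k] ** 2))
```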
arXiv Detail & Related papers (2025-10-08T20:51:57Z)
- Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art methods.
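A minimal sketch of the two stated components of the rebalancing step, inverse-propensity weighting and subgroup-wise feature normalization, assuming plain NumPy arrays (not the released DAID code):

```python
import numpy as np

def rebalance(X, subgroup):
    # Sketch of DAID's stated rebalancing components, not its implementation.
    X = np.asarray(X, dtype=float).copy()
    g = np.asarray(subgroup)
    uniq, counts = np.unique(g, return_counts=True)
    # i) Inverse-propensity weights: rarer subgroups get larger sample weight.
    prop = counts / counts.sum()
    w = 1.0 / prop[np.searchsorted(uniq, g)]
    w /= w.mean()
    # ii) Subgroup-wise normalization: standardize features within each group.
    for s in uniq:
        m = g == s
        X[m] = (X[m] - X[m].mean(axis=0)) / (X[m].std(axis=0) + 1e-8)
    return X, w
```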
arXiv Detail & Related papers (2025-07-03T14:10:02Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model effectively enhances generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
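Schematically, coordinating an environment estimator with a mixture-of-expert predictor might look like the following, where linear experts stand in for GNN experts and the causal learning objective is omitted; this is a structural sketch only, not the paper's model.

```python
import torch
import torch.nn as nn

class MoEPredictor(nn.Module):
    # Soft environment estimator gates a mixture of experts over node
    # representations h (n, dim); schematic only.
    def __init__(self, dim, n_experts=3, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_experts)])
        self.env_estimator = nn.Linear(dim, n_experts)  # infers environment mix

    def forward(self, h):
        gate = self.env_estimator(h).softmax(dim=-1)         # (n, E)
        outs = torch.stack([e(h) for e in self.experts], 1)  # (n, E, C)
        return (gate.unsqueeze(-1) * outs).sum(dim=1)        # (n, C)
```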
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- FairWire: Fair Graph Generation [18.6649050946022]
This work focuses on the analysis and mitigation of structural bias for both real and synthetic graphs.
To alleviate the identified bias factors, we design a novel and versatile fairness regularizer.
We propose a fair graph generation framework, FairWire, by leveraging our fair regularizer design in a generative model.
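In the spirit of such a regularizer (the exact FairWire formulation is not given in this summary), a generic structural-fairness penalty on a generator's predicted adjacency could read:

```python
import torch

def fairness_regularizer(edge_probs, groups):
    # Penalize the gap between expected intra- and inter-group link
    # probabilities in a generated graph. edge_probs: (n, n) generator
    # output; groups: (n,) binary sensitive attribute. Added to the
    # generative loss as: total = recon_loss + lam * this term.
    same = groups.unsqueeze(0) == groups.unsqueeze(1)
    off_diag = ~torch.eye(len(groups), dtype=torch.bool)
    intra = edge_probs[same & off_diag].mean()
    inter = edge_probs[~same].mean()
    return (intra - inter).abs()
```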
arXiv Detail & Related papers (2024-02-06T20:43:00Z)
- FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently [29.457338893912656]
Societal biases against sensitive groups may exist in many real-world graphs.
We present an in-depth analysis on how graph structure bias, node attribute bias, and model parameters may affect the demographic parity of GCNs.
Our insights lead to FairSample, a framework that jointly mitigates the three types of biases.
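FairSample learns its sampling policy; as a simple stand-in, group-balanced neighbor sampling already conveys how sampling can counter structure bias during GCN training (hypothetical helper, assuming an adjacency-list dict `adj`):

```python
import random
from collections import defaultdict

def fair_neighbor_sample(adj, groups, node, fanout=10, rng=random):
    # Give each sensitive group an equal share of the sampling budget so
    # minority neighbors are not crowded out of the GCN aggregation.
    by_group = defaultdict(list)
    for nb in adj[node]:
        by_group[groups[nb]].append(nb)
    if not by_group:
        return []
    budget = max(1, fanout // len(by_group))
    sample = []
    for members in by_group.values():
        sample += rng.sample(members, min(budget, len(members)))
    return sample
```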
arXiv Detail & Related papers (2024-01-26T08:17:12Z)
- Marginal Nodes Matter: Towards Structure Fairness in Graphs [77.25149739933596]
We propose Structural Fair Graph Neural Network (SFairGNN) to achieve structure fairness.
Our experiments show SFairGNN can significantly improve structure fairness while maintaining overall performance in the downstream tasks.
arXiv Detail & Related papers (2023-10-23T03:20:32Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
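One simple instance of fairness-aware augmentation, injecting synthetic cross-group edges to dilute homophily, is sketched below; the paper's frameworks adapt the augmentation to their theoretical bias analysis rather than using this fixed rule.

```python
import random

def add_cross_group_edges(edges, groups, add_ratio=0.1, seed=0):
    # Add synthetic cross-group edges before GNN training to reduce
    # homophily-driven bias in the learned representations.
    rng = random.Random(seed)
    by_group = {}
    for v, g in groups.items():
        by_group.setdefault(g, []).append(v)
    gs = list(by_group)
    present = {frozenset(e) for e in edges}
    target, attempts, added = int(add_ratio * len(edges)), 0, []
    while len(added) < target and attempts < 100 * target and len(gs) > 1:
        ga, gb = rng.sample(gs, 2)
        e = frozenset((rng.choice(by_group[ga]), rng.choice(by_group[gb])))
        if e not in present:
            present.add(e)
            added.append(tuple(e))
        attempts += 1
    return list(edges) + added
```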
arXiv Detail & Related papers (2022-01-21T05:49:15Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases induced by nodes' own and their neighbors' sensitive attributes.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
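The perturbation just described can be checked directly on a trained encoder: flip the sensitive attribute of a node and its neighbors and measure how far its embedding moves. The helper below assumes a PyTorch-style `model(x, edge_index)` returning node embeddings and a boolean `hops_mask` over the node and its neighborhood; the names and interface are illustrative.

```python
import torch

def counterfactual_gap(model, x, edge_index, sens_col, node, hops_mask):
    # Flip the binary sensitive attribute (column sens_col of x) for the
    # node and its neighbors (hops_mask), then measure how far the node's
    # embedding moves. Smaller gaps indicate representations closer to
    # graph counterfactual fairness.
    with torch.no_grad():
        z = model(x, edge_index)
        x_cf = x.clone()
        x_cf[hops_mask, sens_col] = 1.0 - x_cf[hops_mask, sens_col]
        z_cf = model(x_cf, edge_index)
    return (z[node] - z_cf[node]).norm().item()
```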
arXiv Detail & Related papers (2022-01-10T21:43:44Z)