Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity
- URL: http://arxiv.org/abs/2511.06568v2
- Date: Sun, 16 Nov 2025 19:26:28 GMT
- Title: Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity
- Authors: João Mattos, Debolina Halder Lina, Arlei Silva
- Abstract summary: We argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction.
We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment.
- Score: 5.932575574212546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is a fundamental task in graph machine learning, with applications ranging from social recommendation to knowledge graph completion. Fairness in this setting is critical, as biased predictions can exacerbate societal inequalities. Prior work adopts a dyadic definition of fairness, enforcing fairness through demographic parity between intra-group and inter-group link predictions. However, we show that this dyadic framing can obscure underlying disparities across subgroups, allowing systemic biases to go undetected. Moreover, we argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment. Additionally, we propose a lightweight post-processing method combined with decoupled link predictors that effectively mitigates bias and achieves state-of-the-art fairness-utility trade-offs.
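As a concrete illustration of the abstract's central claim, the sketch below computes a dyadic demographic-parity gap between intra-group and inter-group link predictions, then breaks acceptance rates down by subgroup pair. The toy data is deliberately constructed so that opposing subgroup-level gaps cancel in the dyadic aggregate; everything here (the data, threshold, and group counts) is an illustrative assumption, not the paper's implementation.

```python
# Toy illustration: a dyadic demographic-parity check for link prediction,
# and the per-subgroup breakdown that it can obscure.
import numpy as np

rng = np.random.default_rng(0)

n_pairs = 10_000
groups = rng.integers(0, 3, size=(n_pairs, 2))  # subgroup of each endpoint
scores = rng.random(n_pairs)                    # raw link-prediction scores

# Inject opposing subgroup-level biases: (0,1) pairs are over-scored and
# (1,2) pairs are under-scored by the same amount.
scores += 0.3 * ((groups[:, 0] == 0) & (groups[:, 1] == 1))
scores -= 0.3 * ((groups[:, 0] == 1) & (groups[:, 1] == 2))
pred = scores > 0.5                             # thresholded predictions

intra = groups[:, 0] == groups[:, 1]

# Dyadic demographic parity: a single acceptance-rate gap over all pairs,
# which stays near zero here because the injected biases cancel.
print(f"dyadic DP gap: {abs(pred[intra].mean() - pred[~intra].mean()):.3f}")

# Subgroup view: the opposing (0,1) and (1,2) disparities become plainly
# visible once acceptance rates are conditioned on the subgroup pair.
for gu in range(3):
    for gv in range(3):
        mask = (groups[:, 0] == gu) & (groups[:, 1] == gv)
        print(f"({gu},{gv}): acceptance rate = {pred[mask].mean():.3f}")
```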
Related papers
- k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods [2.4331722417973873]
Link prediction plays a central role in graph-based applications, particularly in social recommendation.
Real-world graphs often reflect structural biases, most notably homophily: the tendency of nodes with similar attributes to connect.
We propose $k$-hop fairness, a structural notion of fairness for LP that assesses disparities conditioned on the distance between nodes. (A toy sketch of this distance-conditioned evaluation appears after this list.)
arXiv Detail & Related papers (2026-03-04T09:20:06Z) - Fairness under Graph Uncertainty: Achieving Interventional Fairness with Partially Known Causal Graphs over Clusters of Variables [2.436681150766912]
Causal notions of fairness align with legal requirements, yet many methods assume access to detailed knowledge of the underlying causal graph.
We propose a learning framework that achieves interventional fairness by leveraging a causal graph over clusters of variables.
Our framework strikes a better balance between fairness and accuracy than existing approaches, highlighting its effectiveness under limited causal graph knowledge.
arXiv Detail & Related papers (2026-02-27T02:25:50Z) - TopoFair: Linking Topological Bias to Fairness in Link Prediction Benchmarks [2.227306407687634]
Graph link prediction (LP) plays a critical role in socially impactful applications, such as job recommendation and friendship formation.
While many fairness-aware methods manipulate graph structures to mitigate prediction disparities, the topological biases inherent to social graph structures remain poorly understood.
We propose a novel benchmarking framework for fair LP, centered on the structural biases of the underlying graphs.
arXiv Detail & Related papers (2026-02-12T10:29:44Z) - Identifying and Mitigating Social Bias Knowledge in Language Models [52.52955281662332]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Algorithmic Fairness in Performative Policy Learning: Escaping the Impossibility of Group Fairness [19.183108418687226]
We develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems.
A crucial benefit of this approach is that it is possible to resolve the incompatibilities between conflicting group fairness definitions.
arXiv Detail & Related papers (2024-05-30T19:46:47Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Consistent End-to-End Estimation for Counterfactual Fairness [56.9060492313073]
We propose a novel method for making predictions under counterfactual fairness.
We provide theoretical guarantees that our method effectively ensures counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness. (A toy sensitivity probe in this spirit appears after this list.)
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z) - On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
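The $k$-hop fairness entry above conditions disparities on the distance between nodes rather than pooling all candidate pairs. Below is a minimal sketch of that evaluation pattern on a toy graph, with a common-neighbor count standing in for a trained link predictor; the graph generator, scorer, attribute assignment, and distance cutoff are all assumptions for illustration, not the paper's setup.

```python
# Toy illustration of distance-conditioned (k-hop) fairness evaluation.
import itertools

import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)
group = {v: int(rng.integers(0, 2)) for v in G}  # binary sensitive attribute

def score(u, v):
    """Stand-in link scorer: common-neighbor count."""
    return len(set(G[u]) & set(G[v]))

# Bucket non-adjacent pairs by shortest-path distance k, then compare mean
# scores of intra-group vs. inter-group pairs within each bucket.
dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=4))
buckets = {}
for u, v in itertools.combinations(G, 2):
    if G.has_edge(u, v) or v not in dist[u]:
        continue
    k = dist[u][v]
    key = "intra" if group[u] == group[v] else "inter"
    buckets.setdefault(k, {"intra": [], "inter": []})[key].append(score(u, v))

for k in sorted(buckets):
    b = buckets[k]
    if b["intra"] and b["inter"]:  # skip buckets missing either pair type
        gap = abs(np.mean(b["intra"]) - np.mean(b["inter"]))
        print(f"k={k}: |intra - inter| mean-score gap = {gap:.3f}")
```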
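In the same spirit, the prediction-sensitivity entry above can be caricatured as accumulating how much a model's output moves under small perturbations of an input feature that correlates with a protected attribute. The linear model and the choice of perturbed feature below are toy assumptions, not the paper's ACCUMULATED PREDICTION SENSITIVITY metric itself.

```python
# Toy illustration: accumulated prediction sensitivity to one feature.
import numpy as np

rng = np.random.default_rng(2)

# A fixed linear "classifier" on 5 features; feature 0 plays the role of a
# protected-attribute-correlated input.
w = np.array([2.0, 0.5, -0.3, 0.8, 0.1])

def predict_proba(X):
    """Sigmoid of a linear score, i.e., a logistic-regression-style model."""
    return 1.0 / (1.0 + np.exp(-X @ w))

X = rng.normal(size=(1000, 5))
eps = 1e-3

# Mean absolute change in predicted probability per unit perturbation of
# feature 0, averaged over the data: a simple finite-difference probe.
Xp = X.copy()
Xp[:, 0] += eps
sensitivity = np.abs(predict_proba(Xp) - predict_proba(X)).mean() / eps
print(f"accumulated sensitivity to feature 0: {sensitivity:.3f}")
```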