k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods
- URL: http://arxiv.org/abs/2603.03867v1
- Date: Wed, 04 Mar 2026 09:20:06 GMT
- Title: k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods
- Authors: Lilian Marey, Tiphaine Viard, Charlotte Laclau
- Abstract summary: Link prediction plays a central role in graph-based applications, particularly in social recommendation. Real-world graphs often reflect structural biases, most notably homophily, the tendency of nodes with similar attributes to connect. We propose $k$-hop fairness, a structural notion of fairness for LP that assesses disparities conditioned on the distance between nodes.
- Score: 2.4331722417973873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction (LP) plays a central role in graph-based applications, particularly in social recommendation. However, real-world graphs often reflect structural biases, most notably homophily, the tendency of nodes with similar attributes to connect. While this property can improve predictive performance, it also risks reinforcing existing social disparities. In response, fairness-aware LP methods have emerged, often seeking to mitigate these effects by promoting inter-group connections, that is, links between nodes with differing sensitive attributes (e.g., gender), following the principle of dyadic fairness. However, dyadic fairness overlooks potential disparities within the sensitive groups themselves. To overcome this issue, we propose $k$-hop fairness, a structural notion of fairness for LP that assesses disparities conditioned on the distance between nodes in the graph. We formalize this notion through predictive fairness and structural bias metrics, and propose pre- and post-processing mitigation strategies. Experiments across standard LP benchmarks reveal: (1) a strong tendency of models to reproduce structural biases at different $k$-hops; (2) interdependence between structural biases at different hops when rewiring graphs; and (3) that our post-processing method achieves favorable $k$-hop performance-fairness trade-offs compared to existing fair LP baselines.
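The core idea, assessing prediction disparities conditioned on hop distance, can be illustrated with a minimal sketch. The paper's exact metrics are not reproduced here; the function names (`hop_distance`, `khop_disparity`) and the simple intra- vs inter-group mean-score gap are illustrative assumptions, not the authors' formulation.

```python
from collections import deque

def hop_distance(adj, src):
    """BFS shortest-path distances from src in an unweighted graph.
    adj maps each node to the set of its neighbors."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def khop_disparity(adj, scores, groups, k):
    """Illustrative k-hop-conditioned disparity: the absolute gap between
    the mean predicted link score of intra-group candidate pairs and that
    of inter-group pairs, restricted to pairs at shortest-path distance k.
    scores: {(u, v): predicted probability}; groups: {node: sensitive group}.
    Returns None when either conditioned set is empty.
    """
    dists = {u: hop_distance(adj, u) for u in adj}
    intra, inter = [], []
    for (u, v), p in scores.items():
        if dists[u].get(v) != k:
            continue
        (intra if groups[u] == groups[v] else inter).append(p)
    if not intra or not inter:
        return None
    return abs(sum(intra) / len(intra) - sum(inter) / len(inter))
```

Sweeping `k` over 1, 2, 3, ... yields a disparity profile per hop distance, which is the kind of distance-conditioned view that dyadic fairness (a single aggregate over all pairs) cannot provide.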
Related papers
- TopoFair: Linking Topological Bias to Fairness in Link Prediction Benchmarks [2.227306407687634]
Graph link prediction (LP) plays a critical role in socially impactful applications, such as job recommendation and friendship formation. While many fairness-aware methods manipulate graph structures to mitigate prediction disparities, the topological biases inherent to social graph structures remain poorly understood. We propose a novel benchmarking framework for fair LP, centered on the structural biases of the underlying graphs.
arXiv Detail & Related papers (2026-02-12T10:29:44Z)
- Cauchy-Schwarz Fairness Regularizer [17.898277374771254]
Group fairness in machine learning is often enforced by adding a regularizer that reduces the dependence between model predictions and sensitive attributes. We propose a Cauchy-Schwarz fairness regularizer that penalizes the empirical CS divergence between prediction distributions conditioned on sensitive groups.
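An empirical Cauchy-Schwarz divergence between two groups' prediction-score samples can be sketched with a standard Gaussian-kernel estimator. This is a generic estimator, not necessarily the paper's exact formulation, and the kernel bandwidth below is an arbitrary illustrative choice.

```python
import math

def gaussian_kernel(a, b, sigma=0.5):
    return math.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def cs_divergence(x, y, sigma=0.5):
    """Empirical Cauchy-Schwarz divergence between two score samples,
    D_CS = -log( <p,q>^2 / (<p,p> <q,q>) ), estimated with a Gaussian
    kernel. Non-negative by the Cauchy-Schwarz inequality, and zero
    when the two samples coincide."""
    kxx = sum(gaussian_kernel(a, b, sigma) for a in x for b in x) / len(x) ** 2
    kyy = sum(gaussian_kernel(a, b, sigma) for a in y for b in y) / len(y) ** 2
    kxy = sum(gaussian_kernel(a, b, sigma) for a in x for b in y) / (len(x) * len(y))
    return -math.log(kxy ** 2 / (kxx * kyy))
```

Used as a training regularizer, this quantity would be added (suitably weighted) to the task loss, pushing the score distributions of the two sensitive groups toward each other.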
arXiv Detail & Related papers (2025-12-10T09:39:30Z)
- Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity [5.932575574212546]
We argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment.
arXiv Detail & Related papers (2025-11-09T22:58:29Z)
- Estimating Fair Graphs from Graph-Stationary Data [58.94389691379349]
We consider group and individual fairness for graphs corresponding to group- and node-level definitions. To evaluate the fairness of a given graph, we provide multiple bias metrics, including novel measurements in the spectral domain. One variant of FairSpecTemp exploits commutativity properties of graph stationarity while directly constraining bias. The other implicitly encourages fair estimates by restricting bias in the graph spectrum and is thus more flexible.
arXiv Detail & Related papers (2025-10-08T20:51:57Z)
- Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art
arXiv Detail & Related papers (2025-07-03T14:10:02Z)
- Promoting Fairness in Link Prediction with Graph Enhancement [6.477859104817626]
Link prediction is a crucial task in network analysis, but it has been shown to be prone to biased predictions.
We study the fair link prediction problem, which aims to ensure that the predicted link probability is independent of the sensitive attributes of the connected nodes.
We propose FairLink, a method that learns a fairness-enhanced graph to bypass the need for debiasing during the link predictor's training.
arXiv Detail & Related papers (2024-09-13T09:18:29Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time. We show that enforcing a causal constraint often reduces the disparity between demographic groups. We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Endowing Pre-trained Graph Models with Provable Fairness [49.8431177748876]
We propose a novel adapter-tuning framework that endows pre-trained graph models with provable fairness (called GraphPAR)
Specifically, we design a sensitive semantic augmenter on node representations to extend each node's representation with different sensitive attribute semantics.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., predictions are always fair within a certain range of sensitive attribute semantics.
arXiv Detail & Related papers (2024-02-19T14:16:08Z)
- Marginal Nodes Matter: Towards Structure Fairness in Graphs [77.25149739933596]
We propose Structural Fair Graph Neural Network (SFairGNN) to achieve structure fairness.
Our experiments show SFairGNN can significantly improve structure fairness while maintaining overall performance in the downstream tasks.
arXiv Detail & Related papers (2023-10-23T03:20:32Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for the biases induced by the factors above.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.