Fairness under Graph Uncertainty: Achieving Interventional Fairness with Partially Known Causal Graphs over Clusters of Variables
- URL: http://arxiv.org/abs/2602.23611v1
- Date: Fri, 27 Feb 2026 02:25:50 GMT
- Title: Fairness under Graph Uncertainty: Achieving Interventional Fairness with Partially Known Causal Graphs over Clusters of Variables
- Authors: Yoichi Chikahara
- Abstract summary: Causal notions of fairness align with legal requirements, yet many methods assume access to detailed knowledge of the underlying causal graph. We propose a learning framework that achieves interventional fairness by leveraging a causal graph over \textit{clusters of variables}. Our framework strikes a better balance between fairness and accuracy than existing approaches, highlighting its effectiveness under limited causal graph knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decisions about individuals require predictions that are not only accurate but also fair with respect to sensitive attributes such as gender and race. Causal notions of fairness align with legal requirements, yet many methods assume access to detailed knowledge of the underlying causal graph, which is a demanding assumption in practice. We propose a learning framework that achieves interventional fairness by leveraging a causal graph over \textit{clusters of variables}, which is substantially easier to estimate than a variable-level graph. With possible \textit{adjustment cluster sets} identified from such a cluster causal graph, our framework trains a prediction model by reducing the worst-case discrepancy between interventional distributions across these sets. To this end, we develop a computationally efficient barycenter kernel maximum mean discrepancy (MMD) that scales favorably with the number of sensitive attribute values. Extensive experiments show that our framework strikes a better balance between fairness and accuracy than existing approaches, highlighting its effectiveness under limited causal graph knowledge.
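The fairness penalty described in the abstract is built on the kernel maximum mean discrepancy (MMD) between interventional distributions. As a rough illustration of the base quantity being minimized (a plain two-sample MMD with an RBF kernel, not the paper's barycenter variant; the bandwidth and sample shapes below are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimator of squared MMD: the squared distance between the
    # empirical kernel mean embeddings of the two samples (always >= 0)
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. two clearly different ones
same = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(0.0, 1.0, (200, 2)))
diff = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2)))
```

Driving `mmd2` toward zero across the interventional distributions induced by the candidate adjustment cluster sets is the flavor of objective the abstract describes; the paper's contribution is a barycenter formulation that avoids computing all pairwise discrepancies.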
Related papers
- Cauchy-Schwarz Fairness Regularizer [17.898277374771254]
Group fairness in machine learning is often enforced by adding a regularizer that reduces the dependence between model predictions and sensitive attributes. We propose a Cauchy-Schwarz fairness regularizer that penalizes the empirical CS divergence between prediction distributions conditioned on sensitive groups.
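The Cauchy-Schwarz divergence underlying this regularizer can be estimated from samples via kernel density estimates. A minimal one-dimensional sketch, assuming Gaussian kernels and an illustrative bandwidth (this is not the paper's implementation):

```python
import numpy as np

def gauss(x, y, s=1.0):
    # Gaussian kernel matrix between 1-D samples x and y
    sq = (x[:, None] - y[None, :]) ** 2
    return np.exp(-sq / (2.0 * s * s))

def cs_divergence(x, y, s=1.0):
    # Empirical Cauchy-Schwarz divergence
    #   D_CS(p, q) = -log( <p, q>^2 / (<p, p> <q, q>) )
    # with inner products estimated via kernel means; nonnegative
    # by the Cauchy-Schwarz inequality, zero iff the KDEs coincide.
    cross = gauss(x, y, s).mean()
    return -np.log(cross**2 / (gauss(x, x, s).mean() * gauss(y, y, s).mean()))

rng = np.random.default_rng(1)
d_same = cs_divergence(rng.normal(0.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
d_diff = cs_divergence(rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300))
```

In the fairness setting, `x` and `y` would be model predictions conditioned on two sensitive groups, and `cs_divergence` would be added to the training loss as a penalty.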
arXiv Detail & Related papers (2025-12-10T09:39:30Z) - Breaking the Dyadic Barrier: Rethinking Fairness in Link Prediction Beyond Demographic Parity [5.932575574212546]
We argue that demographic parity does not meet desired properties for fairness assessment in ranking-based tasks such as link prediction. We formalize the limitations of existing fairness evaluations and propose a framework that enables a more expressive assessment.
arXiv Detail & Related papers (2025-11-09T22:58:29Z) - The Statistical Fairness-Accuracy Frontier [50.323024516295725]
Machine learning models must balance accuracy and fairness, but these goals often conflict. A useful tool for understanding this trade-off is the fairness-accuracy (FA) frontier, which characterizes the set of models that cannot be simultaneously improved in both fairness and accuracy. We study the FA frontier in the finite-sample regime, showing how it deviates from its population counterpart and quantifying the worst-case gap between them.
arXiv Detail & Related papers (2025-08-25T03:01:35Z) - A Semidefinite Relaxation Approach for Fair Graph Clustering [1.03590082373586]
This study introduces fair graph clustering within the framework of the disparate impact doctrine.
We employ a semidefinite relaxation approach to approximate the underlying optimization problem.
arXiv Detail & Related papers (2024-10-19T22:51:24Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time. We show that enforcing a causal constraint often reduces the disparity between demographic groups. We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach [44.48385991344273]
We propose a framework for achieving causal fairness based on the notion of interventions when the true causal graph is partially known.
The proposed approach involves modeling fair prediction using a class of causal DAGs that can be learned from observational data combined with domain knowledge.
Results on both simulated and real-world datasets demonstrate the effectiveness of this method.
arXiv Detail & Related papers (2024-01-19T11:20:31Z) - Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
arXiv Detail & Related papers (2022-05-27T13:40:50Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs. In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation reveals a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.