Beyond Procedure: Substantive Fairness in Conformal Prediction
- URL: http://arxiv.org/abs/2602.16794v1
- Date: Wed, 18 Feb 2026 19:00:43 GMT
- Title: Beyond Procedure: Substantive Fairness in Conformal Prediction
- Authors: Pengqi Liu, Zijun Yu, Mouloud Belbahri, Arthur Charpentier, Masoud Asgharian, Jesse C. Cresswell,
- Abstract summary: Conformal prediction (CP) offers distribution-free uncertainty quantification for machine learning models. We analyze the holistic decision-making pipeline to evaluate substantive fairness, the equity of downstream outcomes. Our experiments reveal that label-clustered CP variants consistently deliver superior substantive fairness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal prediction (CP) offers distribution-free uncertainty quantification for machine learning models, yet its interplay with fairness in downstream decision-making remains underexplored. Moving beyond CP as a standalone operation (procedural fairness), we analyze the holistic decision-making pipeline to evaluate substantive fairness, the equity of downstream outcomes. Theoretically, we derive an upper bound that decomposes prediction-set size disparity into interpretable components, clarifying how label-clustered CP helps control method-driven contributions to unfairness. To facilitate scalable empirical analysis, we introduce an LLM-in-the-loop evaluator that approximates human assessment of substantive fairness across diverse modalities. Our experiments reveal that label-clustered CP variants consistently deliver superior substantive fairness. Finally, we empirically show that equalized set sizes, rather than coverage, strongly correlate with improved substantive fairness, enabling practitioners to design fairer CP systems. Our code is available at https://github.com/layer6ai-labs/llm-in-the-loop-conformal-fairness.
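The prediction-set size disparity the abstract decomposes can be illustrated with a minimal split conformal classifier. This is a hedged sketch, not the paper's pipeline: the softmax scores, labels, and binary group attribute below are random stand-ins, and the nonconformity score (one minus the true-class probability) is just one standard choice.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets for classification.
    Nonconformity score: 1 minus the model's probability of the true class."""
    n = len(cal_labels)
    nonconf = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(nonconf, level, method="higher")
    # A label enters the set when its nonconformity is at most the threshold.
    return [np.flatnonzero(1.0 - p <= q) for p in test_probs]

def set_size_disparity(pred_sets, groups):
    """Mean prediction-set size per group and the largest pairwise gap."""
    sizes = np.array([len(s) for s in pred_sets])
    means = {g: float(sizes[groups == g].mean()) for g in np.unique(groups)}
    return means, max(means.values()) - min(means.values())

rng = np.random.default_rng(0)
K = 5
cal_probs = rng.dirichlet(np.ones(K), size=200)    # stand-in softmax outputs
cal_labels = rng.integers(0, K, size=200)
test_probs = rng.dirichlet(np.ones(K), size=100)
groups = rng.integers(0, 2, size=100)              # hypothetical binary attribute
pred_sets = split_conformal_sets(cal_probs, cal_labels, test_probs)
means, gap = set_size_disparity(pred_sets, groups)
```

Under this framing, "substantive fairness via equalized set sizes" corresponds to driving `gap` toward zero while the conformal guarantee keeps marginal coverage near `1 - alpha`.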
Related papers
- Fairness Aware Reward Optimization [78.85867531002346]
We introduce Fairness Aware Reward Optimization (Faro), an in-processing framework that trains reward models under demographic parity, equalized odds, or counterfactual fairness constraints. We provide the first theoretical analysis of reward-level fairness in LLM alignment. Faro significantly reduces bias and harmful generations while maintaining or improving model quality.
arXiv Detail & Related papers (2026-02-08T03:35:49Z) - Decoupling the Effect of Chain-of-Thought Reasoning: A Human Label Variation Perspective [60.45433515408158]
We show that long Chain-of-Thought (CoT) serves as a decisive decision-maker for the top option but fails to function as a granular distribution calibrator for ambiguous tasks. We observe a distinct "decoupled mechanism": while CoT improves distributional alignment, final accuracy is dictated by CoT content.
arXiv Detail & Related papers (2026-01-06T16:26:40Z) - IFFair: Influence Function-driven Sample Reweighting for Fair Classification [20.099162424205936]
We propose IFFair, a pre-processing method based on the influence function. Compared with other fairness optimization approaches, IFFair only uses the influence disparity of training samples on different groups. It achieves a better trade-off between multiple utility and fairness metrics compared with previous pre-processing methods.
arXiv Detail & Related papers (2025-12-08T07:45:55Z) - Counterfactually Fair Conformal Prediction [8.13153220792812]
We develop Counterfactually Fair Conformal Prediction (CF-CP) that produces counterfactually fair prediction sets. Through symmetrization of conformity scores across protected-attribute interventions, we prove that CF-CP results in counterfactually fair prediction sets.
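The symmetrization idea from this abstract can be sketched as follows. Everything here is a hypothetical illustration: `toy_score`, the 1-D inputs, and averaging as the symmetric aggregator are assumptions, not CF-CP's actual construction; the point is only that a score aggregated over all counterfactual versions of an input cannot depend on which version was observed.

```python
def symmetrized_score(score_fn, counterfactual_inputs):
    """Aggregate a conformity score over all counterfactual versions of an
    input (one per protected-attribute value). The mean is symmetric in its
    arguments, so the result is invariant to the observed attribute value."""
    scores = [score_fn(x) for x in counterfactual_inputs]
    return sum(scores) / len(scores)

# Toy conformity score: distance of a 1-D feature from a fixed reference.
toy_score = lambda x: abs(x - 0.5)

# Two counterfactual versions of the same individual (hypothetical values),
# presented in both possible orders.
s_ab = symmetrized_score(toy_score, [0.2, 0.9])
s_ba = symmetrized_score(toy_score, [0.9, 0.2])
```

Because the aggregator is symmetric, `s_ab == s_ba`: swapping which protected-attribute value is "factual" leaves the score, and hence the prediction set it induces, unchanged.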
arXiv Detail & Related papers (2025-10-09T18:32:47Z) - FedCF: Fair Federated Conformal Prediction [4.145290936792853]
We extend the Conformal Fairness (CF) framework to the Federated Learning setting and discuss how we can audit a federated model for fairness. We empirically validate our framework by conducting experiments on several datasets spanning multiple domains.
arXiv Detail & Related papers (2025-09-26T20:35:22Z) - Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - Counterfactual Fairness by Combining Factual and Counterfactual Predictions [18.950415688199993]
In high-stakes domains such as healthcare and hiring, the role of machine learning (ML) in decision-making raises significant fairness concerns. This work focuses on Counterfactual Fairness (CF), which posits that an ML model's outcome on any individual should remain unchanged if they had belonged to a different demographic group. We provide a theoretical study on the inherent trade-off between CF and predictive performance in a model-agnostic manner.
arXiv Detail & Related papers (2024-09-03T15:21:10Z) - Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
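The two EOC properties can be checked empirically with a sketch like the one below. The interval bounds, outcomes, and group labels are hypothetical toy data, not the paper's estimator; the sketch only shows how group-wise and overall coverage, and the gap between groups, would be measured.

```python
import numpy as np

def eoc_gap(lower, upper, y, groups):
    """Coverage of prediction intervals per group, overall coverage, and
    the largest coverage gap between any two groups."""
    covered = (lower <= y) & (y <= upper)
    by_group = {g: float(covered[groups == g].mean()) for g in np.unique(groups)}
    overall = float(covered.mean())
    return by_group, overall, max(by_group.values()) - min(by_group.values())

# Hypothetical outcomes and intervals; the third interval misses its target.
y      = np.array([1.0, 2.0, 3.0, 4.0])
lower  = np.array([0.5, 1.5, 3.5, 3.0])
upper  = np.array([1.5, 2.5, 4.5, 5.0])
groups = np.array([0, 0, 1, 1])
by_group, overall, gap = eoc_gap(lower, upper, y, groups)
```

Property (1) asks that `gap` be small among groups with similar outcomes; property (2) asks that `overall` stay at the predetermined level.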
arXiv Detail & Related papers (2023-11-03T21:19:59Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
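One way to picture the federated setting is each client holding its own calibration scores and the server pooling per-client quantiles into a single threshold. The sketch below is only an illustration under that assumption: the uniform client scores are synthetic, and taking the median of local quantiles is a simplification, not the paper's FCP estimator under partial exchangeability.

```python
import numpy as np

def federated_threshold(client_scores, alpha=0.1):
    """Each client computes a local finite-sample-corrected conformal
    quantile of its calibration scores; the server pools the reports with
    a median (a real deployment would use a calibrated quantile-of-quantiles)."""
    local_qs = []
    for s in client_scores:
        n = len(s)
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        local_qs.append(np.quantile(s, level, method="higher"))
    return float(np.median(local_qs))

# Five hypothetical clients with 50 synthetic calibration scores each.
rng = np.random.default_rng(1)
clients = [rng.uniform(size=50) for _ in range(5)]
q = federated_threshold(clients)
```

Only the local quantiles cross the network, so raw calibration scores never leave a client.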
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations [12.248793742165278]
We bridge the gap between fairness and explainability by presenting a novel perspective of procedure-oriented fairness based on explanations.
We propose a Comprehensive Fairness Algorithm (CFA), which simultaneously fulfills multiple objectives - improving traditional fairness, satisfying explanation fairness, and maintaining the utility performance.
arXiv Detail & Related papers (2022-12-07T18:35:54Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.