Conformal Prediction Sets Can Cause Disparate Impact
- URL: http://arxiv.org/abs/2410.01888v2
- Date: Thu, 13 Feb 2025 19:02:40 GMT
- Title: Conformal Prediction Sets Can Cause Disparate Impact
- Authors: Jesse C. Cresswell, Bhargava Kumar, Yi Sui, Mouloud Belbahri
- Abstract summary: We show that providing prediction sets can lead to disparate impact in decisions. We propose to equalize set sizes across groups, which empirically leads to lower disparate impact.
- Score: 4.61590049339329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal prediction is a statistically rigorous method for quantifying uncertainty in models by having them output sets of predictions, with larger sets indicating more uncertainty. However, prediction sets are not inherently actionable; many applications require a single output to act on, not several. To overcome this limitation, prediction sets can be provided to a human who then makes an informed decision. In any such system it is crucial to ensure the fairness of outcomes across protected groups, and researchers have proposed that Equalized Coverage be used as the standard for fairness. By conducting experiments with human participants, we demonstrate that providing prediction sets can lead to disparate impact in decisions. Disquietingly, we find that providing sets that satisfy Equalized Coverage actually increases disparate impact compared to marginal coverage. Instead of equalizing coverage, we propose to equalize set sizes across groups which empirically leads to lower disparate impact.
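For concreteness, below is a minimal sketch of how split conformal prediction turns softmax probabilities into prediction sets, plus one illustrative way to pick per-group thresholds targeting a common average set size. The 1 - softmax score, the function names, and the size-targeting rule are assumptions made here for illustration, not the authors' exact procedure.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal classification with the 1 - softmax score.

    Returns a boolean matrix: sets[i, k] is True iff class k is in the
    prediction set for test point i. Marginal coverage is ~1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return (1.0 - test_probs) <= q

def equal_size_thresholds(cal_probs, cal_groups, target_size=2.0):
    """Illustrative size-equalizing rule (an assumption, not the paper's
    method): per group, admit the target_size / K fraction of all
    (point, class) scores, so the average calibration set size is
    approximately target_size."""
    K = cal_probs.shape[1]
    return {g: np.quantile((1.0 - cal_probs[cal_groups == g]).ravel(),
                           target_size / K)
            for g in np.unique(cal_groups)}

# Hypothetical usage: the set for test point i in group test_groups[i] is
#   thr = equal_size_thresholds(cal_probs, cal_groups, target_size=2.0)
#   set_i = (1.0 - test_probs[i]) <= thr[test_groups[i]]
```

A size-equalizing threshold trades the coverage guarantee for matched set sizes across groups; the paper's empirical claim is that this reduces disparate impact in downstream human decisions.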
Related papers
- Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores [52.92618442300405]
It is impossible to achieve exact, distribution-free conditional coverage in finite samples.
We propose an alternative conformal prediction algorithm that targets coverage where it matters most.
arXiv Detail & Related papers (2025-01-17T12:01:56Z) - Bin-Conditional Conformal Prediction of Fatalities from Armed Conflict [0.5312303275762104]
We introduce bin-conditional conformal prediction (BCCP), which enhances standard conformal prediction by ensuring consistent coverage rates across user-defined subsets.
Compared to standard conformal prediction, BCCP offers improved local coverage, though this comes at the cost of slightly wider prediction intervals.
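As a rough illustration of the idea (not the paper's exact construction), the sketch below calibrates a separate residual quantile inside each user-defined bin; binning on the model's point prediction and the absolute-residual score are simplifying assumptions made here.

```python
import numpy as np

def bin_conditional_intervals(cal_pred, cal_y, test_pred, bin_edges, alpha=0.1):
    """Bin-conditional split conformal regression (sketch): coverage of
    ~1 - alpha holds within each bin, not just marginally. Assumes every
    test bin contains calibration points."""
    cal_bin = np.digitize(cal_pred, bin_edges)
    test_bin = np.digitize(test_pred, bin_edges)
    lo = np.empty_like(test_pred, dtype=float)
    hi = np.empty_like(test_pred, dtype=float)
    for b in np.unique(test_bin):
        # Absolute residuals of the calibration points falling in bin b.
        res = np.abs(cal_y[cal_bin == b] - cal_pred[cal_bin == b])
        n = len(res)
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(res, level, method="higher")
        mask = test_bin == b
        lo[mask] = test_pred[mask] - q
        hi[mask] = test_pred[mask] + q
    return lo, hi
```

Each bin pays the finite-sample correction on its own, smaller calibration set, which is one way to see why the intervals come out slightly wider than under a single global calibration.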
arXiv Detail & Related papers (2024-10-18T14:41:42Z) - Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
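Stated in symbols, the two properties read roughly as follows; the tolerance epsilon and the conditioning notation are a paraphrase, not the paper's formal definition.

```latex
% Equal Opportunity of Coverage (paraphrased; \epsilon is our notation).
\[
\bigl|\Pr\bigl(Y \in C(X) \mid Y = y,\, A = a\bigr)
     - \Pr\bigl(Y \in C(X) \mid Y = y,\, A = a'\bigr)\bigr| \le \epsilon
\quad \text{for all groups } a, a' \text{ and similar outcomes } y,
\]
\[
\Pr\bigl(Y \in C(X)\bigr) \ge 1 - \alpha .
\]
```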
arXiv Detail & Related papers (2023-11-03T21:19:59Z) - Conformal Prediction for Deep Classifier via Label Ranking [29.784336674173616]
Conformal prediction is a statistical framework that generates prediction sets with a desired coverage guarantee.
We propose a novel algorithm named $\textit{Sorted Adaptive Prediction Sets}$ (SAPS).
SAPS discards all the probability values except for the maximum softmax probability.
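A sketch of a SAPS-style score consistent with this description: only the maximum softmax probability is retained, and lower-ranked labels pay a rank-based penalty. The penalty weight `lam` and the uniform randomization `u` are patterned on adaptive prediction sets and are assumptions here, not necessarily the paper's exact definition.

```python
import numpy as np

def saps_style_score(probs, labels, lam=0.1, rng=None):
    """Nonconformity score using only the max softmax probability and the
    rank of the candidate label (1 = most probable class); all other
    probability values are discarded, as the summary describes."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(labels)
    u = rng.uniform(size=n)  # randomization for exact coverage
    p_max = probs.max(axis=1)
    # Rank of each candidate label under the softmax ordering.
    rank = (probs > probs[np.arange(n), labels][:, None]).sum(axis=1) + 1
    # Top-ranked label: randomized share of p_max; others: rank penalty.
    return np.where(rank == 1, u * p_max, p_max + (rank - 2 + u) * lam)
```

Scoring every candidate label at test time and thresholding at a calibrated quantile (as in the split-conformal sketch after the abstract above) then yields the prediction sets.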
arXiv Detail & Related papers (2023-10-10T08:54:14Z) - Arbitrariness Lies Beyond the Fairness-Accuracy Frontier [3.383670923637875]
We show that state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics.
We propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions.
arXiv Detail & Related papers (2023-06-15T18:15:46Z) - On the Expected Size of Conformal Prediction Sets [24.161372736642157]
We theoretically quantify the expected size of the prediction sets under the split conformal prediction framework.
As this precise formulation usually cannot be calculated directly, we derive point estimates and high-probability interval bounds.
We corroborate our theoretical results with experiments on real-world datasets for both regression and classification problems.
arXiv Detail & Related papers (2023-06-12T17:22:57Z) - Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision [0.0]
Traditionally, conformal prediction inference requires a data-independent specification of miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
arXiv Detail & Related papers (2023-04-12T20:56:43Z) - Conformal Off-Policy Prediction in Contextual Bandits [54.67508891852636]
Conformal off-policy prediction can output reliable predictive intervals for the outcome under a new target policy.
We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup.
arXiv Detail & Related papers (2022-06-09T10:39:33Z) - Fair Bayes-Optimal Classifiers Under Predictive Parity [33.648053823193855]
This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups.
We propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied.
arXiv Detail & Related papers (2022-05-15T04:58:10Z) - Attainability and Optimality: The Equalized Odds Fairness Revisited [8.44348159032116]
We consider the attainability of the Equalized Odds notion of fairness.
For classification, we prove that compared to enforcing fairness by post-processing, one can always benefit from exploiting all available features.
While performance prediction can attain Equalized Odds with theoretical guarantees, we also discuss its limitation and potential negative social impacts.
arXiv Detail & Related papers (2022-02-24T01:30:31Z) - Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings [0.0]
In performative prediction settings, predictors are precisely intended to induce distribution shift.
In criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes.
We show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
arXiv Detail & Related papers (2022-02-10T14:09:02Z) - Selective Regression Under Fairness Criteria [30.672082160544996]
In some cases, the performance of the minority group can decrease as coverage is reduced.
We show that such unwanted behavior can be avoided if we can construct features satisfying the sufficiency criterion.
arXiv Detail & Related papers (2021-10-28T19:05:12Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
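For reference, here is a minimal sketch of the standard doubly-robust pseudo-outcome that such procedures typically build on; the clipping constant and variable names are assumptions here, and the paper's runtime-confounding variant differs in which features each nuisance model may condition on.

```python
import numpy as np

def dr_pseudo_outcome(y, a, mu1_hat, pi_hat, eps=1e-3):
    """Doubly-robust pseudo-outcome for the potential outcome under
    treatment A = 1: its conditional mean is correct if either the
    outcome model mu1_hat or the propensity model pi_hat is right.

    y: observed outcomes; a: binary treatment indicators;
    mu1_hat: predicted outcome under treatment; pi_hat: P(A = 1 | X).
    """
    return mu1_hat + a * (y - mu1_hat) / np.clip(pi_hat, eps, None)

# A counterfactual predictor is then fit by regressing this pseudo-outcome
# on the features that will actually be available at prediction time.
```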
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.