Conformal Prediction Sets Can Cause Disparate Impact
- URL: http://arxiv.org/abs/2410.01888v1
- Date: Wed, 2 Oct 2024 18:00:01 GMT
- Title: Conformal Prediction Sets Can Cause Disparate Impact
- Authors: Jesse C. Cresswell, Bhargava Kumar, Yi Sui, Mouloud Belbahri
- Abstract summary: Conformal prediction is a promising method for quantifying the uncertainty of machine learning models.
We show that providing prediction sets to human decision-makers can increase the unfairness of their decisions.
Instead of equalizing coverage, we propose equalizing set sizes across groups, which empirically leads to fairer outcomes.
- Score: 4.61590049339329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although conformal prediction is a promising method for quantifying the uncertainty of machine learning models, the prediction sets it outputs are not inherently actionable. Many applications require a single output to act on, not several. To overcome this, prediction sets can be provided to a human who then makes an informed decision. In any such system it is crucial to ensure the fairness of outcomes across protected groups, and researchers have proposed that Equalized Coverage be used as the standard for fairness. By conducting experiments with human participants, we demonstrate that providing prediction sets can increase the unfairness of their decisions. Disquietingly, we find that providing sets that satisfy Equalized Coverage actually increases unfairness compared to marginal coverage. Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes.
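To make the setting concrete, here is a minimal split-conformal sketch. This is not the authors' code: the classifier probabilities, labels, and protected-group assignments below are all synthetic stand-ins for illustration. It calibrates a single marginal threshold, builds prediction sets, and also computes per-group thresholds of the kind used for Equalized Coverage; the paper's proposal would instead tune each group's threshold so that average set sizes match across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected quantile used in split conformal prediction."""
    n = len(scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_sets(probs, qhat):
    """Include every label whose nonconformity score 1 - p is at most qhat."""
    return probs >= 1 - qhat  # boolean mask of shape (n_samples, n_classes)

# Synthetic calibration data standing in for a trained classifier's outputs.
n_cal, n_classes = 500, 5
logits = rng.normal(size=(n_cal, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, n_classes, size=n_cal)
groups = rng.integers(0, 2, size=n_cal)  # hypothetical protected-group labels

alpha = 0.1  # target miscoverage: sets should contain the true label 90% of the time
cal_scores = 1 - probs[np.arange(n_cal), labels]

# Marginal coverage: one threshold shared by all groups.
qhat = conformal_quantile(cal_scores, alpha)
sets = prediction_sets(probs, qhat)
coverage = sets[np.arange(n_cal), labels].mean()
avg_size = sets.sum(axis=1).mean()

# Equalized Coverage: calibrate a separate threshold within each group,
# so each group attains 1 - alpha coverage on its own.
qhat_by_group = {g: conformal_quantile(cal_scores[groups == g], alpha)
                 for g in np.unique(groups)}
```

On held-out data the marginal threshold yields sets containing the true label with probability at least 1 - alpha; the per-group thresholds give that guarantee within each group but can produce systematically different set sizes across groups, which is the disparity the paper investigates.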
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z) - Arbitrariness Lies Beyond the Fairness-Accuracy Frontier [3.383670923637875]
We show that state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics.
We propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions.
arXiv Detail & Related papers (2023-06-15T18:15:46Z) - Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision [0.0]
Traditionally, conformal prediction inference requires a data-independent specification of miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
arXiv Detail & Related papers (2023-04-12T20:56:43Z) - Fair Bayes-Optimal Classifiers Under Predictive Parity [33.648053823193855]
This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups.
We propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied.
arXiv Detail & Related papers (2022-05-15T04:58:10Z) - Attainability and Optimality: The Equalized Odds Fairness Revisited [8.44348159032116]
We consider the attainability of the Equalized Odds notion of fairness.
For classification, we prove that compared to enforcing fairness by post-processing, one can always benefit from exploiting all available features.
While performance prediction can attain Equalized Odds with theoretical guarantees, we also discuss its limitation and potential negative social impacts.
arXiv Detail & Related papers (2022-02-24T01:30:31Z) - Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings [0.0]
In performative prediction settings, predictors are precisely intended to induce distribution shift.
In criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes.
We show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
arXiv Detail & Related papers (2022-02-10T14:09:02Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of those factors are unavailable to the model at prediction time (runtime confounding).
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.