Fair When Trained, Unfair When Deployed: Observable Fairness Measures
are Unstable in Performative Prediction Settings
- URL: http://arxiv.org/abs/2202.05049v1
- Date: Thu, 10 Feb 2022 14:09:02 GMT
- Title: Fair When Trained, Unfair When Deployed: Observable Fairness Measures
are Unstable in Performative Prediction Settings
- Authors: Alan Mishler, Niccolò Dalmasso
- Abstract summary: In performative prediction settings, predictors are precisely intended to induce distribution shift.
In criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes.
We show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many popular algorithmic fairness measures depend on the joint distribution
of predictions, outcomes, and a sensitive feature like race or gender. These
measures are sensitive to distribution shift: a predictor which is trained to
satisfy one of these fairness definitions may become unfair if the distribution
changes. In performative prediction settings, however, predictors are precisely
intended to induce distribution shift. For example, in many applications in
criminal justice, healthcare, and consumer finance, the purpose of building a
predictor is to reduce the rate of adverse outcomes such as recidivism,
hospitalization, or default on a loan. We formalize the effect of such
predictors as a type of concept shift (a particular variety of distribution
shift) and show both theoretically and via simulated examples how this causes
predictors which are fair when they are trained to become unfair when they are
deployed. We further show how many of these issues can be avoided by using
fairness definitions that depend on counterfactual rather than observable
outcomes.
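To make the mechanism concrete, the following is a minimal, hypothetical simulation in Python/NumPy (not the authors' code; the data-generating process, thresholds, and intervention effect size are all illustrative assumptions). A score-threshold predictor is post-processed so that false positive rates are equal across groups at training time; at deployment, flagged individuals receive an intervention that lowers their outcome probability, which shifts P(Y | X, D) and breaks the observable FPR parity, while the same measure computed against the untreated counterfactual outcome Y(0) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy population: sensitive attribute A, a risk feature X whose distribution
# differs by group, and the same outcome model P(Y=1 | X) in both groups.
A = rng.binomial(1, 0.5, n)
X = rng.normal(0.5 * A, 1.0, n)
p_y = 1.0 / (1.0 + np.exp(-(X - 0.25)))   # P(Y=1 | X)
Y0 = rng.binomial(1, p_y)                  # outcome with no intervention, i.e. Y(0)

def fpr(D, Y, group_mask):
    """False positive rate P(D=1 | Y=0) within a group."""
    neg = group_mask & (Y == 0)
    return D[neg].mean()

# Post-process to group-specific thresholds so training-time FPRs are (approximately) equal.
thr0 = 0.5
target = fpr((p_y > thr0).astype(int), Y0, A == 0)
grid = np.linspace(0.2, 0.9, 351)
thr1 = min(grid, key=lambda t: abs(fpr((p_y > t).astype(int), Y0, A == 1) - target))
D = np.where(A == 1, p_y > thr1, p_y > thr0).astype(int)

print("FPR gap at training time:",
      abs(fpr(D, Y0, A == 0) - fpr(D, Y0, A == 1)))        # ~0 by construction

# Deployment is performative: flagged individuals receive an intervention that lowers
# their outcome probability, changing P(Y | X, D) -- the concept shift in the abstract.
effect = 0.6                                                # assumed relative risk reduction
Y_dep = rng.binomial(1, np.where(D == 1, (1 - effect) * p_y, p_y))

print("FPR gap after deployment:",
      abs(fpr(D, Y_dep, A == 0) - fpr(D, Y_dep, A == 1)))   # observable measure is now unfair
print("Counterfactual FPR gap (against Y(0)):",
      abs(fpr(D, Y0, A == 0) - fpr(D, Y0, A == 1)))         # unchanged
```

The gap reappears because the intervention converts a different share of flagged positives into observed negatives in each group, inflating the two groups' observed false positive rates unevenly, while the counterfactual outcomes Y(0) are untouched by the predictor.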
Related papers
- Conformal Prediction Sets Can Cause Disparate Impact [4.61590049339329]
Conformal prediction is a promising method for quantifying the uncertainty of machine learning models.
We show that providing prediction sets can increase the unfairness of the resulting decisions.
Instead of equalizing coverage, we propose to equalize set sizes across groups, which empirically leads to fairer outcomes.
arXiv Detail & Related papers (2024-10-02T18:00:01Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Counterfactual Fairness for Predictions using Generative Adversarial Networks [28.65556399421874]
We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to satisfy counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test-time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
arXiv Detail & Related papers (2022-05-31T22:16:44Z)
- Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers [2.0625936401496237]
Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment.
We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers.
Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
arXiv Detail & Related papers (2022-02-09T15:06:45Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference (see the DoC sketch after this list).
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, which should nonetheless be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation [25.97333838935589]
Uncertainty estimates are an important tool for helping autonomous agents or human decision makers understand and leverage predictive models.
Existing algorithms can overestimate certainty, possibly yielding a false sense of confidence in the predictive model.
arXiv Detail & Related papers (2020-02-29T20:31:04Z)
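As a companion to the Predicting with Confidence on Unseen Distributions entry above, here is a minimal, hypothetical sketch of the difference-of-confidences (DoC) idea (not that paper's code; the model, data, and corruption-style shift are stand-ins): compare a classifier's average top-class confidence on a labeled source validation set with its average confidence on an unlabeled shifted set, and treat the difference as a label-free signal of how much accuracy has changed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])   # assumed ground-truth weights

def sample(n):
    X = rng.normal(0.0, 1.0, (n, 5))
    y = (X @ w_true + rng.normal(0.0, 1.5, n) > 0).astype(int)
    return X, y

X_tr, y_tr = sample(20_000)
X_val, y_val = sample(20_000)                    # labeled source validation set

# Shifted deployment inputs: the signal is attenuated and noise is added
# (labels unchanged), a stand-in for corruption-style shifts.
X_cln, y_shift = sample(20_000)
X_shift = 0.6 * X_cln + rng.normal(0.0, 0.6, X_cln.shape)

clf = LogisticRegression().fit(X_tr, y_tr)

def avg_conf_and_acc(X, y):
    proba = clf.predict_proba(X)
    return proba.max(axis=1).mean(), (proba.argmax(axis=1) == y).mean()

conf_val, acc_val = avg_conf_and_acc(X_val, y_val)
conf_shift, acc_shift = avg_conf_and_acc(X_shift, y_shift)

# Difference of confidences: a label-free proxy for the accuracy change under the shift.
print(f"actual accuracy change: {acc_val - acc_shift:.3f}")
print(f"DoC:                    {conf_val - conf_shift:.3f}")
```

In this linear toy the estimate is rough, but it moves in the same direction as the true accuracy drop without requiring labels on the shifted data, which is what makes DoC usable as a deployment-time monitor.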
This list is automatically generated from the titles and abstracts of the papers on this site.